Patent 11861423

DESCRIPTION OF EMBODIMENTS

Example methods, apparatus, and products for an artificial intelligence and machine learning infrastructure in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning with FIG. 1A. FIG. 1A illustrates an example system for data storage, in accordance with some implementations. System 100 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 100 may include the same, more, or fewer elements configured in the same or different manner in other implementations.

System 100 includes a number of computing devices 164A-B. Computing devices (also referred to as “client devices” herein) may be embodied as, for example, a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices 164A-B may be coupled for data communications to one or more storage arrays 102A-B through a storage area network (‘SAN’) 158 or a local area network (‘LAN’) 160. The SAN 158 may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN 158 may include Fibre Channel, Ethernet, InfiniBand, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN 158 may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN 158 is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices 164A-B and storage arrays 102A-B. The LAN 160 may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like.
Data communication protocols for use in LAN 160 may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like. Storage arrays 102A-B may provide persistent data storage for the computing devices 164A-B. Storage array 102A may be contained in a chassis (not shown), and storage array 102B may be contained in another chassis (not shown), in implementations. Storage arrays 102A and 102B may include one or more storage array controllers 110A-D (also referred to as “controller” herein). A storage array controller 110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers 110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices 164A-B to storage array 102A-B, erasing data from storage array 102A-B, retrieving data from storage array 102A-B and providing data to computing devices 164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth. Storage array controller 110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), a System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters.
Storage array controller 110A-D may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160. In some implementations, storage array controller 110A-D may be independently coupled to the LAN 160. In implementations, storage array controller 110A-D may include an I/O controller or the like that couples the storage array controller 110A-D for data communications, through a midplane (not shown), to a persistent storage resource 170A-B (also referred to as a “storage resource” herein). The persistent storage resource 170A-B may include any number of storage drives 171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource 170A-B may be configured to receive, from the storage array controller 110A-D, data to be stored in the storage drives 171A-F. In some examples, the data may originate from computing devices 164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive 171A-F. In implementations, the storage array controller 110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives 171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller 110A-D writes data directly to the storage drives 171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like.
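The write-buffering idea described above can be sketched in a few lines. This is an illustrative toy model, not the patented implementation: the class, capacity parameter, and destage policy are all hypothetical simplifications of a controller that acknowledges writes once they land in fast NVRAM and later destages them to slower drives.

```python
# Hypothetical sketch: a controller acknowledges a write as soon as it is
# buffered in battery-backed NVRAM, then destages buffered writes to the
# slower persistent drives. Names and the flush policy are illustrative.

class StorageController:
    def __init__(self, nvram_capacity=4):
        self.nvram = []                  # fast write buffer (address, data)
        self.nvram_capacity = nvram_capacity
        self.drives = {}                 # address -> data, the slow tier

    def write(self, address, data):
        """Buffer the write in NVRAM and acknowledge with low latency."""
        self.nvram.append((address, data))
        if len(self.nvram) >= self.nvram_capacity:
            self.destage()
        return "ack"

    def destage(self):
        """Flush all buffered writes to the storage drives."""
        for address, data in self.nvram:
            self.drives[address] = data
        self.nvram.clear()

ctrl = StorageController(nvram_capacity=2)
assert ctrl.write(0x10, b"a") == "ack"   # acknowledged while only in NVRAM
assert 0x10 not in ctrl.drives
ctrl.write(0x20, b"b")                   # buffer full, triggers destage
assert ctrl.drives == {0x10: b"a", 0x20: b"b"}
```

The point of the sketch is the ordering: the acknowledgement returns before the data reaches the drives, which is where the latency improvement comes from.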
In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives 171A-F. In implementations, storage drive 171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive 171A-F may correspond to non-disk storage media. For example, the storage drive 171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive 171A-F may include mechanical or spinning hard disks, such as hard-disk drives (‘HDDs’). In some implementations, the storage array controllers 110A-D may be configured for offloading device management responsibilities from storage drive 171A-F in storage array 102A-B. For example, storage array controllers 110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives 171A-F. The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller 110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives 171A-F may be stored in one or more particular memory blocks of the storage drives 171A-F that are selected by the storage array controller 110A-D.
The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers 110A-D in conjunction with storage drives 171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers 110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171A-F. In implementations, storage array controllers 110A-D may offload device management responsibilities from storage drives 171A-F of storage array 102A-B by retrieving, from the storage drives 171A-F, control information describing the state of one or more memory blocks in the storage drives 171A-F. Retrieving the control information from the storage drives 171A-F may be carried out, for example, by the storage array controller 110A-D querying the storage drives 171A-F for the location of control information for a particular storage drive 171A-F. The storage drives 171A-F may be configured to execute instructions that enable the storage drive 171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive 171A-F and may cause the storage drive 171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives 171A-F. The storage drives 171A-F may respond by sending a response message to the storage array controller 110A-D that includes the location of control information for the storage drive 171A-F.
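The tagging-and-scan scheme described above can be illustrated with a minimal sketch. The tag value, block layout, and function name below are hypothetical, not taken from the patent; the only idea carried over is that blocks holding control information are marked with an identifier so a drive-side scan can report their locations.

```python
# Hypothetical sketch: memory blocks carrying control information start with
# a known identifier, so the drive can scan a portion (here, the header) of
# each block and report the matching locations to the array controller.

CONTROL_TAG = b"CTRL"   # illustrative identifier, not from the patent

def find_control_blocks(blocks):
    """Return the indices of memory blocks whose header carries the tag."""
    return [i for i, blk in enumerate(blocks) if blk.startswith(CONTROL_TAG)]

drive_blocks = [b"DATA...", b"CTRL{state}", b"DATA...", b"CTRL{state}"]
assert find_control_blocks(drive_blocks) == [1, 3]
```

A real drive would scan only a fixed-size header of each block rather than whole blocks, and could return the list in a response message to the controller, as the text describes.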
Responsive to receiving the response message, storage array controllers 110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives 171A-F. In other implementations, the storage array controllers 110A-D may further offload device management responsibilities from storage drives 171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive 171A-F (e.g., the controller (not shown) associated with a particular storage drive 171A-F). A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive 171A-F, ensuring that data is written to memory blocks within the storage drive 171A-F in such a way that adequate wear leveling is achieved, and so forth. In implementations, storage array 102A-B may implement two or more storage array controllers 110A-D. For example, storage array 102A may include storage array controller 110A and storage array controller 110B. At a given instance, a single storage array controller 110A-D (e.g., storage array controller 110A) of a storage system 100 may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers 110A-D (e.g., storage array controller 110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource 170A-B (e.g., writing data to persistent storage resource 170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller.
For instance, the secondary controller may not have permission to alter data in persistent storage resource 170A-B when the primary controller has the right. The status of storage array controllers 110A-D may change. For example, storage array controller 110A may be designated with secondary status, and storage array controller 110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller 110A, may serve as the primary controller for one or more storage arrays 102A-B, and a secondary controller, such as storage array controller 110B, may serve as the secondary controller for the one or more storage arrays 102A-B. For example, storage array controller 110A may be the primary controller for storage array 102A and storage array 102B, and storage array controller 110B may be the secondary controller for storage arrays 102A and 102B. In some implementations, storage array controllers 110C and 110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers 110C and 110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers 110A and 110B, respectively) and storage array 102B. For example, storage array controller 110A of storage array 102A may send a write request, via SAN 158, to storage array 102B. The write request may be received by both storage array controllers 110C and 110D of storage array 102B. Storage array controllers 110C and 110D facilitate the communication, e.g., send the write request to the appropriate storage drive 171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers.
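The primary/secondary rights model above can be sketched as follows. This is a minimal illustration under assumed names (the class, the status strings, and the swap helper are all hypothetical): only the controller holding primary status may alter persistent storage, and the designation can move between controllers.

```python
# Hypothetical sketch of the rights model: writes to persistent storage are
# permitted only for the controller currently designated primary, and the
# primary/secondary designation can be swapped (e.g., on failover).

class ArrayController:
    def __init__(self, name, status):
        self.name, self.status = name, status

    def write(self, store, key, value):
        """Alter persistent storage; denied unless this controller is primary."""
        if self.status != "primary":
            raise PermissionError(f"{self.name} is secondary; write denied")
        store[key] = value

def swap_status(a, b):
    """Exchange the primary/secondary designations of two controllers."""
    a.status, b.status = b.status, a.status

store = {}
c110a = ArrayController("110A", "primary")
c110b = ArrayController("110B", "secondary")
c110a.write(store, "k1", "v1")           # primary may write
try:
    c110b.write(store, "k2", "v2")       # secondary is refused
except PermissionError:
    pass
swap_status(c110a, c110b)                # 110B becomes primary
c110b.write(store, "k2", "v2")
assert store == {"k1": "v1", "k2": "v2"}
```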
In implementations, storage array controllers 110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives 171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array 102A-B. The storage array controllers 110A-D may be coupled to the midplane via one or more data communication links, and the midplane may be coupled to the storage drives 171A-F and the NVRAM devices via one or more data communications links. The data communications links described herein are collectively illustrated by data communications links 108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example.

FIG. 1B illustrates an example system for data storage, in accordance with some implementations. Storage array controller 101 illustrated in FIG. 1B may be similar to the storage array controllers 110A-D described with respect to FIG. 1A. In one example, storage array controller 101 may be similar to storage array controller 110A or storage array controller 110B. Storage array controller 101 includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller 101 may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements of FIG. 1A may be included below to help illustrate features of storage array controller 101. Storage array controller 101 may include one or more processing devices 104 and random access memory (‘RAM’) 111. Processing device 104 (or controller 101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like.
More particularly, the processing device 104 (or controller 101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 104 (or controller 101) may also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor (‘DSP’), a network processor, or the like. The processing device 104 may be connected to the RAM 111 via a data communications link 106, which may be embodied as a high speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus. Stored in RAM 111 is an operating system 112. In some implementations, instructions 113 are stored in RAM 111. Instructions 113 may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage array controller 101 includes one or more host bus adapters 103A-C that are coupled to the processing device 104 via a data communications link 105A-C. In implementations, host bus adapters 103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other networks and storage arrays. In some examples, host bus adapters 103A-C may be a Fibre Channel adapter that enables the storage array controller 101 to connect to a SAN, an Ethernet adapter that enables the storage array controller 101 to connect to a LAN, or the like. Host bus adapters 103A-C may be coupled to the processing device 104 via a data communications link 105A-C such as, for example, a PCIe bus.
In implementations, storage array controller 101 may include a host bus adapter 114 that is coupled to an expander 115. The expander 115 may be used to attach a host system to a larger number of storage drives. The expander 115 may, for example, be a SAS expander utilized to enable the host bus adapter 114 to attach to storage drives in an implementation where the host bus adapter 114 is embodied as a SAS controller. In implementations, storage array controller 101 may include a switch 116 coupled to the processing device 104 via a data communications link 109. The switch 116 may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch 116 may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link 109) and presents multiple PCIe connection points to the midplane. In implementations, storage array controller 101 includes a data communications link 107 for coupling the storage array controller 101 to other storage array controllers. In some examples, data communications link 107 may be a QuickPath Interconnect (‘QPI’).

A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes.
For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data, where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data. Thus, the process may be performed only by the higher level operating system of the flash storage system, without an additional lower level process being performed by controllers of the flash drives. Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system, as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system.
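The allocation-unit reclamation just described can be sketched as a single operating-system-level routine. This is a hedged illustration, not the patented process: the data structures and the "compact into one new unit" policy are assumptions chosen to keep the example short, but the steps match the text, i.e. copy the retained first data to new locations, erase the second data, and mark the old units available.

```python
# Hypothetical sketch: the OS of a direct-mapped flash storage system
# reclaims allocation units itself. Live pages are rewritten into a fresh
# allocation unit; the old units are erased and marked available. No
# drive-level controller process is involved.

def reclaim(allocation_units, live):
    """Reclaim allocation units at the operating-system level.

    allocation_units: dict of unit_id -> list of pages (illustrative layout)
    live: set of pages the storage system still references
    Returns (new_units, freed_unit_ids).
    """
    # Step 1: copy still-referenced ("first") data to new locations.
    survivors = [p for pages in allocation_units.values()
                 for p in pages if p in live]
    new_units = {"new0": survivors}
    # Steps 2-3: erase the no-longer-used ("second") data and mark the old
    # allocation units as available for subsequent data.
    freed = sorted(allocation_units)
    return new_units, freed

units = {"u0": ["a", "b"], "u1": ["c", "d"]}
new_units, freed = reclaim(units, live={"a", "d"})
assert new_units == {"new0": ["a", "d"]}
assert freed == ["u0", "u1"]
```

Because the routine runs once, at the operating-system level, no drive-internal garbage collector repeats the copy, which is the source of the reliability advantage the text claims.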
In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive. A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers, each with some number of drives or some amount of Flash storage, where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service, including storage allocation and garbage collection.

FIG. 1C illustrates a third example system 117 for data storage in accordance with some implementations. System 117 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 117 may include the same, more, or fewer elements configured in the same or different manner in other implementations. In one embodiment, system 117 includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device 118 with separately addressable fast write storage. System 117 may include a storage device controller 119. In one embodiment, storage device controller 119A-D may be a CPU, an ASIC, an FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system 117 includes flash memory devices (e.g., including flash memory devices 120a-n), operatively coupled to various channels of the storage device controller 119. Flash memory devices 120a-n may be presented to the controller 119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller 119A-D to program and retrieve various aspects of the Flash.
In one embodiment, storage device controller 119A-D may perform operations on flash memory devices 120a-n, including storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system 117 may include RAM 121 to store separately addressable fast-write data. In one embodiment, RAM 121 may be one or more separate discrete devices. In another embodiment, RAM 121 may be integrated into storage device controller 119A-D or multiple storage device controllers. The RAM 121 may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller 119. In one embodiment, system 117 may include a stored energy device 122, such as a rechargeable battery or a capacitor. Stored energy device 122 may store energy sufficient to power the storage device controller 119, some amount of the RAM (e.g., RAM 121), and some amount of Flash memory (e.g., Flash memory 120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller 119A-D may write the contents of RAM to Flash memory if the storage device controller detects loss of external power. In one embodiment, system 117 includes two data communications links 123a, 123b. In one embodiment, data communications links 123a, 123b may be PCI interfaces. In another embodiment, data communications links 123a, 123b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links 123a, 123b may be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller 119A-D from other components in the storage system 117.
It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System 117 may also include an external power source (not shown), which may be provided over one or both data communications links 123a, 123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM 121. The storage device controller 119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device 118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM 121. On power failure, the storage device controller 119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory 120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices 120a-n, where that presentation allows a storage system including a storage device 118 (e.g., storage system 117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory, including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc.
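The power-failure path described above, flushing the fast-write RAM to Flash while the stored energy device holds out, can be sketched as follows. The one-energy-unit-per-entry budget is a deliberate toy assumption; real devices would budget in terms of time and power for in-flight Flash programs.

```python
# Hypothetical sketch: on loss of external power, the device controller
# writes the fast-write RAM contents to Flash for long-term persistence,
# limited by the stored energy device. The energy accounting (one unit per
# entry) is an illustrative simplification.

def on_power_loss(ram, flash, energy_units):
    """Flush RAM entries to flash while stored energy lasts; return count."""
    flushed = 0
    while ram and energy_units > 0:
        addr, data = ram.pop(0)   # oldest buffered write first
        flash[addr] = data
        energy_units -= 1
        flushed += 1
    return flushed

ram = [(1, b"x"), (2, b"y")]
flash = {}
assert on_power_loss(ram, flash, energy_units=8) == 2
assert flash == {1: b"x", 2: b"y"} and ram == []
```

If the energy budget is smaller than the buffer, only part of the RAM is persisted, which is why (as discussed below) the usable fast-write capacity may be derated as the stored energy device ages.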
In one embodiment, the stored energy device 122 may be sufficient to ensure completion of in-progress operations to the Flash memory devices 120a-120n. The stored energy device 122 may power storage device controller 119A-D and associated Flash memory devices (e.g., 120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device 122 may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices 120a-n and/or the storage device controller 119. Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device 122 to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy.

FIG. 1D illustrates a fourth example system 124 for data storage in accordance with some implementations. In one embodiment, system 124 includes storage controllers 125a, 125b. In one embodiment, storage controllers 125a, 125b are operatively coupled to Dual PCI storage devices 119a, 119b and 119c, 119d, respectively. Storage controllers 125a, 125b may be operatively coupled (e.g., via a storage network 130) to some number of host computers 127a-n. In one embodiment, two storage controllers (e.g., 125a and 125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers 125a, 125b may provide services through some number of network interfaces (e.g., 126a-d) to host computers 127a-n outside of the storage system 124.
Storage controllers 125a, 125b may provide integrated services or an application entirely within the storage system 124, forming a converged storage and compute system. The storage controllers 125a, 125b may utilize the fast write memory within or across storage devices 119a-d to journal in-progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system 124. In one embodiment, controllers 125a, 125b operate as PCI masters of one or the other of the PCI buses 128a, 128b. In another embodiment, 128a and 128b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers 125a, 125b as multi-masters for both PCI buses 128a, 128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller 119a may be operable under direction from a storage controller 125a to synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM 121 of FIG. 1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g., 128a, 128b) from the storage controllers 125a, 125b.
In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc. In one embodiment, under direction from a storage controller 125a, 125b, a storage device controller 119a, 119b may be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM 121 of FIG. 1C) without involvement of the storage controllers 125a, 125b. This operation may be used to mirror data stored in one controller 125a to another controller 125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interfaces 129a, 129b to the PCI buses 128a, 128b. A storage device controller 119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device 118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly. In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself.
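The reservation/exclusion primitive mentioned above can be sketched with a minimal device-side model. This is a hypothetical illustration (class and method names are invented, and real systems use protocol-level mechanisms such as SCSI persistent reservations): a controller that holds the reservation fences out its peer until the reservation is released.

```python
# Hypothetical sketch of a device-side reservation primitive: one storage
# controller reserves the device, and the device refuses access from the
# other controller until the reservation is released.

class StorageDevice:
    def __init__(self):
        self.holder = None           # controller currently holding the device

    def reserve(self, controller_id):
        """Take (or re-take) the reservation; fail if a peer holds it."""
        if self.holder in (None, controller_id):
            self.holder = controller_id
            return True
        return False

    def release(self, controller_id):
        if self.holder == controller_id:
            self.holder = None

    def access(self, controller_id):
        """I/O path check: reject peers while a reservation is held."""
        if self.holder not in (None, controller_id):
            raise PermissionError("device reserved by peer controller")
        return "ok"

dev = StorageDevice()
assert dev.reserve("125a")
assert dev.access("125a") == "ok"
assert not dev.reserve("125b")       # peer is fenced out
try:
    dev.access("125b")
except PermissionError:
    pass
dev.release("125a")
assert dev.reserve("125b")           # reservation can move after release
```

This is exactly the failure-handling use the text describes: if controller 125a believes 125b is misbehaving, it reserves the device so that 125b can no longer write to it.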
Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers125a,125bmay initiate the use of erase blocks within and across storage devices (e.g.,118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers125a,125bmay initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system124may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination. The embodiments depicted with reference toFIGS.2A-Gillustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. 
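The mirroring and erasure coding described above can be illustrated with a minimal single-parity sketch (the function names and the use of byte-wise XOR are illustrative assumptions; the embodiments may use stronger erasure codes at multiple levels):

```python
def xor_parity(shards):
    # Parity shard = byte-wise XOR of all equally sized data shards.
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, byte in enumerate(shard):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving_shards, parity):
    # Any single missing shard is the XOR of the parity and the survivors.
    missing = bytearray(parity)
    for shard in surviving_shards:
        for i, byte in enumerate(shard):
            missing[i] ^= byte
    return bytes(missing)

data = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(data)
# Lose the middle shard and rebuild it from the parity and the survivors.
assert recover([data[0], data[2]], parity) == b"efgh"
```

A single XOR parity shard tolerates one failure per stripe; tolerating multiple simultaneous device failures, as discussed above, requires additional parity shards computed with a stronger code.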
The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments.
In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus, however, other technologies such as PCIe, InfiniBand, and others, are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node. In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments. Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. 
One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2-32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage.
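The unreachability determination described above, made independently of read traffic, can be sketched with a simple heartbeat check (the names, timeout value, and timestamps are illustrative assumptions):

```python
def find_unreachable(last_heartbeat, now, timeout=5.0):
    # A node or storage unit is considered unreachable once its last
    # heartbeat is older than the timeout, regardless of whether any
    # read attempt currently involves it.
    return [unit for unit, t in last_heartbeat.items() if now - t > timeout]

heartbeats = {"node-a": 100.0, "node-b": 96.0, "storage-152-c": 99.5}
# At time 102.0, only node-b has been silent longer than the timeout.
assert find_unreachable(heartbeats, now=102.0) == ["node-b"]
```

A unit flagged in this way would trigger the cooperative recovery and rebuild described next, without waiting for a client read to discover the failure.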
The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby. The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack. Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. 
In the embodiment depicted herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of storage node150for illustrative purposes. This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes.
For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units152or storage nodes150within the chassis. FIG.2Bis a block diagram showing a communications interconnect173and power distribution bus172coupling multiple storage nodes150. Referring back toFIG.2A, the communications interconnect173can be included in or implemented with the switch fabric146in some embodiments. Where multiple storage clusters161occupy a rack, the communications interconnect173can be included in or implemented with a top of rack switch, in some embodiments. As illustrated inFIG.2B, storage cluster161is enclosed within a single chassis138. External port176is coupled to storage nodes150through communications interconnect173, while external port174is coupled directly to a storage node. External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storages152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. 
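The stripe-width self-configuration described earlier, i.e., striping as wide as possible while still surviving the loss of up to one or two units, can be sketched as follows (the function name and geometry convention are illustrative assumptions):

```python
def stripe_geometry(num_units, failures_tolerated=2):
    # Widest possible stripe: one shard per storage unit, with
    # `failures_tolerated` of the shards carrying redundancy.
    data_shards = num_units - failures_tolerated
    if data_shards < 1:
        raise ValueError("not enough units for the requested tolerance")
    return data_shards, failures_tolerated

# A chassis with ten units, surviving loss of up to two of them:
assert stripe_geometry(10) == (8, 2)
assert stripe_geometry(4, failures_tolerated=1) == (3, 1)
```

Widening the stripe raises storage efficiency (the data-to-redundancy ratio) at a fixed failure tolerance, which is why the system favors the widest stripe the reachable unit set permits.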
In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storages152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storages152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. 
Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier. 
This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data, and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined, the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment.
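The two-stage lookup described above, first hashing an entity identifier to an authority identifier and then mapping the authority to a storage unit, can be sketched as follows (the use of SHA-256, the authority count, and the unit names are illustrative assumptions, not part of the embodiments):

```python
import hashlib

NUM_AUTHORITIES = 16  # fixed authority set, power of two for bit masking

def authority_for(entity_id):
    # Stage 1: hash the entity identifier, then apply a bit mask.
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") & (NUM_AUTHORITIES - 1)

def storage_unit_for(authority_id, authority_map):
    # Stage 2: explicit mapping from authority identifier to a unit.
    return authority_map[authority_id]

# Every node holding the same explicit map arrives at the same unit.
authority_map = {a: f"nvss-{a % 4}" for a in range(NUM_AUTHORITIES)}
a = authority_for("inode:1234")
assert a == authority_for("inode:1234")          # repeatable
assert storage_unit_for(a, authority_map).startswith("nvss-")
```

Because stage 1 is a pure function of the identifier and stage 2 is a shared explicit map, any node can repeat the calculation and reliably reach the same storage unit holding the authority.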
The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above. The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152. In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. 
For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations, i.e., the data segment number is in this address space. Segments may also contain meta-data, which enable data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(SeeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments. Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored.
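The first step of breaking a segment into data shards prior to parity computation and striping, as described above, can be sketched as follows (the function name and zero-padding convention are illustrative assumptions):

```python
def shard_segment(segment, num_data_shards):
    # Split a segment into equally sized data shards (zero-padded),
    # ready for parity computation and striping across storage units.
    size = -(-len(segment) // num_data_shards)  # ceiling division
    padded = segment.ljust(size * num_data_shards, b"\x00")
    return [padded[i * size:(i + 1) * size] for i in range(num_data_shards)]

shards = shard_segment(b"hello world!", 4)
assert shards == [b"hel", b"lo ", b"wor", b"ld!"]
```

Parity shards computed over these data shards would then be striped, together with the data shards, across distinct non-volatile solid state storage units so no single unit holds more than one shard of the stripe.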
Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit152may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata is stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout. 
In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to conclude the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines. 
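The deterministic-placement property described above, where every storage node runs the same calculation over the reachable set and arrives at the same authority owners, can be sketched with rendezvous (highest-random-weight) hashing, a scheme related to the RUSH/CRUSH family (the hash choice, node names, and copy count are illustrative assumptions; CRUSH itself is considerably more elaborate):

```python
import hashlib

def candidate_owners(authority_id, reachable_nodes, copies=2):
    # Rendezvous hashing: rank the reachable set by a per-(authority,
    # node) hash, so every node that sees the same reachable set
    # computes the same ordered list of candidate authority owners.
    def weight(node):
        return hashlib.sha256(f"{authority_id}:{node}".encode()).digest()
    return sorted(reachable_nodes, key=weight, reverse=True)[:copies]

nodes = ["node-a", "node-b", "node-c", "node-d"]
owners = candidate_owners(7, nodes)
assert owners == candidate_owners(7, nodes)      # deterministic
assert len(owners) == 2 and set(owners) <= set(nodes)
```

As the text notes, the reachable node set is an input: removing a node only reassigns the authorities whose highest-weight candidates included that node, which keeps rebalancing movement small.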
Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or Fibre Channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data.
When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments. As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted.
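The type-dependent routing of persistent messages to storage tiers described above can be sketched as follows (the message type names and tier labels are illustrative assumptions):

```python
def persistence_path(message_type):
    # Route persistent messages to storage tiers by type:
    # latency-sensitive client requests go to replicated NVRAM first
    # and are destaged to NAND later; background rebalancing messages
    # are persisted directly to NAND.
    paths = {
        "client_request": ["replicated_nvram", "nand"],
        "background_rebalance": ["nand"],
    }
    if message_type not in paths:
        raise ValueError(f"unknown message type: {message_type}")
    return paths[message_type]

assert persistence_path("client_request") == ["replicated_nvram", "nand"]
assert persistence_path("background_rebalance") == ["nand"]
```

Routing latency-sensitive messages through fast replicated NVRAM lets the system acknowledge them quickly while still persisting every message before it is transmitted.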
This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturers, the hardware supply chain and ongoing monitoring and quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments.
Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218. Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure. In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device that supplies sufficient energy to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multi-chip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., an FPGA. In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to flash memory illustrated within flash die222.
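The energy-reserve behavior described above can be sketched in a few lines. This is a hypothetical model, not an embodiment; the class and method names are invented for the illustration:

```python
class NVRAMUnit:
    """Sketch: DRAM backed by an energy reserve, destaged to flash on power loss."""
    def __init__(self):
        self.dram = {}    # fast volatile contents
        self.flash = {}   # stable storage medium

    def write(self, key, value):
        self.dram[key] = value

    def on_power_loss(self):
        # The energy reserve keeps DRAM alive just long enough for its
        # contents to be transferred to flash.
        self.flash.update(self.dram)
        self.dram.clear()

    def on_power_restore(self):
        # On the next power-on, contents are recovered from flash.
        self.dram.update(self.flash)
```

The sketch shows only the flush/recover sequence; real hardware sizes the energy reserve to the worst-case transfer time of the full DRAM contents.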
Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units152described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster161, as described herein, multiple controllers in multiple storage units152and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on). FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage units152ofFIGS.2A-C. 
In this version, each storage unit152has a processor such as controller212(seeFIG.2C), an FPGA, flash memory206, and NVRAM204(which is super-capacitor backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The storage unit152may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units152may fail and the device will continue with no data loss. The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the storage unit152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions written as spools (e.g., spool regions). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of spool contents include distributed transactions or notations. When the primary power to a storage unit152fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade. Each authority168effectively serves as an independent controller.
Each authority168provides its own data and metadata structures, its own background workers, and maintains its own lifecycle. FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252. The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array). In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has allocated or has been allocated one or more partitions260of storage memory in the storage units152, e.g. partitions260in flash memory206and NVRAM204. Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168. FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. 
In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking. Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments. With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either. 
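The triplicate NVRAM staging described above can be sketched as follows. This is a minimal, hypothetical illustration; the record layout and function names are invented for this sketch:

```python
class Update:
    """Illustrative record of one data or metadata update."""
    def __init__(self, data):
        self.data = data
        self.nvram_copies = set()  # blade ids holding an NVRAM copy
        self.on_flash = False

def stage_update(update, blade_ids):
    # Every update is written in triplicate to NVRAM partitions on
    # three separate blades before it is considered durable.
    for blade_id in blade_ids[:3]:
        update.nvram_copies.add(blade_id)
    return update

def commit_to_flash(update):
    # Once the update has been written to flash (parity/RAID protected),
    # the three NVRAM copies can be released.
    update.on_flash = True
    update.nvram_copies.clear()
    return update
```

With three NVRAM copies on distinct blades plus parity-protected flash, the loss of any two blades leaves at least one copy of every staged update, which is the two-blade failure tolerance the text claims.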
Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running in some embodiments. Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location. When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm. From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations. FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. 
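The rebalancing that occurs when a new blade is installed, as described above, can be sketched as a simple greedy migration. This is a hypothetical illustration; a real system would also weigh partition sizes and endpoint placement:

```python
def rebalance(blade_authorities, new_blade):
    """Sketch: even out authority counts after a new blade is installed.

    blade_authorities maps blade id -> list of authority ids.
    """
    blade_authorities[new_blade] = []
    new = blade_authorities[new_blade]
    for blade, auths in list(blade_authorities.items()):
        if blade == new_blade:
            continue
        # Donate while this blade still holds at least two more
        # authorities than the new blade.
        while len(auths) - 1 > len(new):
            # A migrated authority keeps its identifier, so it continues
            # to manage the same NVRAM/flash partitions from its new home.
            new.append(auths.pop())
    return blade_authorities
```

Because partitions are bound to authority identifiers rather than blades, the migration itself moves no stored data; only ownership and request routing change.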
In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions. The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules.
Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords.
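An ACL check of the kind described above can be sketched in a few lines. This is a hypothetical illustration of the concept, not any particular implementation; the entry layout and names are invented for the sketch:

```python
def acl_allows(acl, user, operation):
    """Sketch: an ACL is a list of (principal, permitted operations)
    entries attached to an object; access is granted only if some
    entry matches both the user and the requested operation."""
    for principal, ops in acl:
        if principal == user and operation in ops:
            return True
    return False

# Illustrative ACL attached to one object.
object_acl = [("alice", {"read", "write"}), ("bob", {"read"})]
```

Real ACL systems add deny entries, group principals, and evaluation-order rules, but the grant decision reduces to a match of principal and operation as shown.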
FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2G. In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments. In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304. The data communications link304may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or LAN, or as some other mechanism capable of transporting digital information between the storage system306and the cloud services provider302. The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides a vast array of services to users of the cloud services provider302through the sharing of computing resources via the data communications link304.
The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models. In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. Although not explicitly depicted inFIG.3A, readers will appreciate that a vast amount of additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway. In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model. The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. 
Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others. For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2Gas the storage system may include many of the components described above. The storage system306depicted inFIG.3Bmay include a vast amount of storage resources308, which may be embodied in many forms. For example, the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate, 3D crosspoint non-volatile memory, flash memory including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, or others. Likewise, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM. The example storage resources308may alternatively include non-volatile phase-change memory (‘PCM’), quantum memory that allows for the storage and retrieval of photonic quantum information, resistive random-access memory (‘ReRAM’), storage class memory (‘SCM’), or other form of storage resources, including any combination of resources described herein. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others. 
The storage resources308depicted inFIG.3Bmay be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources308depicted inFIG.3Bmay include various forms of SCM. SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe that can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols. In fact, the network protocols used for SSDs in all-flash arrays can include NVMe over Ethernet (RoCE, NVMe/TCP), NVMe over Fibre Channel (NVMe/FC), NVMe over InfiniBand, iWARP, and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable and fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others. The example storage system306depicted inFIG.3Bmay implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects.
Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. The example storage system306depicted inFIG.3Bmay be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on. The storage system306depicted inFIG.3Balso includes communications resources310that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306, including embodiments where those resources are separated by a relatively vast expanse. The communications resources310may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system. 
For example, the communications resources310can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC networks, FC over ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks, InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters, NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed, and others. In fact, the storage systems described above may, directly or indirectly, make use of neutrino communication technologies and devices through which information (including binary information) is transmitted using a beam of neutrinos. The communications resources310can also include mechanisms for accessing storage resources308within the storage system306utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources308within the storage system306to host bus adapters within the storage system306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources308within the storage system306, and other communications resources that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306. The storage system306depicted inFIG.3Balso includes processing resources312that may be useful in executing computer program instructions and performing other computational tasks within the storage system306. The processing resources312may include one or more ASICs that are customized for some particular purpose as well as one or more CPUs.
The processing resources312may also include one or more DSPs, one or more FPGAs, one or more systems on a chip (‘SoCs’), or other form of processing resources312. The storage system306may utilize the processing resources312to perform a variety of tasks including, but not limited to, supporting the execution of software resources314that will be described in greater detail below. The storage system306depicted inFIG.3Balso includes software resources314that, when executed by processing resources312within the storage system306, may perform a vast array of tasks. The software resources314may include, for example, one or more modules of computer program instructions that when executed by processing resources312within the storage system306are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems. Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways.
Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques. The software resources314may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources314may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware. Such software resources314may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware. The software resources314may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources308in the storage system306. For example, the software resources314may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication, and others.
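As one illustration of the data deduplication just mentioned, a content-addressed block store can be sketched as follows. This is a hypothetical, minimal sketch; the class and field names are invented here, and real systems add collision handling, compression, and garbage collection of unreferenced blocks:

```python
import hashlib

class DedupStore:
    """Sketch: content-addressed block store; duplicate blocks stored once."""
    def __init__(self):
        self.blocks = {}    # fingerprint -> block data
        self.refcount = {}  # fingerprint -> number of logical references

    def write(self, block: bytes) -> str:
        # The fingerprint of the block's content serves as its address,
        # so identical blocks map to the same stored copy.
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = block
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp

    def read(self, fp: str) -> bytes:
        return self.blocks[fp]
```

Storing the fingerprint in metadata instead of a second copy of the block is what yields the space reduction.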
The software resources314may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resource308, software modules that perform data migration operations to migrate data from within a storage system, as well as software modules that perform other functions. Such software resources314may be embodied as one or more software containers or in many other ways. For further explanation,FIG.3Csets forth an example of a cloud-based storage system318in accordance with some embodiments of the present disclosure. In the example depicted inFIG.3C, the cloud-based storage system318is created entirely in a cloud computing environment316such as, for example, Amazon Web Services (‘AWS’), Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle Cloud, and others. The cloud-based storage system318may be used to provide services similar to the services that may be provided by the storage systems described above. For example, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318, the cloud-based storage system318may be used to provide storage services to users of the cloud-based storage system318through the use of solid-state storage, and so on. The cloud-based storage system318depicted inFIG.3Cincludes two cloud computing instances320,322that each are used to support the execution of a storage controller application324,326. The cloud computing instances320,322may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment316to support the execution of software applications such as the storage controller application324,326. In one embodiment, the cloud computing instances320,322may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances.
In such an example, an Amazon Machine Image (‘AMI’) that includes the storage controller application324,326may be booted to create and configure a virtual machine that may execute the storage controller application324,326. In the example method depicted inFIG.3C, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers110A,110B inFIG.1Adescribed above such as writing data received from the users of the cloud-based storage system318to the cloud-based storage system318, erasing data from the cloud-based storage system318, retrieving data from the cloud-based storage system318and providing such data to users of the cloud-based storage system318, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as RAID or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth. Readers will appreciate that because there are two cloud computing instances320,322that each include the storage controller application324,326, in some embodiments one cloud computing instance320may operate as the primary controller as described above while the other cloud computing instance322may operate as the secondary controller as described above. Readers will appreciate that the storage controller application324,326depicted inFIG.3Cmay include identical source code that is executed within different cloud computing instances320,322. Consider an example in which the cloud computing environment316is embodied as AWS and the cloud computing instances are embodied as EC2 instances. 
In such an example, the cloud computing instance320that operates as the primary controller may be deployed on one of the instance types that has a relatively large amount of memory and processing power while the cloud computing instance322that operates as the secondary controller may be deployed on one of the instance types that has a relatively small amount of memory and processing power. In such an example, upon the occurrence of a failover event where the roles of primary and secondary are switched, a double failover may actually be carried out such that: 1) a first failover event where the cloud computing instance322that formerly operated as the secondary controller begins to operate as the primary controller, and 2) a second failover event where a third cloud computing instance (not shown) that is of an instance type that has a relatively large amount of memory and processing power is spun up with a copy of the storage controller application, where the third cloud computing instance begins operating as the primary controller while the cloud computing instance322that originally operated as the secondary controller begins operating as the secondary controller again. In such an example, the cloud computing instance320that formerly operated as the primary controller may be terminated. Readers will appreciate that in alternative embodiments, the cloud computing instance320that is operating as the secondary controller after the failover event may continue to operate as the secondary controller and the cloud computing instance322that operated as the primary controller after the occurrence of the failover event may be terminated once the primary role has been assumed by the third cloud computing instance (not shown). 
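The double-failover sequence described above can be outlined in code. This is a hedged sketch rather than a real cloud SDK: the Instance class, the instance names, and the spawn_instance callback are all invented for illustration.

```python
# Illustrative sketch of the double-failover sequence: a small secondary
# is promoted, a fresh large instance takes over as primary, and the
# small instance returns to the secondary role. All names are assumptions.

class Instance:
    def __init__(self, name, size, role):
        self.name = name
        self.size = size    # 'large' or 'small'
        self.role = role    # 'primary', 'secondary', or 'terminated'

def double_failover(primary, secondary, spawn_instance):
    """Carry out the two failover events described in the text."""
    # 1) First failover: the former secondary takes over as primary
    #    and the failed large primary is terminated.
    primary.role = 'terminated'
    secondary.role = 'primary'
    # 2) Second failover: a new large instance assumes the primary role,
    #    and the small instance goes back to being the secondary.
    replacement = spawn_instance(size='large', role='primary')
    secondary.role = 'secondary'
    return replacement
```

A usage sketch would spin up `ctrl-a` (large, primary) and `ctrl-b` (small, secondary), then call `double_failover` with a factory that creates `ctrl-c`.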
Readers will appreciate that while the embodiments described above relate to embodiments where one cloud computing instance320operates as the primary controller and the second cloud computing instance322operates as the secondary controller, other embodiments are within the scope of the present disclosure. For example, each cloud computing instance320,322may operate as a primary controller for some portion of the address space supported by the cloud-based storage system318, each cloud computing instance320,322may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system318is divided in some other way, and so on. In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. The cloud-based storage system318depicted inFIG.3Cincludes cloud computing instances340a,340b,340nwith local storage330,334,338. The cloud computing instances340a,340b,340ndepicted inFIG.3Cmay be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment316to support the execution of software applications. The cloud computing instances340a,340b,340nofFIG.3Cmay differ from the cloud computing instances320,322described above as the cloud computing instances340a,340b,340nofFIG.3Chave local storage330,334,338resources whereas the cloud computing instances320,322that support the execution of the storage controller application324,326need not have local storage resources. The cloud computing instances340a,340b,340nwith local storage330,334,338may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. 
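The variant mentioned above, in which each controller operates as primary for a portion of the address space, amounts to a routing rule in front of the two controllers. The following minimal sketch assumes a single split address and invented controller names; a real system would divide the address space in whatever way the deployment calls for.

```python
# Hypothetical routing rule: addresses below the split go to one
# controller, the rest to the other. Split point and names are assumed.

def route_io(block_address, split_address, controllers=('ctrl-a', 'ctrl-b')):
    """Return which controller acts as primary for a given block address."""
    return controllers[0] if block_address < split_address else controllers[1]
```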
In some embodiments, the local storage330,334,338must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338can include a software daemon328,332,336that, when executed by a cloud computing instance340a,340b,340n, can present itself to the storage controller applications324,326as if the cloud computing instance340a,340b,340nwere a physical storage device (e.g., one or more SSDs). In such an example, the software daemon328,332,336may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications324,326can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications324,326may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above. In these and similar embodiments, communications between the storage controller applications324,326and the cloud computing instances340a,340b,340nwith local storage330,334,338may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338may also be coupled to block-storage342,344,346that is offered by the cloud computing environment316. The block-storage342,344,346that is offered by the cloud computing environment316may be embodied, for example, as Amazon Elastic Block Store (‘EBS’) volumes. For example, a first EBS volume may be coupled to a first cloud computing instance340a, a second EBS volume may be coupled to a second cloud computing instance340b, and a third EBS volume may be coupled to a third cloud computing instance340n. 
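The software daemon idea above, in which an instance presents itself to the controllers as if it were a physical drive, can be sketched as a small command handler. The command vocabulary here is an invented simplification; as noted above, a real deployment would speak iSCSI, NVMe over TCP, or a custom protocol.

```python
# Minimal drive-emulator sketch: accept the kind of READ/WRITE commands a
# controller would send to an SSD and service them from local storage.
# The dict standing in for local storage is an assumption for illustration.

class DriveDaemon:
    def __init__(self):
        self.local_storage = {}   # block address -> bytes

    def handle(self, command, address, payload=None):
        """Dispatch a single storage command, as a physical drive would."""
        if command == 'WRITE':
            self.local_storage[address] = payload
            return 'OK'
        if command == 'READ':
            return self.local_storage.get(address)
        return 'UNSUPPORTED'
```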
In such an example, the block-storage342,344,346that is offered by the cloud computing environment316may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon328,332,336(or some other module) that is executing within a particular cloud computing instance340a,340b,340nmay, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage330,334,338resources. In some alternative embodiments, data may only be written to the local storage330,334,338resources within a particular cloud computing instance340a,340b,340n. In an alternative embodiment, rather than using the block-storage342,344,346that is offered by the cloud computing environment316as NVRAM, actual RAM on each of the cloud computing instances340a,340b,340nwith local storage330,334,338may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM. In the example depicted inFIG.3C, the cloud computing instances340a,340b,340nwith local storage330,334,338may be utilized, by cloud computing instances320,322that support the execution of the storage controller application324,326to service I/O operations that are directed to the cloud-based storage system318. Consider an example in which a first cloud computing instance320that is executing the storage controller application324is operating as the primary controller. In such an example, the first cloud computing instance320that is executing the storage controller application324may receive (directly or indirectly via the secondary controller) requests to write data to the cloud-based storage system318from users of the cloud-based storage system318. 
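The NVRAM-like write path described above, in which the daemon writes incoming data both to its attached EBS volume and to its local storage resources, reduces to a dual write. In this sketch the two dict-backed "devices" are stand-ins for the EBS volume (or plain RAM, in the cost-reduced variant) and the instance's SSDs, not a cloud API.

```python
# Hedged sketch of the dual write path: stage the data durably in an
# NVRAM-like area and also write it to local storage. Both backing
# stores here are in-memory stand-ins, an assumption for illustration.

class WritePath:
    def __init__(self):
        self.nvram = {}           # stands in for the attached EBS volume
        self.local_storage = {}   # stands in for the instance's local SSDs

    def write(self, address, data):
        self.nvram[address] = data          # NVRAM-style staging write
        self.local_storage[address] = data  # write to local resources
        return True
```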
In such an example, the first cloud computing instance320that is executing the storage controller application324may perform various tasks such as, for example, deduplicating the data contained in the request, compressing the data contained in the request, determining where to write the data contained in the request, and so on, before ultimately sending a request to write a deduplicated, encrypted, or otherwise possibly updated version of the data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Either cloud computing instance320,322, in some embodiments, may receive a request to read data from the cloud-based storage system318and may ultimately send a request to read data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Readers will appreciate that when a request to write data is received by a particular cloud computing instance340a,340b,340nwith local storage330,334,338, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to not only write the data to its own local storage330,334,338resources and any appropriate block-storage342,344,346that are offered by the cloud computing environment316, but the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay also be configured to write the data to cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. The cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nmay be embodied, for example, as Amazon Simple Storage Service (‘S3’) storage that is accessible by the particular cloud computing instance340a,340b,340n. 
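The controller-side tasks above can be sketched end to end: deduplicate by content hash, compress, choose a target instance, and also persist to object storage. This is a toy illustration under stated assumptions; encryption is omitted, the placement rule is an invented modulo choice, and the dicts standing in for instances and S3 are not a cloud API.

```python
import hashlib
import zlib

# Hedged sketch of the write handling described above: dedup, compress,
# then fan the reduced data out to an instance with local storage and to
# object storage. All backing stores are in-memory stand-ins.

class Controller:
    def __init__(self, n_instances=3):
        self.seen = {}                           # content hash -> location
        self.instances = [dict() for _ in range(n_instances)]
        self.object_storage = {}                 # stands in for S3

    def write(self, address, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.seen:                  # duplicate content: record a reference
            return ('dedup', self.seen[digest])
        compressed = zlib.compress(data)
        target = hash(address) % len(self.instances)  # invented placement rule
        self.instances[target][address] = compressed  # write to local-storage instance
        self.object_storage[address] = compressed     # also persist to object storage
        self.seen[digest] = (target, address)
        return ('stored', (target, address))
```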
In other embodiments, the cloud computing instances320,322that each include the storage controller application324,326may initiate the storage of the data in the local storage330,334,338of the cloud computing instances340a,340b,340nand the cloud-based object storage348. In some embodiments, all data that is stored by the cloud-based storage system318may be stored in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such embodiments, the local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay effectively operate as cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances340a,340b,340nwithout requiring the cloud computing instances340a,340b,340nto access the cloud-based object storage348. Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system318may be stored in the cloud-based object storage348, but less than all data that is stored by the cloud-based storage system318may be stored in at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system318should reside in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. Readers will appreciate that the various components described above may be grouped into one or more optimized computing packages as converged infrastructures. 
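The tiering behavior described above, in which all data lands in object storage while a bounded local tier services reads without touching S3 on a hit, can be sketched with a simple cache. The capacity limit and the least-recently-used eviction policy are assumptions chosen for illustration; as the text notes, various policies may be used.

```python
from collections import OrderedDict

# Sketch of the two-tier policy: every write goes to object storage,
# while a size-limited local tier keeps the most recently used data.
# Reads hit the local tier first and fall back to object storage.

class TieredStore:
    def __init__(self, local_capacity=2):
        self.object_storage = {}
        self.local = OrderedDict()
        self.capacity = local_capacity

    def write(self, key, data):
        self.object_storage[key] = data
        self.local[key] = data
        self.local.move_to_end(key)
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)   # evict least recently used

    def read(self, key):
        if key in self.local:                # cache hit: no object-store access
            self.local.move_to_end(key)
            return self.local[key], 'local'
        return self.object_storage[key], 'object'
```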
Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways. Readers will appreciate that the storage systems described above may be useful for supporting various types of software applications. For example, the storage system306may be useful in supporting artificial intelligence (‘AI’) applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications. In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. 
AI applications may be deployed in a variety of fields, including: predictive maintenance in manufacturing and related fields, healthcare applications such as patient data & risk analytics, retail and marketing deployments (e.g., search advertising, social media advertising), supply chain solutions, fintech solutions such as business analytics & reporting tools, operational deployments such as real-time analytics tools, application performance management tools, IT infrastructure management tools, and many others. Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson, Microsoft Oxford, Google DeepMind, Baidu Minwa, and others. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. Reinforcement learning may be employed to find the best possible behavior or path that a particular software application or machine should take in a specific situation. Reinforcement learning differs from other areas of machine learning (e.g., supervised learning, unsupervised learning) in that correct input/output pairs need not be presented for reinforcement learning and sub-optimal actions need not be explicitly corrected. 
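The reinforcement-learning idea above, learning from reward alone with no correct input/output pairs supplied, can be made concrete with a toy example of tabular Q-learning. The two-state, two-action environment is invented purely for illustration and is not drawn from the source.

```python
import random

# Toy Q-learning sketch: an agent in a two-state world earns reward only
# for reaching state 1, and improves its action values from reward alone.
# States, actions, and rewards are assumptions made for the example.

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        state = 0
        for _ in range(10):
            # epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < epsilon:
                action = rng.choice((0, 1))
            else:
                action = max((0, 1), key=lambda a: q[(state, a)])
            next_state = action                  # action directly picks the next state
            reward = 1.0 if next_state == 1 else 0.0
            best_next = max(q[(next_state, a)] for a in (0, 1))
            # standard Q-learning update rule
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

After training, the learned values favor action 1 in both states, since that is the action that earns reward, without any labeled examples having been supplied.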
In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors (‘NNPs’) for use in various aspects of neural network processing. Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable. As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples. Such GPUs may include thousands of cores that are well-suited to run algorithms that loosely represent the parallel nature of the human brain. Data is the heart of modern AI and deep learning algorithms. 
Before training can begin, the labeled data that is crucial for training an accurate AI model must be collected. A full scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high quality data points directly translates to more accurate models and better insights. Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data into a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating, including using a holdback portion of the data that was not used in training in order to evaluate model accuracy on the holdout data. This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data. 
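The holdback step in the lifecycle above, reserving a portion of the data that is never used for training so that model accuracy can be measured on it, can be sketched as a simple split. The 80/20 ratio and the trivial shuffling are assumptions chosen for illustration.

```python
import random

# Sketch of the holdout split from step 5: shuffle the samples, keep a
# fraction aside for evaluation, and never train on that fraction.
# The holdout fraction is an assumed parameter, not from the source.

def split_holdout(samples, holdout_fraction=0.2, seed=0):
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]   # (training set, holdout set)
```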
Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning (‘DDL’) platform to support the execution of DDL algorithms. The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on. The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large amounts of storage system telemetry data points to enable effortless management, analytics and support. In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization. Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real-time, before they impact customer environments, and to capture hundreds of variables related to performance that are used to forecast performance load. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms. Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. 
The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds. Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout and provisioning cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure. Such FPGA-accelerated servers may reside near (e.g., in the same data center) the storage systems described above or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components. Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as an FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the most optimal numerical precision and memory model being used. 
Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it. The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurred in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory. FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model. The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation (‘5G’) networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers. The systems described above may be included in such local, micro data centers and may be part of or paired to multi-access edge computing (‘MEC’) systems. Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. 
For further explanation,FIG.3Dillustrates an exemplary computing device350that may be specifically configured to perform one or more of the processes described herein. As shown inFIG.3D, computing device350may include a communication interface352, a processor354, a storage device356, and an input/output (“I/O”) module358communicatively connected one to another via a communication infrastructure360. While an exemplary computing device350is shown inFIG.3D, the components illustrated inFIG.3Dare not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device350shown inFIG.3Dwill now be described in additional detail. Communication interface352may be configured to communicate with one or more computing devices. Examples of communication interface352include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface. Processor354generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor354may perform operations by executing computer-executable instructions362(e.g., an application, software, code, and/or other executable data instance) stored in storage device356. Storage device356may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device. For example, storage device356may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device356. 
For example, data representative of computer-executable instructions362configured to direct processor354to perform any of the operations described herein may be stored within storage device356. In some examples, data may be arranged in one or more databases residing within storage device356. I/O module358may include one or more I/O modules configured to receive user input and provide user output. I/O module358may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module358may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module358may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module358is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by computing device350. For further explanation,FIG.4sets forth a flow chart illustrating an example method for executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources according to some embodiments of the present disclosure. Although depicted in less detail, the storage system (406) depicted inFIG.4may be similar to the storage systems described above with reference toFIGS.1A-1D,FIGS.2A-2G,FIGS.3A-3B, or any combination thereof. 
In fact, the storage system depicted inFIG.4may include the same, fewer, or additional components as the storage systems described above. The storage system (406) depicted inFIG.4is illustrated as including compute resources in the form of processing resources (416,418,420). The processing resources (416,418,420) may be embodied, for example, as physical resources such as one or more computer processors or as virtualized resources such as a virtual machine, container, or some other virtualized component that can be used to execute a software application. The storage system (406) depicted inFIG.4is also illustrated as including shared storage resources in the form of storage devices (430,432,434). The storage devices (430,432,434) may be embodied, for example, as one or more SSDs, HDDs, or other storage device. The example method depicted inFIG.4includes: receiving (408), from a data producer (402), a dataset (404); storing (410), within the storage system (406), the dataset (404); allocating (412) processing resources (416) to an analytics application (422); and executing (414) the analytics application (422) on the processing resources (416), including ingesting the dataset (404) from the storage system (406), as described in greater detail in the parent application(s). For further explanation,FIG.5sets forth a flow chart illustrating an additional example method for executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources according to some embodiments of the present disclosure. The example method depicted inFIG.5is similar to the example method depicted inFIG.4, as the example method depicted inFIG.5also includes many of the same steps. 
The example method depicted inFIG.5also includes: allocating (502) additional processing resources (418) to a real-time analytics application (506) and executing (504) the real-time analytics application (506) on the additional processing resources, which can include ingesting the dataset (404) prior to storing (410) the dataset (404) within the storage system (406), as described in greater detail in the parent application(s). For further explanation,FIG.6sets forth a flow chart illustrating an additional example method for executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources according to some embodiments of the present disclosure. The example method depicted inFIG.6is similar to the example method depicted inFIG.4, as the example method depicted inFIG.6also includes many of the same steps. In the example method depicted inFIG.6, the dataset (404) includes log files (602) describing one or more execution states of a computing system and executing (414) the analytics application (422) on the processing resources (416) can include evaluating (604) the log files (602) to identify one or more execution patterns associated with the computing system. In the example method depicted inFIG.6, evaluating (604) the log files (602) to identify one or more execution patterns associated with the computing system can include comparing (606) fingerprints associated with known execution patterns to information contained in the log files (602), all of which is described in greater detail in the parent application(s). For further explanation,FIG.7sets forth a flow chart illustrating an additional example method for executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources according to some embodiments of the present disclosure. 
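The step of comparing fingerprints associated with known execution patterns to information contained in the log files can be sketched as substring matching. The fingerprint library and the log lines below are invented examples; a production system would use far richer signatures.

```python
# Illustrative sketch of fingerprint matching against log files: a known
# execution pattern matches when all of its fingerprint substrings appear
# somewhere in the log. Pattern names and needles are assumptions.

def match_fingerprints(log_lines, fingerprints):
    """Return names of known execution patterns found in the log."""
    text = '\n'.join(log_lines)
    return [name for name, needles in fingerprints.items()
            if all(needle in text for needle in needles)]
```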
The example method depicted inFIG.7is similar to the example method depicted inFIG.4, as the example method depicted inFIG.7also includes many of the same steps. In the example method depicted inFIG.7, storing (410) the dataset (404) within the storage system (406) can include organizing (708) the dataset into an indexed directory structure and receiving (408) a dataset (404) from a data producer (402) can include receiving (702) an unstructured dataset. The example method depicted inFIG.7also includes converting (704) the unstructured dataset into a structured dataset, as is described in greater detail in the parent application(s). For further explanation,FIG.8Asets forth a diagram illustrating an example computer architecture for implementing an artificial intelligence and machine learning infrastructure (800) that is configured to fit within a single chassis (not depicted) according to some embodiments of the present disclosure. While in this example, the communication fabric includes a set of network switches (803) for interconnecting a network appliance (800A) with the one or more GPU system(s) (801), and for the artificial intelligence and machine learning infrastructure (800) to communicate with one or more computing devices over one or more networks, in other implementations, the communication fabric may be architected to define different communication paths between the network appliance (800A) and the GPU system(s) (801), and one or more computing devices or host computer systems. In some examples, in addition to, or instead of, inclusion of one or more GPU system(s) (801), the artificial intelligence and machine learning infrastructure (800) may include one or more central processing units (not depicted for clarity) and/or one or more tensor processing units (not depicted for clarity). 
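Converting an unstructured dataset into a structured one and organizing it into an indexed structure, as described above, can be made concrete with a small parser. The line format ("timestamp level message") and the choice to index records by level are assumptions invented only to illustrate the idea.

```python
# Hedged sketch of unstructured-to-structured conversion: parse free-form
# lines into records, then index the records for retrieval. The format
# and the index key are illustrative assumptions, not from the source.

def structure_dataset(raw_lines):
    """Parse lines into records and build an index keyed by level."""
    index = {}
    for line in raw_lines:
        parts = line.split(maxsplit=2)
        if len(parts) < 3:
            continue                      # skip lines that do not parse
        record = {'timestamp': parts[0], 'level': parts[1], 'message': parts[2]}
        index.setdefault(record['level'], []).append(record)
    return index
```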
In this example artificial intelligence and machine learning infrastructure (800), the network appliance (800A) may be a storage system that includes one or more storage devices, and the GPU systems (801) may be, in this example, five (5) NVIDIA DGX-1 GPU systems. In this example, the network appliance (800A) may be connected to two switches (803) using four 100 GbE connections to each switch, where each switch (803) may be connected to each GPU system (801) by two 100 GbE connections—resulting in each of the GPU systems (801) having four (4) 100 GbE connections to the network appliance (800A). For further explanation, FIG. 8B sets forth a flow chart illustrating an additional example method for executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources according to some embodiments of the present disclosure. The example method depicted in FIG. 8B is similar to the example method depicted in FIG. 4, as the example method depicted in FIG. 8B also includes receiving (408) a dataset (404) from a data producer (402), storing (410) the dataset (404) within the storage system (406), allocating (412) processing resources (416) to an analytics application (422), and executing (414) the analytics application (422) on the processing resources (416), including ingesting the dataset (404) from the storage system (406). In the example method depicted in FIG. 8B, receiving (408) a dataset (404) from a data producer (402) can include receiving (806), from a plurality of data producers (402,802), a dataset (404,804) that is unique to each data producer. The data producers (402,802) depicted in FIG. 8B may be embodied, for example, as simulations of multiple storage systems that are executed in order to test hardware and software components within the storage system that is being tested.
For example, the first data producer (402) may be a simulated version of a first storage system and the second data producer (802) may be a simulation of a second storage system. In the example method depicted in FIG. 8B, receiving (806) a dataset (404,804) that is unique to each data producer may be carried out, for example, by receiving each dataset as it is generated by the respective data producer (402,802), by periodically polling a location that each data producer (402,802) writes the dataset to, or in other ways. In fact, although the data producers (402,802) are depicted as residing outside of the storage system (406) in the embodiment depicted in FIG. 8B, in other embodiments, one or more of the data producers (402,802) may actually be executing on the storage system (406) itself and may even write the dataset directly to storage resources within the storage system (406). In the example method depicted in FIG. 8B, storing (410) the dataset (404) within the storage system (406) can include storing (808), within the storage system (406), each unique dataset (404,804). In the example method depicted in FIG. 8B, each unique dataset (404,804) is depicted as being stored within the storage system (406) in multiple slices (424,426,428,816,818,820). For example, a first dataset (404) is stored as a first set of slices (424,426,428) and a second dataset (804) is stored as a second set of slices (816,818,820). In such an example, each slice may represent a distinct portion of the dataset, where RAID or RAID-like techniques are used to provide for data redundancy in the event that one or more of the storage devices becomes unavailable. As such, parity data may also be maintained on the storage system (406), such that the dataset slices (424,426,428,816,818,820) and any parity data form a RAID stripe.
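As a rough sketch of the RAID-style redundancy described above, a parity slice can be computed as the bytewise XOR of equally sized dataset slices, allowing any single lost slice to be rebuilt from the survivors. This illustrates the principle only (single-parity, RAID-4/5 style), not the storage system's actual RAID implementation:

```python
def xor_parity(slices):
    """Compute a parity slice as the bytewise XOR of equally sized
    dataset slices."""
    parity = bytearray(len(slices[0]))
    for s in slices:
        for i, byte in enumerate(s):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_slice(surviving_slices, parity):
    """Rebuild a single lost slice: XORing the surviving slices with
    the parity cancels them out, leaving the missing slice."""
    return xor_parity(list(surviving_slices) + [parity])
```

Double-parity schemes such as RAID 6 tolerate two unavailable devices by maintaining a second, independently computed parity value, but the single-parity case above captures the core idea of a stripe.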
Readers will appreciate that each dataset (404,804) may be stored in other ways and that each dataset (404,804) may be stored (808) within the storage system (406) by the data producer (402,802) itself accessing the storage system (406), by system software and system hardware on the storage system causing each dataset (404,804) (or the slices thereof) to be written to storage devices (430,432,434) in the storage system (406), or in some other way. In the example method depicted in FIG. 8B, allocating (412) processing resources (416) to an analytics application (422) can include allocating (810) unique processing resources (416,418) to each of a plurality of analytics applications (422,814). In the example method depicted in FIG. 8B, allocating (810) unique processing resources (416,418) to each of a plurality of analytics applications (422,814) may be carried out, for example, by allocating physical resources within the storage system (406) for use by the analytics applications (422,814). For example, a first computer processor may be allocated for use by a first analytics application (422) such that the analytics application (422) is executing on the first computer processor and a second computer processor may be allocated for use by a second analytics application (814) such that the analytics application (814) is executing on the second computer processor. Alternatively, allocating (810) unique processing resources (416,418) to each of a plurality of analytics applications (422,814) may be carried out by allocating virtualized physical resources within the storage system (406) for use by each of the analytics applications (422,814).
For example, a first set of virtual machines may be allocated for use by a first analytics application (422) such that the analytics application (422) is executing on the first set of virtual machines and a second set of virtual machines may be allocated for use by a second analytics application (814) such that the analytics application (814) is executing on the second set of virtual machines. Likewise, allocating (810) unique processing resources (416,418) to each of a plurality of analytics applications (422,814) may be carried out through the use of containers, such that a first analytics application (422) is deployed and executed within a first container and a second analytics application (814) is deployed and executed within a second container. In the example method depicted in FIG. 8B, executing (414) the analytics application (422) on the processing resources (416) can include executing (812) the plurality of analytics applications (422,814) on the processing resources (416,418), including ingesting each unique dataset (404,804) from the storage system (406). In such an example, a first analytics application (422) can ingest a first dataset (404) from the storage system (406) by reading the dataset (404) from the storage system (406) after it has been stored within the storage system (406) and a second analytics application (814) can ingest a second dataset (804) from the storage system (406) by reading the dataset (804) from the storage system (406) after it has been stored within the storage system (406). Readers will appreciate that, because the dataset (404) is stored within shared storage, neither analytics application (422,814) will need to retain a copy of the dataset in storage (e.g., direct-attached storage) that is only accessible by the processing resources that are being used to execute the analytics application (422,814).
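One minimal way to picture the one-to-one allocation described above, whether the resources are physical processors, virtual machines, or containers, is a simple assignment table. This is a hypothetical sketch of the bookkeeping, not the storage system's actual scheduler:

```python
def allocate_unique(applications, resources):
    """Give each analytics application its own dedicated processing
    resource, so no two applications share; fail fast if the pool
    is too small."""
    if len(applications) > len(resources):
        raise RuntimeError("not enough processing resources to allocate")
    # zip pairs each application with the next free resource in order.
    return dict(zip(applications, resources))
```

A production scheduler would also track resource health and release allocations on completion, but the invariant it maintains is the same: each application maps to a resource that no other application holds.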
For further explanation, FIG. 9 sets forth a flow chart illustrating an additional example method for executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources according to some embodiments of the present disclosure. The example method depicted in FIG. 9 is similar to the example method depicted in FIG. 4, as the example method depicted in FIG. 9 also includes receiving (408) a dataset (404) from a data producer (402), storing (410) the dataset (404) within the storage system (406), allocating (412) processing resources (416) to an analytics application (422), and executing (414) the analytics application (422) on the processing resources (416), including ingesting the dataset (404) from the storage system (406). The example method depicted in FIG. 9 also includes detecting (902) that the analytics application (422) has ceased executing properly. Detecting (902) that the analytics application (422) has ceased executing properly may be carried out, for example, by detecting that the analytics application (422) has crashed, by detecting that the analytics application (422) has become unresponsive, by detecting that the processing resources that the analytics application (422) is executing on have become unavailable, or in other ways. In such an example, the storage system (406) can detect (902) that the analytics application (422) has ceased executing properly through the use of a heartbeat mechanism, by detecting an absence of messaging or reporting from the analytics application (422), or through the use of a similar mechanism. The example method depicted in FIG. 9 also includes allocating (904) second processing resources (418) to the analytics application (422).
In the example method depicted in FIG. 9, allocating (904) second processing resources (418) to the analytics application (422) may be carried out, for example, by allocating physical resources within the storage system (406) for use by the analytics application (422). For example, one or more computer processors may be allocated for use by the analytics application (422) such that the analytics application (422) is executing on the one or more computer processors. Alternatively, allocating (904) second processing resources (418) to the analytics application (422) may be carried out by allocating virtualized physical resources within the storage system (406) for use by the analytics application (422). For example, one or more virtual machines may be allocated for use by the analytics application (422) such that the analytics application (422) is executing on the one or more virtual machines. Likewise, allocating (904) second processing resources (418) to the analytics application (422) may be carried out through the use of one or more containers, such that the analytics application (422) is deployed and executed within the one or more containers. The example method depicted in FIG. 9 also includes executing (906) the analytics application (422) on the second processing resources (418), including ingesting the dataset (404). In such an example, the analytics application (422) can ingest the dataset (404) from the storage system (406) by reading the dataset (404) from the storage system (406) after it has been stored within the storage system (406). Readers will appreciate that, because the dataset (404) is stored within shared storage, the analytics application (422) does not need to retain a copy of the dataset in storage (e.g., direct-attached storage) that is only accessible by the processing resources that are being used to execute the analytics application (422).
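The heartbeat-based failure detection of FIG. 9 could be sketched as a timeout check. The class name, the timeout value, and the injectable clock are illustrative assumptions; the patent does not prescribe a particular mechanism:

```python
import time

class HeartbeatMonitor:
    """Declare an application failed when no heartbeat has arrived
    within `timeout` seconds; `clock` is injectable for testing."""

    def __init__(self, timeout, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_beat = {}

    def beat(self, app):
        """Record a heartbeat from the application."""
        self.last_beat[app] = self.clock()

    def has_failed(self, app):
        """True if the application never reported or has gone silent
        longer than the timeout."""
        last = self.last_beat.get(app)
        return last is None or self.clock() - last > self.timeout
```

Once `has_failed` fires, the recovery path described above applies: allocate second processing resources, restart the application there, and let it re-ingest the dataset from shared storage rather than from any local copy.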
For further explanation, FIG. 10 sets forth a flow chart illustrating an additional example method for executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources according to some embodiments of the present disclosure. The example method depicted in FIG. 10 is similar to the example method depicted in FIG. 4, as the example method depicted in FIG. 10 also includes receiving (408) a dataset (404) from a data producer (402), storing (410) the dataset (404) within the storage system (406), allocating (412) processing resources (416) to an analytics application (422), and executing (414) the analytics application (422) on the processing resources (416), including ingesting the dataset (404) from the storage system (406). The example method depicted in FIG. 10 also includes detecting (1002) that the analytics application (422) needs additional processing resources. Detecting (1002) that the analytics application (422) needs additional processing resources may be carried out, for example, by detecting that the processing resources upon which the analytics application (422) is executing are fully utilized or that utilization has reached a threshold level, by detecting that the analytics application (422) has become unresponsive, slow to respond to messages, slow to report findings, or is otherwise exhibiting some behavior that is associated with a lack of sufficient processing resources, or in some other way. The example method depicted in FIG. 10 also includes allocating (1004) additional processing resources (418) to the analytics application (422). In the example method depicted in FIG. 10, allocating (1004) additional processing resources (418) to the analytics application (422) may be carried out, for example, by allocating additional physical resources within the storage system (406) for use by the analytics applications (422).
For example, a first computer processor may initially be allocated for use by the analytics application (422) such that the analytics application (422) is executing on the first computer processor. In such an example, a second computer processor may additionally be allocated for use by the analytics application (422) such that the analytics application (422) is executing on both the first computer processor and the second computer processor. Alternatively, allocating (1004) additional processing resources (418) to the analytics application (422) may be carried out by allocating additional virtualized physical resources within the storage system (406) for use by the analytics applications (422). For example, a first set of virtual machines may be initially allocated for use by the analytics application (422) such that the analytics application (422) is executing on the first set of virtual machines. In such an example, a second set of virtual machines may be additionally allocated for use by the analytics application (422) such that the analytics application (422) is executing on both the first set of virtual machines and the second set of virtual machines. Likewise, allocating (1004) additional processing resources (418) to the analytics application (422) may be carried out through the use of containers, such that an analytics application (422) is initially deployed and executed within a first container and a second container is subsequently utilized to support the analytics application (422). The example method depicted in FIG. 10 also includes executing (1006) the analytics application (422) on the additional processing resources (418).
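The threshold-based scale-up of FIG. 10 can be reduced to a small policy function. The high-water mark of 85% and the doubling factor are illustrative defaults, not values taken from the disclosure:

```python
def scaled_allocation(current, utilization, high_water=0.85, factor=2):
    """Return the new number of processing resources for an analytics
    application: grow the allocation by `factor` once observed
    utilization crosses the high-water mark, otherwise keep it."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be a fraction in [0, 1]")
    return current * factor if utilization >= high_water else current
```

Because the dataset lives in shared storage, acting on this policy is cheap: the newly added processors, virtual machines, or containers simply read the same dataset, with no copy or transfer step.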
Readers will appreciate that although the embodiments described above relate to embodiments where instances of the analytics application (422) are executed on multiple processing resources (416,418), in other embodiments different processing resources (416,418) may instead be used to execute various portions of the analytics application (422). For example, a first portion of the analytics application (422) may execute on a first set of processing resources (416) and a second portion of the analytics application (422) may execute on a second set of processing resources (418). Readers will further appreciate that the shared nature of the storage that is utilized by the analytics application (422) results in more efficient scalability, as the application can be scaled up (i.e., more processing resources can be given to the analytics application) without needing to copy the dataset, send the dataset over a network connection, and so on as would be required if the analytics application (422) were executing on a processing node with direct-attached storage where each node maintained its own copy of the dataset. As described above, the analytics application (422) may include artificial intelligence or machine learning components. In fact, the analytics application (422) may be an AI application. Data is the heart of modern AI and deep learning algorithms. Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high quality data points directly translates to more accurate models and better insights.
Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating, including using a holdback portion of the data not used in training in order to evaluate model accuracy on the holdout data. This lifecycle may apply to any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data. Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns—from small, metadata-heavy workloads to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads.
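The five processing steps above can be sketched end to end. The stage functions here are deliberately toy stand-ins for real ingest, transform, and training code; step 3 (exploration) is simply these same functions run against a smaller subset of the data:

```python
import random

def ingest(source):
    """1) Ingest raw samples from an external source, stored as-is."""
    return list(source)

def clean(raw):
    """2) Clean and transform, dropping unusable samples."""
    return [sample for sample in raw if sample is not None]

def holdout_split(data, frac, rng):
    """5) Hold back a fraction of the data, never used in training,
    for evaluating model accuracy."""
    data = data[:]
    rng.shuffle(data)
    cut = int(len(data) * (1 - frac))
    return data[:cut], data[cut:]

def training_batches(data, batch_size, rng):
    """4) Yield random batches of input data for the training phase."""
    data = data[:]
    rng.shuffle(data)
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]
```

The shared-storage argument in the text maps directly onto this sketch: every stage reads from and writes to the same data hub, so no stage needs its own staged copy of the dataset.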
In the first stage, data is ideally ingested and stored onto the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset. Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently. A data scientist works to improve the usefulness of the trained model through a wide variety of approaches: more data, better data, smarter training, and deeper models. In many cases, there will be teams of data scientists sharing the same datasets and working in parallel to produce new and improved training models. Often, there is a team of data scientists working within these phases concurrently on the same shared datasets. Multiple, concurrent workloads of data processing, experimentation, and full-scale training layer the demands of multiple access patterns on the storage tier. In other words, storage cannot just satisfy large file reads, but must contend with a mix of large and small file reads and writes. Finally, with multiple data scientists exploring datasets and models, it may be critical to store data in its native format to provide flexibility for each user to transform, clean, and use the data in a unique way.
The storage systems described above may provide a natural shared storage home for the dataset, with data protection redundancy (e.g., by using RAID6) and the performance necessary to be a common access point for multiple developers and multiple experiments. Using the storage systems described above may avoid the need to carefully copy subsets of the data for local work, saving both engineering time and GPU-accelerated server time. These copies become a constant and growing tax as the raw data set and desired transformations constantly update and change. Readers will appreciate that a fundamental reason why deep learning has seen a surge in success is the continued improvement of models with larger data set sizes. In contrast, classical machine learning algorithms, like logistic regression, stop improving in accuracy at smaller data set sizes. As such, the separation of compute resources and storage resources may also allow independent scaling of each tier, avoiding many of the complexities inherent in managing both together. As the data set size grows or new data sets are considered, a scale out storage system must be able to expand easily. Similarly, if more concurrent training is required, additional GPUs or other compute resources can be added without concern for their internal storage.
Furthermore, the storage systems described above may make building, operating, and growing an AI system easier due to the random read bandwidth provided by the storage systems, the ability of the storage systems to randomly read small files (50 KB) at high rates (meaning that no extra effort is required to aggregate individual data points to make larger, storage-friendly files), the ability of the storage systems to scale capacity and performance as either the dataset grows or the throughput requirements grow, the ability of the storage systems to support files or objects, the ability of the storage systems to tune performance for large or small files (i.e., no need for the user to provision filesystems), the ability of the storage systems to support non-disruptive upgrades of hardware and software even during production model training, and for many other reasons. Small file performance of the storage tier may be critical as many types of inputs, including text, audio, or images will be natively stored as small files. If the storage tier does not handle small files well, an extra step will be required to pre-process and group samples into larger files. Storage, built on top of spinning disks, that relies on SSD as a caching tier, may fall short of the performance needed. Because training with random input batches results in more accurate models, the entire data set must be accessible with full performance. SSD caches only provide high performance for a small subset of the data and will be ineffective at hiding the latency of spinning drives. Readers will further appreciate that in some embodiments of the present disclosure, big data services may be built-in to the shared storage system such that big data analytics, machine learning, artificial intelligence, and other functionality can be offered as a service.
In such an example, big data analytics applications, machine learning applications, artificial intelligence applications, and others may be incorporated into the same (or otherwise accessible) codebase as system software that controls the operation of the storage system, such that the interactions between system hardware, system software, and the additional applications can be optimized. Furthermore, these additional applications can be offered as cogs in an analytics stack to assist users of the storage system in the development and deployment of big data analytics applications, machine learning applications, artificial intelligence applications, and similar applications. Readers will further appreciate that in some embodiments of the present disclosure, idempotent operations may allow for arbitrary reruns and modification of the analytics pipeline. Through the use of orchestration and containerization related concepts described above, a storage system may present a software layer that runs in idempotent chunks such that a hands-off approach to recovery management may be taken. In such an example, if a dependency graph of jobs were in place where each job had some level of idempotency, changes could be made to a job anywhere in the graph and determinations could be made regarding what jobs would need to be rerun to complete recovery. Furthermore, because additional compute resources may be allocated, the system could automate data changes or execute them from a simple form. Readers will further appreciate that in some embodiments of the present disclosure, with the addition of heartbeat events or expected data patterns, a storage system could essentially run continuous testing on a data pipeline, take recovery actions, and rerun steps if heartbeats are missing. 
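The rerun determination described above amounts to a reachability query over the job dependency graph: when one idempotent job changes, the changed job and everything downstream of it must be rerun, and nothing else. A minimal sketch, with a hypothetical graph shape (each job maps to the jobs that consume its output):

```python
from collections import deque

def jobs_to_rerun(downstream, changed):
    """Given a mapping of job -> downstream jobs and a changed job,
    return every job that must be rerun: the changed job plus all
    jobs reachable from it. Idempotency makes these reruns safe to
    repeat arbitrarily."""
    rerun = {changed}
    queue = deque([changed])
    while queue:
        for nxt in downstream.get(queue.popleft(), ()):
            if nxt not in rerun:
                rerun.add(nxt)
                queue.append(nxt)
    return rerun
```

Jobs upstream of, or unrelated to, the change are untouched, which is what makes a hands-off recovery strategy practical: the system can recompute exactly the affected chunk of the pipeline after a failure or a modification.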
Because there are many things that can go wrong when analytics are being performed in an environment that includes many hosts with different network and rack configurations, errors can occur and may be hard to detect. Even if errors are not common, they may be hard to detect and hard to trace back to the root cause. As such, embodiments described herein may add continuous monitoring to the outputs of the pipeline by adding fingerprints to be expected, regular events that are expected to occur, and information may be persisted to capture actual system performance. Once anomalies are found, the storage system may attempt to re-collect data, rerun jobs, issue alerts if anomalies are still detected, and otherwise support a self-healing big data analytics pipeline. Readers will appreciate that although the embodiments described above relate to embodiments where steps may appear to occur according to some order, no ordering is actually required unless explicitly stated. Furthermore, in some embodiments, steps that appear in different figures may actually occur in a single embodiment. That is, the organization of steps that is included above is for ease of explanation, and in no way limits the various embodiments of the concepts described herein. In fact, embodiments of the present disclosure may include any combination of the steps described above and claimed herein. Likewise, embodiments of the present disclosure may be implemented on any of the storage systems, or any combination thereof, described herein. For further explanation, FIG. 11A sets forth a diagram illustrating an example artificial intelligence and machine learning infrastructure (1100) according to some embodiments of the present disclosure. As depicted, the artificial intelligence and machine learning infrastructure (1100) may be embodied or implemented entirely within a single chassis (1101).
In some examples, the chassis (1101) may be implemented according to the dimensions of a standard rack within a data center—where the single chassis (1101) includes the one or more storage systems (1120), such as any of the storage systems described above or any combination of such storage systems, and where the single chassis (1101) may further include one or more GPU systems (1130A-1130N). As one example embodiment, the chassis (1101) may include storage system(s) (1120) implemented as one or more Pure™ FlashBlade™ storage systems of flash storage devices or one or more other types of flash storage devices, and the one or more GPU systems (1130A-1130N) may be implemented as one or more NVIDIA™ DGX-1™ GPU architectures or as one or more other GPU architectures. In this example, the GPU architectures may further include multiple GPUs and one or more CPUs—where the GPU architecture may further include onboard system memory. However, in other examples, different combinations of storage systems and GPU architectures may be implemented as an integrated artificial intelligence and machine learning infrastructure within the single chassis (1101). Further, in some examples, the single chassis (1101) may include one or more length, width, and depth physical dimensions that are smaller or larger than a standard rack size—for example the single chassis (1101) may be a half rack or smaller. In this example, a rack may be about 42 U, or 6 feet (180 cm) in height, where a “U” unit of measure may be defined as 44.50 millimeters (1.752 in.), and where the rack width may be 19 inches (482.60 mm), and where the depth may be 36 inches (914.40 mm). In this embodiment, the height (1102) of the storage system(s) (1120) may be 4 U, where the width (1104) and depth (1106) are defined to fit within the physical dimensions of the chassis (1101). 
Similarly, each of the GPU system(s) (1130A-1130N) may be of the same or different dimensions, where an example height (1108) may be defined to be 1 U or 2 U, and where the width (1110) and depth (1112) may be defined to fit within the physical dimensions of the chassis (1101). For further explanation, FIG. 11B sets forth a diagram illustrating an example computer architecture for implementing an artificial intelligence and machine learning infrastructure (1100) within a single chassis (1101) according to some embodiments of the present disclosure. While in this example, the communication fabric includes a tiered set of network switches (1132A-1132C) for interconnecting the storage system(s) (1120) with the one or more GPU system(s) (1130A-1130N), and for the artificial intelligence and machine learning infrastructure (1100) to communicate with one or more computing devices (1129) over one or more networks (1131), in other implementations, the communication fabric may be architected to define different communication paths between the storage system(s) (1120) and the GPU system(s) (1130A-1130N), and one or more computing devices or host computer systems. In some implementations, the artificial intelligence and machine learning infrastructure (1100) communication fabric may implement a remote direct memory access (RDMA) over Converged Ethernet (RoCE) fabric, where such a communication fabric implements direct memory access from a source computer system to a target computer system without involvement of an operating system on either the source or target computer system—where, depending on the direction of a communication path, the storage system(s) (1120) may be a source or target computer system and the GPU systems (1130A-1130N) may be a source or target computer system.
In this example, given the communication fabric depicted in artificial intelligence and machine learning infrastructure (1100) —where the communication fabric may implement multiple parallel communication channels through each switch (1132A-1132C) —and based on the storage system(s) (1120) including multiple storage devices, where each storage device may include one or more controllers that may each communicate directly with one or more of the GPUs within GPU systems(s) (1130A-1130N), artificial intelligence and machine learning infrastructure (1100) may implement multiple, parallel high-speed communication paths between different combinations of storage devices within the storage system(s) (1120) and computing elements of the GPU system(s) (1130A-1130N). In other example implementations, the communication fabric may implement other network communication protocols, including the communication protocols discussed above with respect to the storage system (340) described in FIGS. 1A-3B, including InfiniBand and iWARP. In some implementations, artificial intelligence and machine learning infrastructure (1100) may be scaled to include additional storage systems or additional GPU systems within the same chassis (1101), where the communication fabric may be similarly scaled to connect the additional storage systems and/or GPU systems via network switches (1132A-1132C). In other cases, the communication fabric may be scaled to include additional network switches or additional tiers to the communication fabric. For further explanation, FIG. 11C sets forth a diagram illustrating an example implementation of an artificial intelligence and machine learning infrastructure software stack (1105) according to some embodiments of the present disclosure.
As depicted inFIG.11C, the artificial intelligence and machine learning infrastructure software stack (1105) may be implemented entirely within the artificial intelligence and machine learning infrastructure (1100) depicted inFIGS.11A and11B. Further, the artificial intelligence and machine learning infrastructure software stack (1105) may include multiple software layers, including a multi-node training (1107A) layer, a deep learning framework (1107B) layer, a containerization (1107C) layer, a scale-out GPU compute (1107D) layer, a scale-out files/object protocol (1107E) layer, and a scale-out storage (1107F) layer, among other potential software layers not depicted inFIG.11C. The multi-node training (1107A) layer may implement a scaling toolkit, or a configuration interface, that provides specifications for multi-node training within the artificial intelligence and machine learning infrastructure (1100). The scaling toolkit may be used to specify configuration settings between the storage system(s) (1120), the GPU systems (1130A-1130N), and network components, including network switches (1132A-1132C) of the communication fabric. The deep learning framework (1107B) layer may implement deep learning frameworks such as Caffe, Caffe2, mxnet, pytorch, torch, among other deep learning frameworks. Further, each deep learning framework implemented at the deep learning framework (1107B) layer may be delivered as a container to the containerization (1107C) layer. Further, the containerization (1107C) layer may implement GPU drivers for communicating with the GPUs of the scale-out GPU compute (1107D) layer, and the containerization (1107C) layer may also implement NVIDIA™ Docker™.
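As a loose illustration, the layered relationships above can be modeled as an ordered list in which each layer exchanges data only with its immediate neighbors; the layer names come from the text, while the neighbor-only interaction model is an assumption made for this sketch:

```python
# Hypothetical model of the software stack of FIG. 11C, ordered top to bottom.
STACK = [
    "multi-node training (1107A)",
    "deep learning framework (1107B)",
    "containerization (1107C)",
    "scale-out GPU compute (1107D)",
    "scale-out file/object protocol (1107E)",
    "scale-out storage (1107F)",
]

def neighbors(layer):
    """Return the layers immediately above and below the given layer,
    i.e., the layers it would expose an API interface to."""
    i = STACK.index(layer)
    above = STACK[i - 1] if i > 0 else None
    below = STACK[i + 1] if i < len(STACK) - 1 else None
    return above, below
```

Under this model, for example, the scale-out GPU compute layer sits between the containerization layer above it and the file/object protocol layer below it.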
The scale-out GPU compute (1107D) layer may be implemented by the GPU systems (1130A-1130N), and the scale-out GPU compute (1107D) layer may provide an interface for assigning jobs, sending or receiving data, adding or removing GPU systems, or for configuring one or more of the GPUs within the GPU systems (1130A-1130N). In some examples, the functionality provided by the scale-out GPU compute (1107D) layer may be provided to layers above and below via an API specifying commands and parameters for each supported functionality for the corresponding layer interface. The scale-out file/object protocols (1107E) layer may provide an API for a logical data handling layer, such as a file system that provides file system operations for creating, deleting, moving, copying, or other standard file system operations. In some examples, the scale-out file/object protocols (1107E) layer may provide block-level access, or data access according to a specified range or ranges of bytes. The scale-out storage (1107F) layer may be implemented by the storage system(s) (1120), and the scale-out storage (1107F) layer may provide an interface for any storage system functionality described above with respect toFIGS.1A-3B, including reading, writing, erasing, or configuring storage device settings, or configuring garbage collection, or for programming the one or more controllers implemented by each of the included storage systems or storage devices. For example, the scale-out storage (1107F) layer may provide an API for performing input/output operations on physical data stored within the memory components of the storage system. In some examples, the scale-out file/object protocol (1107E) layer and the scale-out storage (1107F) layer, individually or in combination, may provide for implementations of a virtual memory environment, memory management, or one or more types of file systems or methods for creating, deleting, copying, reading, or writing files or objects.
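The division of labor between the scale-out file/object protocol (1107E) layer and the scale-out storage (1107F) layer might be sketched as follows, with the file layer mapping names onto byte ranges served by the storage layer; all class and method names here are hypothetical stand-ins, not an actual API:

```python
class ScaleOutStorage:
    """Stand-in for the scale-out storage (1107F) layer: a flat byte
    store addressed by (volume, offset), i.e., range-of-bytes access."""
    def __init__(self):
        self.blocks = {}

    def write_range(self, volume, offset, data):
        self.blocks[(volume, offset)] = bytes(data)

    def read_range(self, volume, offset):
        return self.blocks[(volume, offset)]


class ScaleOutFileProtocol:
    """Stand-in for the scale-out file/object protocol (1107E) layer:
    maps file names onto byte ranges in the storage layer below. The
    mapping scheme (one contiguous range per file) is an assumption."""
    def __init__(self, storage):
        self.storage = storage
        self.index = {}          # file name -> starting offset
        self.next_offset = 0

    def create(self, name, data):
        self.storage.write_range("vol0", self.next_offset, data)
        self.index[name] = self.next_offset
        self.next_offset += len(data)

    def read(self, name):
        return self.storage.read_range("vol0", self.index[name])
```

The point of the sketch is the layering itself: callers above see file-system operations, while the storage layer sees only byte-range input/output.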
For further explanation,FIG.11Dsets forth a flow chart illustrating an example method for interconnecting a graphical processing unit layer and a storage layer of an artificial intelligence and machine learning infrastructure according to some embodiments of the present disclosure. Although depicted in less detail, the example artificial intelligence and machine learning infrastructure (1100) may be similar to the implementations described above with reference toFIGS.11A-11C, or any combination thereof. In this example, a data path may be considered use of one or more protocols for a communication path directly between the scale-out GPU compute (1107D) layer and the scale-out storage (1107F) layer. In other examples, the data path may be considered use of one or more protocols for implementing a communication path between the scale-out GPU compute (1107D) layer, the scale-out files/object protocols (1107E) layer, and the scale-out storage (1107F) layer—where the scale-out GPU compute (1107D) layer communicates to and from the scale-out files/object protocols (1107E) layer via one or more APIs, and where the scale-out files/object protocols (1107E) layer communicates with the scale-out storage (1107F) layer via one or more APIs. While in this example, the data path includes the bottom three layers of the artificial intelligence and machine learning infrastructure software stack (1107D,1107E,1107F), in other examples, the data path may include one or more other software layers, including the multi-node training (1107A) layer, the deep learning framework (1107B) layer, and/or the containerization (1107C) layer. In this example, a definition of a data path may be based on the integration of the software stack as depicted and described above with respect toFIGS.11A-11C. 
For example, the scale-out storage (1107F) layer may be configured to provide an API call that specifies for the scale-out storage (1107F) layer to implement a data transformation or data analysis on stored data—where the result of the API call is a result of the data transformation or data analysis performed by the scale-out storage (1107F) layer, and where the scale-out storage (1107F) layer implements the data analysis or data transformation using one or more controllers for one or more storage devices. In some examples, the API provided by the scale-out storage (1107F) layer may provide data analysis or data transformation functionality or routines that include one or more of: JPEG decode, shuffle, combining files, and/or reshaping matrices/tensors. In general, and in dependence upon the controllers of the storage devices of the storage system(s) (1120) being configured to perform any type of general computing functionality as described above with reference toFIGS.1A-3B, the API provided by the scale-out storage (1107F) layer may provide an API interface for any type of data analysis or data transformation. As one example, the scale-out storage (1107F) layer may provide an API call that instructs the scale-out storage (1107F) layer to select a subset of data that matches a particular category. Further, in some examples, the API provided by the scale-out storage (1107F) layer may include an API call that takes as a parameter function code, or a reference to function code, where one or more controllers of the storage system(s) (1120) of the scale-out storage (1107F) layer may execute the function code to perform a specified data analysis or data transformation. In this way, the scale-out GPU compute (1107D) layer may offload to the scale-out storage (1107F) layer some of the computational tasks that would otherwise be performed by the scale-out GPU compute (1107D) layer.
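A hedged sketch of such an offload-style API is shown below; the routine names (shuffle, combining files, selecting by category) mirror the examples in the text, but the call signatures, the placeholder implementations, and the `call_with_code` variant that accepts function code are illustrative assumptions:

```python
class OffloadAPI:
    """Hypothetical offload API exposed by the storage layer: the GPU
    layer submits a named transformation, or arbitrary function code,
    and a storage controller executes it next to the data."""

    ROUTINES = {
        # Deterministic placeholder standing in for a real shuffle.
        "shuffle": lambda rows: list(reversed(rows)),
        "combine_files": lambda parts: b"".join(parts),
        "select_category": lambda rows, category="": [
            r for r in rows if r.get("label") == category
        ],
    }

    def __init__(self, stored_data):
        self.stored_data = stored_data   # key -> dataset held by the storage layer

    def call(self, routine, key, **params):
        """Run a named routine against stored data; only the result
        travels back to the caller."""
        return self.ROUTINES[routine](self.stored_data[key], **params)

    def call_with_code(self, func, key):
        """Variant that takes function code as a parameter."""
        return func(self.stored_data[key])


api = OffloadAPI({"ds": [{"label": "cat"}, {"label": "dog"}]})
cats = api.call("select_category", "ds", category="cat")
```

Here the category selection runs inside the storage layer, so only the matching subset, rather than the whole dataset, would cross the fabric to the GPUs.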
In some examples, the scale-out storage (1107F) layer may manage a compute cluster so that data analysis and/or data transformation happen under a centralized management plane. In other examples, the scale-out storage (1107F) layer may initiate a data analysis and/or data transformation, or data management operation, without any instruction or command from the scale-out GPU compute (1107D) layer, where the initiation of a data analysis and/or data transformation, or data management operation may be based at least in part on the one or more controllers identifying a pattern within the operations requested from the scale-out GPU compute (1107D) layer via the API. In some examples, a given GPU within the scale-out GPU compute (1107D) layer may communicate directly with a storage device of the scale-out storage (1107F) layer without the intervention of an operating system. In some implementations, the scale-out GPU compute (1107D) layer may make calls to the API of the scale-out files/objects protocols (1107E) layer or the scale-out GPU compute (1107D) layer may make calls directly to the scale-out storage (1107F) layer. Similarly, the scale-out storage (1107F) layer may generate results directly to the system memory of one or more GPUs within the scale-out GPU compute (1107D) layer. For example, the scale-out storage (1107F) layer may write results from an API call directly into a cache or other memory component of one or more GPUs of the scale-out GPU compute (1107D) layer.
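The pattern-identification behavior described above might, under one set of assumptions, look like the following sketch, where a controller watches a sliding window of requested operations and flags a dominant operation worth acting on proactively; the window size and threshold are invented for illustration:

```python
from collections import Counter, deque

class RequestPatternMonitor:
    """Hypothetical controller-side monitor over the operations the GPU
    layer requests via the API. If one operation dominates the recent
    window, the storage layer could initiate that work on its own."""

    def __init__(self, window=8, threshold=4):
        self.recent = deque(maxlen=window)   # sliding window of recent ops
        self.threshold = threshold           # minimum count to call it a pattern

    def record(self, op):
        self.recent.append(op)

    def dominant_op(self):
        """Return an operation worth pre-computing, or None if no
        operation dominates the recent window."""
        if not self.recent:
            return None
        op, count = Counter(self.recent).most_common(1)[0]
        return op if count >= self.threshold else None
```

For example, a run of JPEG-decode requests could prompt the storage layer to begin decoding upcoming images before they are requested.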
As depicted inFIG.11D, the example method includes generating (1152), at a graphical processing unit of a computer system, a function call (1152A) specifying one or more operations to be performed by a storage system of the computer system; transmitting (1154), across a communication fabric of the computer system, the function call (1152A) from the graphical processing unit to the storage system; generating (1156), at the storage system of the computer system and based on the function call (1152A), one or more results (1156A); and transmitting (1158), across the communication fabric, the one or more results (1156A) from the storage system to the graphical processing unit. In this example, the graphical processing unit may be any of the graphical processing units of the GPU system(s) (1130A-1130N), the computer system may be a computer system comprising the artificial intelligence and machine learning infrastructure (1100), and the storage system may be any storage system of the storage systems of storage system(s) (1120). Further, in this example, the artificial intelligence and machine learning infrastructure system (1100) may be operating to perform one or more machine learning tasks received from a cloud AI service (1171) implemented as a cloud service within a cloud services provider (1173A), where the cloud AI service (1171) receives tasks from a host computer (1170) across a network (not depicted), where the tasks may be specified via a user interface provided by the cloud AI service (1171). Further, the artificial intelligence and machine learning infrastructure system (1100) may be implemented within a data center (not depicted) or on site at a client location.
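The four steps of the method can be sketched end to end; the three callables below are illustrative stand-ins for the GPU, the communication fabric, and the storage system controllers, not an actual implementation:

```python
def run_function_call(gpu_make_call, fabric_send, storage_execute):
    """Sketch of the four steps of FIG. 11D: the GPU generates a function
    call, the fabric carries it to the storage system, the storage system
    generates results, and the same fabric carries the results back."""
    call = gpu_make_call()                  # step 1152: generate function call
    delivered = fabric_send(call)           # step 1154: transmit to storage
    results = storage_execute(delivered)    # step 1156: generate results
    return fabric_send(results)             # step 1158: transmit back to GPU


result = run_function_call(
    gpu_make_call=lambda: {"op": "sum", "args": [1, 2, 3]},
    fabric_send=lambda message: message,    # stand-in for switch routing
    storage_execute=lambda call: sum(call["args"]),
)
```

In the toy run above, the storage stand-in performs the summation specified by the call, so only the final result returns to the GPU side.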
Generating (1152), at the graphical processing unit of the computer system, the function call (1152A) specifying one or more operations to be performed by a storage system of the computer system may be implemented as described above with reference toFIGS.11A-11C, where, given a specific task, the GPU identifies a corresponding API call and generates parameters for the API call. Transmitting (1154), across a communication fabric of the computer system, the function call (1152A) from the graphical processing unit to the storage system may be implemented as described above with reference toFIGS.11A-11C, where the function call (1152A) is transmitted across a communication port to a network switch, and where the network switch routes the function call to a network port at the storage system(s) (1120). Generating (1156), at the storage system of the computer system and based on the function call (1152A), one or more results (1156A) may be implemented as described above with reference toFIGS.11A-11C, where one or more controllers on the storage system(s) (1120) may perform the function call according to the operation and parameters specified by the function call. Transmitting (1158), across the communication fabric, the one or more results (1156A) from the storage system to the graphical processing unit may be implemented as described above with reference toFIGS.11A-11C, where the results (1156A) are transmitted across a communication port to a network switch, and where the network switch routes the results (1156A) to a network port at the GPU system(s) (1130A-1130N). For further explanation,FIG.12Asets forth a flow chart illustrating an example method of monitoring an artificial intelligence and machine learning infrastructure (1100) according to some embodiments of the present disclosure.
The artificial intelligence and machine learning infrastructure (1100) described above may include one or more monitoring modules (1202a,1202b,1202n) or may be otherwise coupled to one or more monitoring modules. The monitoring modules (1202a,1202b,1202n) may be embodied, for example, as computer program instructions executing on computer hardware such as a CPU. Such computer program instructions may be stored, for example, within memory that is contained in one or more of the blades that are included within a storage system that is included within the artificial intelligence and machine learning infrastructure (1100) and executed by one or more CPUs that are included within the storage system that is included within the artificial intelligence and machine learning infrastructure (1100). Readers will appreciate that other embodiments are contemplated such as, for example, the one or more monitoring modules (1202a,1202b,1202n) residing within and being executed by a server that is included within the artificial intelligence and machine learning infrastructure (1100), the one or more monitoring modules (1202a,1202b,1202n) residing within and being executed by cloud computing resources that the artificial intelligence and machine learning infrastructure (1100) is in communication with, or in some other way. The example method depicted inFIG.12Aincludes identifying (1203), by the one or more monitoring modules (1202a,1202b,1202n), a bottleneck in an execution pipeline. The execution pipeline may be embodied, for example, as an artificial intelligence or machine learning pipeline in which various stages of executing an artificial intelligence or machine learning application are carried out.
Such an execution pipeline can include, for example, identifying a particular dataset to use as input to the artificial intelligence or machine learning application, reading such a dataset from storage that is contained within the artificial intelligence and machine learning infrastructure (1100), performing a series of transformations to the dataset, running the dataset through a plurality of artificial intelligence or machine learning models, retaining auditing information describing the steps performed and the content of the dataset during the various stages of execution, and many other steps. In the example method depicted inFIG.12A, a bottleneck can occur for a variety of reasons. For example, a bottleneck can occur when insufficient resources are allocated to one portion of the execution pipeline, thereby causing one portion of the execution pipeline to create a bottleneck for the remaining portions of the execution pipeline. Consider an example in which one portion of the execution pipeline includes a series of transformations to the dataset, where each transformation in the series of transformations is performed by a distinct module of computer program instructions. In such an example, assume that when a first module of computer program instructions has completed a first transformation, the first module of computer program instructions sends the transformed data to a second module of computer program instructions which will perform a second transformation. Further assume that when the second module of computer program instructions has completed the second transformation, the second module of computer program instructions sends the transformed data to a third module of computer program instructions which will perform a third transformation. 
In such an example, assume that the second transformation is more complex than the other transformations and further assume that each module of computer program instructions is given an identical amount of processing resources upon which the modules will execute. In such an example, the performance of the second transformation could create a bottleneck as the second transformation may take more time to complete given that it is the most complex transformation and further given that the second module of computer program instructions only has access to the same amount of computing resources as the first module of computer program instructions and the third module of computer program instructions. The example method depicted inFIG.12Aalso includes initiating (1204), by the one or more monitoring modules (1202a,1202b,1202n), reconfiguration of the artificial intelligence and machine learning infrastructure (1100) to resolve the bottleneck in the execution pipeline. Initiating, by the one or more monitoring modules (1202a,1202b,1202n), reconfiguration of the artificial intelligence and machine learning infrastructure (1100) to resolve the bottleneck in the execution pipeline may be carried out, for example, by reallocating resources to resolve the bottleneck in the execution pipeline. Continuing with the example described above, initiating reconfiguration of the artificial intelligence and machine learning infrastructure (1100) to resolve the bottleneck in the execution pipeline may be carried out, for example, by the one or more monitoring modules (1202a,1202b,1202n) allocating additional compute resources to support the execution of the second module of computer program instructions. Readers will appreciate that the example described above is just one of many bottlenecks that can occur and the actions taken to resolve such bottlenecks can take many other forms. 
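One minimal way to express this kind of bottleneck identification and remediation is sketched below; the stage names, the timings, and the remedy of granting the slow stage extra compute units are all illustrative assumptions:

```python
def find_bottleneck(stage_times):
    """Identify the pipeline stage with the longest per-item time,
    i.e., the stage whose transformation dominates the run."""
    return max(stage_times, key=stage_times.get)

def rebalance(allocations, bottleneck, extra_units=1):
    """One possible reconfiguration: grant the bottleneck stage
    additional compute units, leaving other stages unchanged."""
    new = dict(allocations)
    new[bottleneck] += extra_units
    return new

# Illustrative per-item times: the second transformation dominates.
times = {"transform_1": 0.4, "transform_2": 1.9, "transform_3": 0.5}
slow = find_bottleneck(times)
allocs = rebalance({"transform_1": 1, "transform_2": 1, "transform_3": 1}, slow)
```

With equal starting allocations, the sketch singles out the second transformation and would direct additional resources to it, mirroring the remediation described in the text.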
For example, bottlenecks may occur as the result of processing bottlenecks, scheduling bottlenecks, workload allocation and distribution bottlenecks, and many others. As such, the actions taken to resolve such bottlenecks can include splitting a single step into multiple steps and vice versa, changing the manner in which operations are scheduled, moving workloads around to different physical or virtual resources, and so on. The example method depicted inFIG.12Acan also include monitoring (1206) access patterns to one or more of the storage systems contained in the artificial intelligence and machine learning infrastructure (1100). Monitoring (1206) access patterns to one or more of the storage systems contained in the artificial intelligence and machine learning infrastructure (1100) may be carried out, for example, by tracking the location of accesses to the storage systems, by tracking the types of accesses (e.g., reads, writes) to the storage systems, and so on. In such an example, the access patterns to one or more of the storage systems contained in the artificial intelligence and machine learning infrastructure (1100) may be used to gain certain insights into the execution of the artificial intelligence or machine learning pipeline. Consider an example in which a time-series database is being built off of the I/O access patterns of the training data and a time-series database is also being built off of the scheduler and the GPUs. In such an example, this information could be used to determine how to schedule things in a way to make best use of the artificial intelligence and machine learning infrastructure's (1100) resources. In such an example, the artificial intelligence or machine learning pipeline may be represented by a complicated execution graph and a scheduler must decide what to run when. 
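A toy version of the time-series feedback loop described above might look like this, where telemetry points are recorded into a centralized store and a scheduler consults the most recent storage throughput before launching work; the point schema and the throughput threshold are assumptions made for the sketch:

```python
import time

class TimeSeriesDB:
    """Hypothetical centralized time-series store for I/O, scheduler,
    and GPU telemetry, recorded as (series name, timestamp, value)."""
    def __init__(self):
        self.points = []

    def record(self, series, value, ts=None):
        self.points.append((series, ts if ts is not None else time.time(), value))

    def series(self, name):
        return [(ts, v) for s, ts, v in self.points if s == name]

def schedule_hint(db, busy_mbps=800):
    """Toy feedback loop: launch the next training run only when the
    most recent storage read throughput leaves headroom; otherwise
    defer. The threshold is an illustrative assumption."""
    reads = db.series("storage.read_mbps")
    if not reads:
        return "launch"
    return "launch" if reads[-1][1] < busy_mbps else "defer"
```

A scheduler consulting such a store after a first training run would have grounds for better placement and timing decisions on the second run, as described above.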
In such an example, feedback loops from storage, networking, compute, and any other parts of the system stack may be used to inform the scheduler and enable the scheduler to make better scheduling decisions. In fact, all of this information could be maintained in a single centralized time-series database. As such, information from a first training run can be used to make better decisions on a second training run. Readers will appreciate that although depicted as a distinct step, in some embodiments, monitoring (1206) access patterns to one or more of the storage systems contained in the artificial intelligence and machine learning infrastructure (1100) may be part of identifying (1203) a bottleneck in an execution pipeline, as described above. The example method depicted inFIG.12Aalso includes monitoring (1208) data-related aspects of the artificial intelligence or machine learning pipeline. Monitoring (1208) data-related aspects of the artificial intelligence or machine learning pipeline can include not only monitoring whether some data that is needed by one or more of the GPUs is available for use by the GPUs, but also monitoring the nature of the data. For example, during each training run of a particular AI or machine learning model, data may be ingested as training data for the AI or machine learning model. In such an example, monitoring the nature of the data can include, for example, monitoring the training data that is ingested during each training run to identify exceptional data (i.e., data that is dissimilar to training data that was previously received for the AI or machine learning model). In such an example, by monitoring (1208) data-related aspects of the artificial intelligence or machine learning pipeline, changes to the input data to the artificial intelligence or machine learning pipeline can be identified.
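Identifying exceptional data of this kind could be sketched, under simplifying assumptions, as a deviation test against the statistics of previously ingested batches; the choice of statistic (per-batch mean) and the three-sigma threshold are illustrative, not prescribed by the text:

```python
def is_exceptional(sample_mean, history_means, tolerance=3.0):
    """Flag a new training batch whose mean feature value sits more
    than `tolerance` standard deviations from the means of previously
    ingested batches. Statistic and threshold are assumptions."""
    n = len(history_means)
    mu = sum(history_means) / n
    var = sum((m - mu) ** 2 for m in history_means) / n
    sigma = var ** 0.5
    if sigma == 0:
        return sample_mean != mu
    return abs(sample_mean - mu) > tolerance * sigma
```

A batch flagged this way would signal a change in the pipeline's input data worth surfacing to an operator or to downstream validation.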
Readers will appreciate that while the previous sentences relate to the monitoring of training data, in a production environment, data-related aspects of the artificial intelligence or machine learning pipeline may similarly be monitored (1208). The example method depicted inFIG.12Aalso includes creating (1210) auditing information for the artificial intelligence or machine learning pipeline. The auditing information for the artificial intelligence or machine learning pipeline may include, for example, information describing the data that was fed into the artificial intelligence or machine learning pipeline, the source code that was used when executing the artificial intelligence or machine learning pipeline, and so on. Consider an example in which the pipeline is an artificial intelligence pipeline for a self-driving car. In such an example, auditing information may be maintained to capture what data was fed into the artificial intelligence pipeline (e.g., what data was received from the self-driving car's sensors at various points in time), what code was executed to control the operation of the self-driving car, and so on. The auditing information may be created, for example, by applying a hash function to representations of the data and code to create a hash value that captures the data and code, by storing such information in a blockchain, by storing such information in a database, and so on. Readers will appreciate that creating (1210) auditing information for the artificial intelligence or machine learning pipeline may also take advantage of an approach to only retain the deltas each time auditing information is created. For example, if auditing information is created at time 0 and auditing information is subsequently created at time 1, any audit information that has not changed between time 1 and time 0 may not need to be retained.
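The hash-based, delta-retaining approach just described can be sketched as follows; the record layout and field names are hypothetical:

```python
import hashlib

def audit_record(data, code, previous=None):
    """Build an audit record by hashing the data and code fed into the
    pipeline. If a field's hash matches the previous record, mark it as
    unchanged (a pointer back in time) instead of retaining a copy."""
    record = {}
    for field, blob in (("data", data), ("code", code)):
        digest = hashlib.sha256(blob).hexdigest()
        unchanged = bool(previous) and previous[field]["sha256"] == digest
        record[field] = {"sha256": digest, "same_as_previous": unchanged}
    return record

# Time 0 and time 1: the sensor data changes but the control code does not.
t0 = audit_record(b"sensor frame 0", b"def control(): ...")
t1 = audit_record(b"sensor frame 1", b"def control(): ...", previous=t0)
```

In the example run, the unchanged code at time 1 is recorded only as a hash marked identical to the prior record, while the new sensor data is captured as a fresh entry.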
For example, if the code that was used at time 0 is captured in the auditing information for time 0, and such code does not change at time 1, then the code that was used at time 1 need not be included in the auditing information for time 1. In such an example, a pointer or other instrument can be included in the auditing information for time 1 to indicate that the code used at time 1 was identical to the code used at a previous point in time. The example method depicted inFIG.12Aalso includes creating (1212) trending information for the artificial intelligence or machine learning pipeline. The trending information for the artificial intelligence or machine learning pipeline may include, for example, information describing improvements in the models over time, information describing changes to the data that is input into the models over time, and so on. In such an example, the trending information for the artificial intelligence or machine learning pipeline may be used to validate certain models, identify data drift, or be used for a variety of other purposes. In such an example, the trending information for the artificial intelligence or machine learning pipeline may be displayed and presented to a user, for example, via a tool that shows the improvement of a particular model over time. Readers will appreciate that although the embodiment depicted inFIG.12Aillustrates an embodiment where the one or more monitoring modules (1202a,1202b,1202n) reside within the artificial intelligence and machine learning infrastructure (1100), other embodiments can exist. In fact, in an alternative embodiment the one or more monitoring modules (1202a,1202b,1202n) may reside outside of the artificial intelligence and machine learning infrastructure (1100). The one or more monitoring modules (1202a,1202b,1202n) may reside, for example, on one or more remote servers that communicate with one or more artificial intelligence and machine learning infrastructures (1100).
Alternatively, the one or more monitoring modules (1202a,1202b,1202n) may reside within a cloud environment that includes resources that can communicate with one or more artificial intelligence and machine learning infrastructures (1100). In such embodiments, the one or more artificial intelligence and machine learning infrastructures (1100) may periodically send telemetry data to the one or more monitoring modules (1202a,1202b,1202n) that includes, for example, data telemetry, storage telemetry, networking telemetry, compute telemetry, and so on. For further explanation,FIG.12Bsets forth a flow chart illustrating an example method of optimizing an artificial intelligence and machine learning infrastructure (1100) according to some embodiments of the present disclosure. The artificial intelligence and machine learning infrastructure (1100) described above may include one or more optimization modules (1252a,1252b,1252n) or may be otherwise coupled to one or more optimization modules. The optimization modules (1252a,1252b,1252n) may be embodied, for example, as computer program instructions executing on computer hardware such as a CPU. Such computer program instructions may be stored, for example, within memory that is contained in one or more of the blades that are included within a storage system that is included within the artificial intelligence and machine learning infrastructure (1100) and executed by one or more CPUs that are included within the storage system that is included within the artificial intelligence and machine learning infrastructure (1100).
Readers will appreciate that other embodiments are contemplated such as, for example, the one or more optimization modules (1252a,1252b,1252n) residing within and being executed by a server that is included within the artificial intelligence and machine learning infrastructure (1100), the one or more optimization modules (1252a,1252b,1252n) residing within and being executed by cloud computing resources that the artificial intelligence and machine learning infrastructure (1100) is in communication with, or in some other way. The example method depicted inFIG.12Bincludes determining (1254) whether a particular artificial intelligence or machine learning pipeline will fit on a particular artificial intelligence and machine learning infrastructure (1100). Readers will appreciate that multiple artificial intelligence or machine learning pipelines may be executed on a particular artificial intelligence and machine learning infrastructure (1100). Each artificial intelligence or machine learning pipeline that is being executed on a particular artificial intelligence and machine learning infrastructure (1100) will consume resources (e.g., storage, compute, networking). Given that each artificial intelligence and machine learning infrastructure (1100) has finite resources, each artificial intelligence and machine learning infrastructure (1100) cannot support an infinite number of artificial intelligence or machine learning pipelines. As such, a determination (1254) may need to be made as to whether a particular artificial intelligence or machine learning pipeline will fit on a particular artificial intelligence and machine learning infrastructure (1100).
Determining (1254) whether a particular artificial intelligence or machine learning pipeline will fit on a particular artificial intelligence and machine learning infrastructure (1100) may be carried out, for example, by determining an amount of resources that are expected to be required to execute a particular artificial intelligence or machine learning pipeline and determining whether the artificial intelligence and machine learning infrastructure (1100) has an amount of available resources to satisfy the expected demand for resources from the particular artificial intelligence or machine learning pipeline. Readers will appreciate that determining (1254) whether a particular artificial intelligence or machine learning pipeline will fit on a particular artificial intelligence and machine learning infrastructure (1100) can be more complicated than a simple comparison of available resources to expected demand for resources by the particular artificial intelligence or machine learning pipeline. For example, the optimization modules (1252a,1252b,1252n) may take into consideration the performance impact on other artificial intelligence or machine learning pipelines that are currently executing on the particular artificial intelligence and machine learning infrastructure (1100) to determine whether satisfactory performance metrics could be maintained even with the addition of the particular artificial intelligence or machine learning pipeline to the particular artificial intelligence and machine learning infrastructure (1100). 
In such an example, other artificial intelligence or machine learning pipelines that are currently executing on the particular artificial intelligence and machine learning infrastructure (1100) may be subject to various service level agreements, quality of service requirements, and so on that may be violated with the addition of the particular artificial intelligence or machine learning pipeline to the particular artificial intelligence and machine learning infrastructure (1100) —even if the particular artificial intelligence and machine learning infrastructure (1100) could technically support the particular artificial intelligence or machine learning pipeline. Likewise, the particular artificial intelligence or machine learning pipeline may itself have various performance and service requirements/expectations that are attached to the particular artificial intelligence or machine learning pipeline, such that the mere ability to support the execution of the particular artificial intelligence or machine learning pipeline may be insufficient. Readers will further appreciate that trending information, including the expected increase or decrease in resource consumption of the particular artificial intelligence or machine learning pipeline, as well as the expected increase or decrease in resource consumption of the other artificial intelligence or machine learning pipelines that are currently executing on the particular artificial intelligence and machine learning infrastructure (1100) may be taken into consideration when determining (1254) whether a particular artificial intelligence or machine learning pipeline will fit on a particular artificial intelligence and machine learning infrastructure (1100). In such a way, the determination (1254) may be forward looking and avoid a predictable exhaustion of resources. 
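A simplified version of this fit determination is sketched below, where a headroom fraction stands in for the service level and quality-of-service constraints discussed above; the resource names and numbers are illustrative:

```python
def pipeline_fits(required, available, running, headroom=0.1):
    """Hypothetical fit check: the new pipeline fits only if, for every
    resource, its expected demand plus the demand of pipelines already
    running stays within the available capacity minus a headroom
    fraction (a stand-in for SLA / QoS constraints)."""
    for resource, need in required.items():
        in_use = sum(p.get(resource, 0) for p in running)
        if need + in_use > available[resource] * (1 - headroom):
            return False
    return True

# Illustrative check: 4 more GPUs and 20 TB on a partially loaded system.
fits = pipeline_fits(
    required={"gpu": 4, "storage_tb": 20},
    available={"gpu": 16, "storage_tb": 100},
    running=[{"gpu": 8, "storage_tb": 40}],
)
```

A forward-looking variant, as suggested above, would replace the static `running` demands with trended projections of each pipeline's future consumption.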
Readers will further appreciate that determining (1254) whether a particular artificial intelligence or machine learning pipeline will fit on a particular artificial intelligence and machine learning infrastructure (1100) may be of particular interest in embodiments where a cluster of artificial intelligence and machine learning infrastructures (1100) is available. In such an example, although a plurality of the artificial intelligence and machine learning infrastructures (1100) may be able to support the execution of the particular artificial intelligence or machine learning pipeline, a best fit analysis may be performed to identify the artificial intelligence and machine learning infrastructure (1100) that may best support the particular artificial intelligence or machine learning pipeline. In such a way, load balancing objectives may be met, higher service levels may be afforded to the other artificial intelligence or machine learning pipelines that are currently executing on the cluster of artificial intelligence and machine learning infrastructures (1100), and so on. The example method depicted in FIG. 12B also includes, responsive to affirmatively determining that the particular artificial intelligence or machine learning pipeline will fit on the particular artificial intelligence and machine learning infrastructure (1100), initiating (1256) execution of the particular artificial intelligence or machine learning pipeline on the particular artificial intelligence and machine learning infrastructure (1100). Readers will appreciate that in embodiments where a cluster of artificial intelligence and machine learning infrastructures (1100) is available, execution of the particular artificial intelligence or machine learning pipeline may be initiated (1256) on a particular artificial intelligence and machine learning infrastructure (1100) that was selected using a best fit analysis.
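For illustration only, one way such a fit determination and best fit analysis might be sketched is shown below. The resource names, the dictionary representation, and the "least leftover headroom" scoring rule are assumptions made for the sketch and are not taken from the disclosure, which also weighs service level agreements and trending information as described above.

```python
# Hypothetical sketch of determining (1254) whether a pipeline fits,
# and of a best-fit selection across a cluster of infrastructures.

def fits(available, required):
    """A pipeline fits only if every required resource is available."""
    return all(available.get(r, 0) >= need for r, need in required.items())

def best_fit(infrastructures, required):
    """Among infrastructures that fit, pick the one with the least
    leftover headroom (a classic best-fit heuristic), which can help
    meet load balancing objectives."""
    candidates = [
        (name, sum(avail[r] - required[r] for r in required))
        for name, avail in infrastructures.items()
        if fits(avail, required)
    ]
    if not candidates:
        return None  # the pipeline will not fit anywhere in the cluster
    return min(candidates, key=lambda c: c[1])[0]

# Invented example cluster and pipeline requirements.
cluster = {
    "infra-a": {"gpus": 8, "storage_tb": 50},
    "infra-b": {"gpus": 2, "storage_tb": 10},
}
pipeline_needs = {"gpus": 2, "storage_tb": 5}
```

In this sketch, "infra-b" would be selected because it satisfies the requirements with the least spare capacity, leaving "infra-a" free for larger pipelines.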
The example method depicted inFIG.12Balso includes determining (1258) an estimated time for completion for a particular artificial intelligence or machine learning job. Determining (1258) an estimated time for completion for a particular artificial intelligence or machine learning job may be carried out, for example, by estimating an amount of time required to complete a particular artificial intelligence or machine learning job in view of the amount of resources that may be made available for use by the particular artificial intelligence or machine learning job. In such an example, users in a multi-tenant environment may even be provided with the estimated time for completion for a particular artificial intelligence or machine learning job, so that a user may determine whether to actually submit the particular artificial intelligence or machine learning job. Likewise, the estimated time for completion for a particular artificial intelligence or machine learning job may be given to a scheduler or other module of computer program instructions that can gather such information from a plurality of artificial intelligence and machine learning infrastructures (1100) (e.g., in a clustered environment) in order to identify which particular artificial intelligence and machine learning infrastructure (1100) the particular artificial intelligence or machine learning job should be submitted to. The example method depicted inFIG.12Balso includes determining (1260) the extent to which one or more artificial intelligence or machine learning models are improving over time. Determining (1260) the extent to which one or more artificial intelligence or machine learning models are improving over time may be carried out, for example, through the use of trending information for a particular artificial intelligence or machine learning job. 
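A minimal sketch of determining (1258) an estimated time for completion is given below, assuming the simplest possible model: remaining work divided by the throughput of the resources allocated to the job. The units, field names, and the scheduler helper are illustrative assumptions, not details from the disclosure.

```python
# Hypothetical completion-time estimate for a job, plus a helper a
# scheduler might use to pick among several infrastructures' estimates.

def estimate_completion_seconds(remaining_work_units,
                                units_per_second_per_gpu,
                                gpus_allocated):
    """Estimate seconds to completion given allocated resources."""
    if gpus_allocated <= 0 or units_per_second_per_gpu <= 0:
        raise ValueError("job cannot make progress with no resources")
    return remaining_work_units / (units_per_second_per_gpu * gpus_allocated)

def pick_fastest(estimates):
    """estimates: mapping of infrastructure name -> estimated seconds.
    A scheduler gathering estimates from a cluster could submit the
    job to whichever infrastructure reports the lowest estimate."""
    return min(estimates, key=estimates.get)
```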
In fact, determining (1260) the extent to which one or more artificial intelligence or machine learning models are improving over time can include performing things like A/B testing between different models or transformations, performing canary testing to quickly and automatically verify that everything that a particular model depends on is ready before other time-consuming tests are conducted, and so on. In fact, in the context of canary testing, a deeply learned model may be used that predicts whether a learned model will pass A/B testing based on a history of previous A/B tests, particularly for a continuous integration pipeline. In such an example, weighted scores may be created to show if the output is likely to pass. Through the use of such techniques, historical trending of various models may be maintained and tracked such that the details and outcomes of steps in a pipeline may be maintained. The example method depicted in FIG. 12B also includes generating (1262) model recommendations. Readers will appreciate that, in view of the fact that many artificial intelligence or machine learning pipelines may be executed on a single artificial intelligence and machine learning infrastructure (1100) and further in view of the fact that multiple artificial intelligence and machine learning infrastructures (1100) may be included in a single cluster, a substantial amount of information related to the execution of artificial intelligence or machine learning pipelines may be available. Such information may be mined to identify, for example, models that worked well on various datasets, transformations that led to improvements for a particular pipeline and dataset, and so on. As such, model recommendations may be generated (1262) to recommend that a particular model be altered in some particular way, that particular transformations be excluded from or included in a particular pipeline, that transformations be modified in some way, and so on.
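One illustrative way to quantify whether a model is improving over time is to fit a least-squares line through its historical evaluation scores and inspect the slope. This is only a sketch of the trending idea described above; the metric, the linear model, and the function name are assumptions for illustration.

```python
# Hypothetical trend measure: positive slope suggests the model is
# improving across runs, negative slope suggests it is regressing.

def improvement_slope(scores):
    """scores: evaluation metric per training run, oldest first.
    Returns the least-squares slope of metric versus run index."""
    n = len(scores)
    if n < 2:
        return 0.0  # not enough history to establish a trend
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(scores) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var
```

A weighted score like the one described for canary testing could combine such a slope with the outcomes of previous A/B tests.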
In the example method depicted in FIG. 12B, generating (1262) model recommendations may be carried out through the use of fingerprints or similar mechanisms that describe various aspects of a particular artificial intelligence or machine learning pipeline, the data ingested by the particular artificial intelligence or machine learning pipeline, and so on. In such a way, recommendations may only be generated based on information gathered from artificial intelligence or machine learning pipelines and datasets with similar fingerprints. For example, if a particular transformation was particularly useful in an image recognition machine learning pipeline that ingested images with certain characteristics, such a transformation may only be recommended for owners of other image recognition machine learning pipelines that ingest images with similar characteristics, whereas such a recommendation would not be generated for a speech processing artificial intelligence pipeline. Readers will appreciate that such recommendations could be anonymized so as to shield another user's data, specific information about their model, and so on. In the example method depicted in FIG. 12B, embodiments may make use of auto-indexing techniques through which the artificial intelligence and machine learning infrastructure (1100) can, for example, generate vectors for data to quickly and effectively index and understand large amounts of data. Such auto-indexing techniques may be used to identify cold data that should be tiered off of the artificial intelligence and machine learning infrastructure (1100), to migrate data to a cache (e.g., for data that is being heavily used), and so on.
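The fingerprint-based filtering described above might be sketched as follows, assuming for illustration that a fingerprint is a plain numeric feature vector and that similarity is judged by cosine similarity against a fixed threshold; the disclosure does not fix a fingerprint representation, so these are assumptions of the sketch.

```python
# Hypothetical fingerprint comparison: a recommendation only transfers
# between pipelines whose fingerprints are sufficiently similar.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def similar_pipelines(target, candidates, threshold=0.9):
    """Return names of candidate pipelines whose fingerprint is close
    enough to the target's for recommendations to be surfaced."""
    return [name for name, fp in candidates.items()
            if cosine_similarity(target, fp) >= threshold]
```

Under this sketch, a transformation learned on one image recognition pipeline would be recommended to another image pipeline with a nearby fingerprint, but not to a speech pipeline whose fingerprint points elsewhere.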
Through the use of such auto-indexing techniques, insights into the content of the data may cause the artificial intelligence and machine learning infrastructure (1100) to automatically tier some less useful data to slower storage as part of a migration process, rather than migrating the data and subsequently determining that the data that has already been stored in the artificial intelligence and machine learning infrastructure (1100) should be tiered away. The example method depicted in FIG. 12B also includes tuning (1212) an artificial intelligence or machine learning pipeline. In the example method depicted in FIG. 12B, tuning (1212) an artificial intelligence or machine learning pipeline may be carried out, for example, in a manner that is automated and/or predictive based on an examination of the workloads placed on the artificial intelligence and machine learning infrastructure (1100) as well as the attributes of one or more artificial intelligence or machine learning pipelines. For example, the ratios of compute-to-storage may be modified based on characteristics of the workload, and pipelines could be rebalanced based on an identification of bottlenecks (e.g., a bottleneck is identified, a solution is identified indicating that additional stream-processing servers are needed, and additional stream-processing servers are automatically spun up). Likewise, workloads or pipelines could be moved around and various other actions could be taken to tune (1212) the artificial intelligence or machine learning pipeline. Embodiments of the artificial intelligence and machine learning infrastructure (1100) may also make use of a job scheduler and a resource management tool that can reside within the storage system(s) that are contained in the artificial intelligence and machine learning infrastructure (1100).
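The bottleneck-driven rebalancing example above might be sketched as follows. The stage names, throughput figures, and one-server-at-a-time scaling rule are invented for illustration; a real tuning (1212) step would draw on the workload examination described in the disclosure.

```python
# Illustrative automated tuning step: find the pipeline stage with the
# lowest throughput (the bottleneck) and spin up one more server for it,
# e.g. an additional stream-processing server.

def find_bottleneck(stage_throughput):
    """stage_throughput: mapping of stage name -> items/sec."""
    return min(stage_throughput, key=stage_throughput.get)

def rebalance(stage_throughput, per_server_throughput, servers):
    """Add one server to the bottleneck stage and return the updated
    server counts and stage throughputs."""
    stage = find_bottleneck(stage_throughput)
    servers = dict(servers)
    servers[stage] += 1
    stage_throughput = dict(stage_throughput)
    stage_throughput[stage] += per_server_throughput[stage]
    return servers, stage_throughput
```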
In such an embodiment, the storage system(s) may be responsible for managing the scheduling of jobs to the GPU and other types of resource management, where such management is carried out by the storage system(s) under a single management plane. Furthermore, such management may be carried out in an automated fashion, including automated scheduling based on various factors (e.g., the influx of some data, data contents, and so on). For example, pre-merge tests may examine what code has changed and run tests based on those changes. Furthermore, the storage system(s) could implement management by making decisions such as, for example, selecting a particular dataset to train against, the appropriate interval to run tests and continuously re-train with new data, and so on. In some embodiments, a storage system or other management entity within the artificial intelligence and machine learning infrastructure (1100) may also implement automated training with continuous learning based on some triggers (e.g., new data, exceptional data). Furthermore, auto-indexing could be used to identify the particular categories of data within a dataset. For example, a user of an image processing pipeline may want to train against images of dogs and cats, with no understanding that the dataset actually includes images of dogs, cats, birds, worms, and so on. An automated indexing solution, however, would detect each of the categories of data actually contained within the dataset. In some embodiments, a storage system or other management entity within the artificial intelligence and machine learning infrastructure (1100) may also implement the real-time coordination of workflows.
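The category-detection aspect of auto-indexing can be illustrated with a trivially simplified sketch: tally the categories actually present in a dataset's metadata so that a user expecting only dogs and cats discovers the birds and worms as well. The per-object metadata layout and the "category" key are assumptions made for the sketch.

```python
# Hypothetical auto-indexing sketch: surface the categories actually
# present in a dataset, including ones the user did not expect.

from collections import Counter

def index_categories(dataset_metadata):
    """dataset_metadata: iterable of per-object metadata dicts."""
    return Counter(meta.get("category", "unknown") for meta in dataset_metadata)

def unexpected_categories(dataset_metadata, expected):
    """Categories found in the dataset that the user did not anticipate."""
    found = index_categories(dataset_metadata)
    return sorted(set(found) - set(expected))
```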
Readers will appreciate that the artificial intelligence and machine learning infrastructure (1100) does not just execute artificial intelligence and machine learning pipelines, as the artificial intelligence and machine learning infrastructure (1100) may also run message queue systems, data cleansing modules, and so on. As such, the artificial intelligence and machine learning infrastructure (1100) may be configured to handle the coordination of all of the resources under a single management plane. For further explanation, FIG. 13 sets forth a flow chart illustrating an example method of storage system query processing within an artificial intelligence and machine learning infrastructure (1100) according to some embodiments of the present disclosure. The artificial intelligence and machine learning infrastructure (1100) described above may include one or more storage system(s) (1120) and one or more computing devices, such as one or more GPU system(s) (1130). Traditional machine learning frameworks access storage by using a file system application programming interface provided by an operating system, where the operating system is layered on top of a physical storage device or physical storage system. Further, this traditional configuration—where an operating system is layered on top of a storage system—makes use of traditional storage system operations, such as reading, writing, erasing, etc. By contrast to traditional machine learning frameworks, the artificial intelligence and machine learning infrastructure (1100) depicted within FIG. 13 implements a storage system, or storage systems (1120), that are configured with an application programming interface (API) that is directly accessible to a machine learning framework operating on one or more GPU systems (1130).
Further, the API provided by the storage system (1120) provides more than simply a standard array of traditional storage system operations; the API provided by the storage system (1120) is configured to provide a full range of query functionality that enables queries that operate on metadata describing one or more attributes of stored data. As an example of the API functionality, the storage system(s) (1120) may support queries structured as database queries, such as:
Example 1: "select pathname from VOLUME where pathname starts with PREFIX"
Example 2: "select pathname, size from VOLUME where pathname starts with PREFIX and size >1 GB sort by size descending"
Example 3: "select sum(size), owner from VOLUME group by owner sort by size ascending"
In example 1, the "select" query is parsed and interpreted to retrieve all files with a root directory of PREFIX from a file system location specified by VOLUME—where PREFIX may specify a portion of a directory path, and where the VOLUME may include specifications for one or more of a particular storage system, file system, volume, or more generally, an indication of an address or location that corresponds to a particular file system or memory address space. In this example, where the storage systems (1120) include one or more Pure™ FlashBlade™ storage systems, VOLUME may be "flashblade1://vip/file_system_name" and PREFIX may be "test1/results/". Example 2, similar to example 1, when received by a storage system controller or by a query processing module implemented by software and/or hardware within the one or more storage systems (1120), retrieves files from VOLUME with a root directory of PREFIX, but in this second example, additional parameters further specify attributes of files to be retrieved, where the additional parameters include a size of a data file specified to be greater than 1 gigabyte, and a parameter specifying that results of the query are to be sorted in descending order.
Example 3 depicts an example query that selects all files stored at VOLUME, where the additional parameters instruct the storage system to process the query results of all files at VOLUME by grouping the files by an “owner” attribute, and further where—for each set of files owned by a given owner—a sum is calculated, thereby producing a list of all owners of files within VOLUME where the list shows, for each owner, a sum of file storage sizes for all files owned, and where the list of owners is in ascending order according to their respective sum of file storage sizes. As another example, the API implemented by the storage system(s) (1120) may provide a call to enumerate a collection of file objects, or data objects, stored within a given file system, or within a particular directory of a file system. In this way, the storage system(s) (1120) may perform the computational workload that would otherwise be performed by one or more of the GPU systems (1130), which in the case of millions of data objects, quickly becomes a significant amount of processing time. In some examples, where the storage system(s) (1120) provide an accessible file system without an intervening operating system layer, the storage system(s) (1120) may be configured as a multi-path client, where multiple ones of the GPU system(s) (1130) may then concurrently access data stored on the storage system(s) (1120). More generally, for multiple data objects stored within storage system (1120), and for any given set of data object attributes described with corresponding metadata for the given data object, a query may specify parameters, commands, attributes, and logical operators that, applied in combination, select a subset of data objects from among all of the multiple data objects such that the metadata for each of the subset of data objects satisfies the query specifications. 
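For illustration, the effect of examples 2 and 3 above can be sketched as plain filtering and aggregation over per-file metadata records. The record layout (dicts with "pathname", "size", and "owner" keys) is an assumption of the sketch; the disclosure does not fix how the query processing module represents file metadata internally.

```python
# Hypothetical evaluation of example 2 ("pathname starts with PREFIX
# and size > 1 GB, sort by size descending") and example 3 (sum of
# sizes grouped by owner, ascending) against file metadata records.

GIB = 1024 ** 3

def query_example_2(files, prefix):
    """files: iterable of dicts with 'pathname' and 'size' keys."""
    matches = [f for f in files
               if f["pathname"].startswith(prefix) and f["size"] > GIB]
    return sorted(matches, key=lambda f: f["size"], reverse=True)

def query_example_3(files):
    """Return (owner, total size) pairs, ascending by total size."""
    totals = {}
    for f in files:
        totals[f["owner"]] = totals.get(f["owner"], 0) + f["size"]
    return sorted(totals.items(), key=lambda kv: kv[1])
```

Running such selection and aggregation inside the storage system(s) (1120), rather than on the GPU systems (1130), is precisely the computational offload described above.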
In this way, the storage system (1120) API may support multiple types of queries for selecting data objects from among stored data objects. The example method depicted inFIG.13includes storing (1302), at storage system(s) (1120), multiple data objects (1352a-1352n) that respectively include data and metadata, where the metadata describes one or more attributes of the data. As an example implementation, storage system(s) (1120) may store data objects (1352a-1352n), where each given data object includes respective data and respective metadata, where as depicted inFIG.13, data object (1352a) includes data (1360a) and metadata (1362a), data object (1352b) includes data (1360b) and metadata (1362b), and so on, until data object (1352n), which includes data (1360n) and metadata (1362n), where n is an arbitrary value limited only by an amount of available storage space within storage system(s) (1120). The example method depicted inFIG.13also includes receiving (1304), at the storage system(s) (1120) from a computing device, a query (1354) that specifies one or more attributes of data. Receiving (1304), at the storage system(s) (1120) from a computing device, the query (1354) that specifies the one or more attributes of data may be implemented by a query processing module, or a controller, receiving, over a communication fabric, a message that includes the query (1354) and one or more parameters such that the message conforms to an API provided by the storage system(s) (1120) as described above. Further, in this example, the computing device may be a GPU from among the GPU system(s) (1130); however, in other examples, the computing device may be a general processing CPU. The communication fabric, not depicted, may be a collection of connected network devices configured to implement a communication network, as described above with reference toFIGS.11A-11D. 
The example method depicted in FIG. 13 also includes generating (1306), at the storage system(s) (1120), a dataset (1356) that includes one or more of the multiple data objects (1352a-1352n) such that each data object in the dataset (1356) shares the one or more attributes of data specified by the query (1354). Generating (1306), at the storage system(s) (1120), the dataset (1356) that includes the one or more of the multiple data objects (1352a-1352n) such that each data object in the dataset (1356) shares the one or more attributes of data specified by the query (1354) may be implemented by a query processing module or controller of the storage system(s) (1120) searching through an index of every stored data object, where the metadata for each given data object is accessed to determine whether the attributes of the corresponding data, as described by the metadata, satisfy the query in accordance with the one or more attributes of data, where the one or more attributes of data may be parameters of the received (1304) message. In this way, for each data object that satisfies the query, the data object may be added to the dataset (1356)—where the addition of the data object may be implemented through creation of metadata corresponding to the dataset (1356) that references each data object that has been added to the dataset (1356). After iterating through each stored data object, the dataset (1356) may be defined and ready to transmit. In some examples, partial results may be transmitted in response to one or more portions of the results being generated prior to completion of the entire dataset (1356). The example method depicted in FIG. 13 also includes transmitting (1308), from the storage system(s) (1120) to the computing device, the dataset (1356) of the one or more of the multiple data objects.
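The generation step (1306) can be illustrated with a minimal sketch in which the dataset is built as metadata referencing the matching objects rather than copying their data, matching the description above. The object-id mapping and exact-match attribute comparison are simplifying assumptions; a real query processing module would support the richer predicates of the API.

```python
# Hypothetical sketch of generating (1306) a dataset: iterate over the
# metadata of every stored object and collect references to objects
# whose attributes satisfy the query's attribute constraints.

def matches(metadata, wanted_attributes):
    """True if the object's metadata satisfies every queried attribute."""
    return all(metadata.get(k) == v for k, v in wanted_attributes.items())

def generate_dataset(objects, wanted_attributes):
    """objects: mapping of object id -> metadata dict. Returns dataset
    metadata that references matching objects instead of copying data."""
    return {"query": wanted_attributes,
            "members": [oid for oid, meta in objects.items()
                        if matches(meta, wanted_attributes)]}
```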
Transmitting (1308), from the storage system(s) (1120) to the computing device, the dataset (1356) of the one or more of the multiple data objects (1352a-1352n) may be implemented by a query processing module, or a controller, transmitting, over a communication fabric, a message that includes the dataset (1356) to the computing device, which, as described above, in this example may be one or more of the GPU system(s) (1130). The communication fabric is described above with reference to receiving (1304) the query (1354) and with further reference to FIGS. 11A-11D. For further explanation, FIG. 14 sets forth a flow chart illustrating an example method of storage system query processing within an artificial intelligence and machine learning infrastructure (1100) according to some embodiments of the present disclosure. The example method depicted in FIG. 14 is similar to the example method depicted in FIG. 13, as the example method depicted in FIG. 14 also includes storing (1302), at storage system(s) (1120), multiple data objects (1352a-1352n) that respectively include data and metadata, where the metadata describes one or more attributes of the data; receiving (1304), at the storage system(s) (1120) from a computing device, a query (1354) that specifies one or more attributes of data; generating (1306), at the storage system(s) (1120), a dataset (1356) that includes one or more of the multiple data objects (1352a-1352n) such that each data object in the dataset (1356) shares the one or more attributes of data specified by the query (1354); and transmitting (1308), from the storage system(s) (1120) to the computing device, the dataset (1356) of the one or more of the multiple data objects. However, the example method depicted in FIG. 14 further includes, in response to receiving a function (1452) specifying one or more parameters and an indication to cache results, caching (1402) results of applying the function (1452) to the one or more of the multiple data objects.
Receiving the function (1452) may be implemented by receiving a message across the communication fabric as described above with reference to FIG. 13, where the message is in accordance with the supported API provided by the storage system(s) (1120). Further, in response to receiving the function (1452), a controller or query processing module of the storage system(s) (1120) may identify one or more computational operations, or storage system operations, to perform the specified function (1452) in accordance with the one or more parameters for the function (1452)—where, responsive to the indication to cache the results, the query processing module or controller of the storage system(s) (1120) may cache the results of performing the function (1452). As one example, the function (1452) may be a JPEG decode, shuffle, file combination, reshaping matrices/tensors, among any other general function for transforming data for use by a machine learning system. In this example, the indication may be an API parameter that, when present, instructs the storage system(s) (1120) to cache results of the query or function being passed. Given such an indication, the storage system(s) (1120) may cache the results (1454), and track any modifications to the one or more data objects which served as the basis for the application of the function—where if the one or more data objects remain unchanged, then the cached results (1454) remain valid, and if the one or more data objects are modified in a manner that would change the results (1454), then the cached results (1454) are invalidated or flushed. In other examples, the storage system(s) (1120) may determine a pattern of queries or access to datasets, and predictively determine to cache corresponding datasets.
For example, the storage system(s) (1120) may recognize a sequence of operations that correspond to the beginning of a machine learning training session, and predict, based on one or more queries or accesses for data for the training session, that one or more dataset results may be subsequently requested, and in response, cache the one or more dataset results. In some implementations, the cached results (1454) may include a duplicate copy of the one or more data objects. However, in other examples, the cached results (1454) may be metadata that references the one or more data objects that are included in the cached results (1454). Further, in some examples, the cached results (1454) may be updated dynamically—where a controller of the storage system may maintain or store an association between cached results and the underlying one or more data objects from which the cached results were generated such that in the event that a modification is made to the one or more data objects, the controller of the storage system re-applies a query, or function, used to generate the existing cached results to generate an updated set of cached results based on the modified one or more data objects. In this way, if a query or function is received at some later point in time, then the storage system may have results corresponding to the received query or function available within the cache, without accessing the stored one or more data objects and without generating the results of the query or function in response to receiving the query or function. As an example, the storage system may receive a query, such as the queries described above, where the query requests a sum of storage space for a particular user, and the query may also include a flag indicating to keep the results updated—in response, the storage system may update the query results responsive to each change in size to files for the particular user.
In other implementations, the API provided by the storage system may provide for input indicating information describing a rule or event that may be used as a basis for generating and caching a results set. For example, a query or function call may be received, and the query or function call may indicate information that updated results will be requested multiple times, where the indication may further specify one or more of a periodicity of request, a number of expected future requests, a window of time after which no further updates need to be generated, or a general indication of a schedule that may include a schedule of times at which specific events are to be performed. In other examples, the query or function call may specify a rule that defines a threshold value such that if the threshold value is exceeded, then a particular event may occur. In this way, for example, if a GPU is using a TensorFlow library, the GPU may provide an indication to the storage system that a given query or function may be expected repeatedly in the future, allowing the storage system to schedule work in advance, thereby allowing the GPU to maintain a full buffer of results without any delays from the storage system. The example method of FIG. 14 also includes receiving (1404), via the API, another invocation of the function (1452) and the one or more parameters—where the function (1452) is applied to the same one or more data objects as the previous invocation of the function (1452). Receiving (1404), via the API, the other or additional invocation of the function (1452) and the one or more parameters may be implemented as discussed above with regard to receiving the function (1452) the first time. The example method of FIG. 14 also includes, in response to determining that the one or more of the multiple data objects have not changed, transmitting the cached results (1454) of applying the function (1452) to the one or more of the multiple data objects.
Determining that the one or more of the multiple data objects have not changed may be implemented by the storage system(s) (1120) checking to see if previously cached results are valid, where the storage system(s) (1120) may determine validity by tracking whether any modifications to any stored data objects affect any of the data objects from which any cached results have been previously generated. In this example, if the cached results (1454) are valid, then the storage system(s) (1120) may transmit (1406) the cached results (1454) instead of generating the results by applying the function to the one or more of the data objects. Transmitting (1406) the cached results (1454) may be implemented as described above with reference to transmitting (1308) a dataset. In this way, if a function is applied to a dataset, and the dataset is immutable or has not changed, then the storage system(s) (1120) may avoid re-computing the requested results by using previously cached results. For further explanation, FIG. 15 sets forth a flow chart illustrating an example method of storage system query processing within an artificial intelligence and machine learning infrastructure (1100) according to some embodiments of the present disclosure.
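The caching and invalidation behavior of FIG. 14 can be illustrated with the following sketch, in which results are cached under a key of (function name, parameters, object ids) and any write to an underlying object flushes the dependent cached entries. The class and key structure are assumptions for the sketch, not the disclosed implementation.

```python
# Hypothetical result cache with per-object invalidation tracking.

class ResultCache:
    def __init__(self):
        self._cache = {}      # key -> cached results
        self._depends = {}    # object id -> set of dependent keys

    def _key(self, fn_name, params, object_ids):
        return (fn_name, params, tuple(sorted(object_ids)))

    def put(self, fn_name, params, object_ids, results):
        """Cache results of applying fn_name(params) to the objects."""
        key = self._key(fn_name, params, object_ids)
        self._cache[key] = results
        for oid in object_ids:
            self._depends.setdefault(oid, set()).add(key)

    def get(self, fn_name, params, object_ids):
        """Return cached results, or None if absent or invalidated."""
        return self._cache.get(self._key(fn_name, params, object_ids))

    def invalidate_object(self, object_id):
        """Called when a data object is modified; flushes dependent results."""
        for key in self._depends.pop(object_id, set()):
            self._cache.pop(key, None)
```

On a second invocation of the same function over unchanged objects, `get` returns the cached results without re-applying the function; modifying any underlying object invalidates them, mirroring the validity tracking described above.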
The example method depicted in FIG. 15 is similar to the example method depicted in FIG. 13, as the example method depicted in FIG. 15 also includes storing (1302), at storage system(s) (1120), multiple data objects (1352a-1352n) that respectively include data and metadata, where the metadata describes one or more attributes of the data; receiving (1304), at the storage system(s) (1120) from a computing device, a query (1354) that specifies one or more attributes of data; generating (1306), at the storage system(s) (1120), a dataset (1356) that includes one or more of the multiple data objects (1352a-1352n) such that each data object in the dataset (1356) shares the one or more attributes of data specified by the query (1354); and transmitting (1308), from the storage system(s) (1120) to the computing device, the dataset (1356) of the one or more of the multiple data objects. However, the example method depicted in FIG. 15 further includes caching (1502), at the storage system(s) (1120), the dataset (1356). Caching (1502) the dataset (1356) may be implemented by the storage system(s) (1120) responding to a parameter provided with the query (1354), where the parameter indicates to the storage system(s) (1120) that the results of the query are to be cached. Further, caching (1502) the dataset (1356) may include the storage system(s) (1120) creating an index of metadata that correlates queries with dataset results, and where the query metadata further includes indications of whether or not any of the data objects included within the dataset have been modified since the query was processed. Further, the storage system(s) (1120) may, in response to receiving operations that modify any stored data objects, refer to the index of metadata to update the metadata to indicate whether a modification results in invalidation of a given cached dataset.
For example, a deduplication operation may result in a modified data object, but without any modification of the underlying stored data, and consequently, the deduplication operation would not invalidate the cached datasets that include the deduplicated data object. However, if a data object is at least partially overwritten, then the cached datasets that include the data object may be invalidated. Further, in some examples, the storage system(s) (1120) may cache dataset results for each query by default, without an indication from a calling computing device. The example method of FIG. 15 also includes receiving (1504), at the storage system(s) (1120), another invocation of the query that specifies the one or more attributes of the data. Receiving (1504), at the storage system(s) (1120), another invocation of the query that specifies the one or more attributes of the data may be implemented similarly to receiving (1304), at the storage system(s) (1120), the query (1354) that specifies the one or more attributes of data as described with reference to FIG. 13. The example method of FIG. 15 also includes, in response to determining that the one or more of the multiple data objects have not changed in a manner that would cause them to no longer match the one or more attributes of data specified by the query, transmitting (1506) the cached dataset (1356). Determining that the one or more of the multiple data objects have not changed may be implemented by the storage system(s) (1120) checking the above-described index of metadata to see if previously cached results are valid, where the storage system(s) (1120) may determine validity by tracking whether any modifications to any stored data objects affect any of the data objects from which any cached results have been previously generated. In this example, if the cached dataset (1356) is valid, then the storage system(s) (1120) may transmit (1506) the cached dataset (1356) instead of generating the dataset by performing the query (1354).
Transmitting (1506) the cached dataset (1356) may be implemented as described above with reference to transmitting (1308) a dataset. In this way, if a same query is received, and the resulting dataset from a previous query is immutable or has not changed, then the storage system(s) (1120) may avoid re-computing the requested query by using previously cached results. For further explanation, FIG. 16 sets forth a flow chart illustrating an example method of accelerating workflows that depend upon large quantities of data stored on remote network locations. In some implementations, the below-described workflow acceleration may be implemented within the computing environment of an artificial intelligence and machine learning infrastructure, such as any of the example embodiments of an artificial intelligence and machine learning infrastructure 1100 described above with reference to FIGS. 4-15. In other embodiments, the disclosed workflow acceleration may be implemented within a computing environment that supports a computational workflow that relies on large quantities of data stored on remote network locations, where data stored at the remote network locations is accessible via a network protocol that operates serially at the operating system level or kernel level. However, in some examples, to overcome the serial limitation of using operating system level remote calls to access the remote network location, the workflow acceleration may implement parallel remote calls from user space of the computing environment, where the computing environment may be a stand-alone computer or compute node, or where the computing environment may be a networked computing environment that implements a client-server model.
In some examples, an artificial intelligence workflow may rely on datasets that are hundreds or thousands of gigabytes for training, where a given dataset may be stored within a file system, and where the dataset may be stored among multiple directories, and where each directory may include multiple files and one or more subdirectories. Further, in some examples, the directory structure may include, aside from physical limitations, an unlimited number of directory levels and/or an unlimited quantity of subdirectories. In some examples, a dataset may be stored across multiple different servers or multiple different storage systems across a distributed network. Consequently, in some training scenarios, the time to simply access training data may be a bottleneck in generating a trained model due to the massive scale of training datasets. As noted above, training data access time may be reduced, in some cases, by orders of magnitude, by parallelizing access to a remote network location storing the training data, where parallelization occurs at a user space level using a network protocol that operates serially if the network protocol is used at the operating system level. Further, in some implementations, it may be useful to train an artificial intelligence model on a subset of training data from among an entire dataset of training data. However, unless some kind of selectivity is applied to how the subset is generated, the subset may not be representative of the entire dataset. Generally, given an indication of the directory structure of the dataset, one or more randomization techniques may be used to select a representative subset of the entire dataset where the representative subset is populated with data selected from both a subset of files within different directories and/or from randomly selected ones of the different directories. 
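The randomized subset selection described above can be sketched as follows. The function name, the two sampling fractions, and the use of a local directory walk in place of remote calls are assumptions made for the example; the idea is only that the subset draws from randomly chosen directories and from a random fraction of the files within each, rather than from one corner of the tree:

```python
import os
import random


def representative_subset(root, fraction=0.1, dir_fraction=0.5, seed=None):
    """Select a subset intended to be representative of the whole dataset.

    Keeps each directory with probability dir_fraction, then samples a
    fraction of the files within each kept directory.
    """
    rng = random.Random(seed)
    subset = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if not filenames:
            continue
        # Randomly keep this directory with probability dir_fraction...
        if rng.random() > dir_fraction:
            continue
        # ...then sample a fraction of its files (at least one).
        k = max(1, int(len(filenames) * fraction))
        subset.extend(os.path.join(dirpath, f) for f in rng.sample(filenames, k))
    return subset
```

With `fraction=1.0` and `dir_fraction=1.0` the function degenerates to returning every file, which is a convenient way to sanity-check it.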
In some implementations, given a selected subset of training data, additional processing of the subset of training data may be performed. For example, in some cases, a subset of training data may be shuffled to reduce bias in the artificial intelligence model in training. In other words, given multiple different types of data within the subset of training data, an artificial intelligence model may benefit from being trained on a heterogeneous selection of the subset of training data being consumed. In some examples, a trained artificial intelligence model may be evaluated against the entire dataset. In some implementations, a network protocol used to access data stored at a remote network location may be a network file system (NFS) protocol within a client-server model within a distributed network. In this example, remote procedure calls may be used on a client computing system to access stored data on a server, where the server may be a storage system such as the storage systems described above with reference to FIGS. 1A-3B. In some implementations, multiple different operations may be supported, where the operations may include accessing remotely stored data, and where such access may be parallelized in the user space of an operating system as described above. For example, a client computing system may support command line operations that are based on accessing file system information. For example, the client computing system may implement a UNIX, or UNIX-based, operating system, including supporting UNIX-based command line operations such as "ls", "du", "find", and other commands or operations that provide a response based at least in part on information describing characteristics of files and/or directories within a file system.
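The shuffling step described above, which mixes the different data types so the model does not train on long homogeneous runs, can be sketched in a few lines. The per-type dictionary layout and the function name are assumptions for the example:

```python
import random


def shuffled_heterogeneous(subset_by_type, seed=None):
    """Shuffle a training subset so consecutive samples mix data types.

    subset_by_type maps a type label (e.g. "image", "text") to its samples;
    the result is a single flat, randomly ordered list over all types.
    """
    rng = random.Random(seed)
    flat = [item for items in subset_by_type.values() for item in items]
    rng.shuffle(flat)
    return flat
```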
In some cases, obtaining such information describing characteristics of files and/or directories within a file system may include a directory walk, where one remote procedure call may provide information on a directory structure that then may serve as a basis for generating additional remote procedure calls to obtain additional information on other files or directories, and where the additional remote procedure calls may be placed in a queue from which parallel remote procedure calls may be issued to a server. In other examples, instead of command line operations or commands, an application or other computing process may issue remote procedure calls during the generation of a subset of training data from among a full dataset. For example, in specifying an artificial intelligence model, a user may use an application's graphical user interface to specify a size of a training data subset and/or specify a dataset. The computation system 1600 depicted in FIG. 16 may be implemented by different forms of hardware, either directly or indirectly. In some examples, the computation system 1600 may be implemented by hardware directly, such as by one or more graphical processing units (GPUs) 801 within an artificial intelligence and machine learning infrastructure 1600. In other examples, the computation system 1600 may be implemented by hardware indirectly, such as by one or more compute nodes within a virtual computing environment provided by a cloud services provider.
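The queue-based directory walk described above can be sketched as follows. This is a local-filesystem stand-in, not the patented implementation: `os.scandir` and `os.lstat` substitute for the remote procedure calls that would be issued over NFS, and the worker-pool structure is an assumption; the point is only that one listing call seeds further per-entry calls, which are queued and issued in parallel from user space:

```python
import os
from concurrent.futures import ThreadPoolExecutor


def parallel_walk(root, stat_fn=os.lstat, workers=8):
    """Walk a directory tree, issuing per-entry calls from a pool of workers.

    Each directory listing seeds further tasks, which are queued on the
    pool and executed concurrently rather than one at a time.
    """
    results = {}      # path -> stat result (dict writes are GIL-protected; fine for a sketch)
    futures = set()

    with ThreadPoolExecutor(max_workers=workers) as pool:

        def process(path):
            for entry in os.scandir(path):
                results[entry.path] = stat_fn(entry.path)
                if entry.is_dir(follow_symlinks=False):
                    # Queue a task for the subdirectory instead of recursing serially.
                    futures.add(pool.submit(process, entry.path))

        futures.add(pool.submit(process, root))
        while futures:
            futures.pop().result()   # wait; completed tasks may have queued more
    return results
```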
The example method depicted in FIG. 16 includes: receiving (1602), from a computing process of an artificial intelligence workflow, a request 1652 for information stored on a data repository; issuing (1604), from a user space of an operating system environment, parallel requests 1654 to the data repository using a network protocol that operates serially at the kernel level of the operating system environment; receiving (1606), from the data repository, one or more responses 1656 to the parallel requests 1654; and providing (1608), to the computing process of the artificial intelligence workflow and based on the one or more responses 1656 to the parallel requests 1654, a response 1658 to the request 1652 for information. Receiving (1602), from the computing process of an artificial intelligence workflow, a request 1652 for information stored on a data repository may be carried out as described above, where, for example, a request 1652 may be a command line command or a request 1652 may be issued by an application providing computing services to a user or other application. Issuing (1604), from a user space of an operating system environment, parallel requests 1654 to the data repository using a network protocol that operates serially at the kernel level of the operating system environment may be carried out as described above, where, for example, the network protocol may be NFS, or some other network-based protocol for remotely accessing data within a distributed network, and where the parallel requests 1654 may include remote procedure calls. In other examples, the parallel requests 1654 may be other types of messages that specify a request for data or metadata from another computing device reachable by the distributed network. Receiving (1606), from the data repository, one or more responses 1656 to the parallel requests 1654 may be carried out in accordance with one or more network protocols.
For example, a same network protocol used above for issuing (1604) parallel requests may be used for receiving (1606) responses to the parallel requests 1654, where in one example, the network protocol may be NFS. Providing (1608), to the computing process of the artificial intelligence workflow and based on the one or more responses to the parallel requests 1654, a response 1658 to the request 1652 for information may be carried out as described above. For example, the computation system 1600 may apply one or more techniques for generating a subset of training data, where the subset of training data may be provided as the response 1658 to the request 1652. For further explanation, FIG. 17 sets forth a flow chart illustrating an example method of accelerating workflows that depend upon large quantities of data stored on remote network locations. The example method depicted in FIG. 17 is similar to the example method depicted in FIG. 16, as the example method depicted in FIG. 17 also includes receiving (1602), from a computing process of an artificial intelligence workflow, a request 1652 for information stored on a data repository; issuing (1604), from a user space of an operating system environment, parallel requests 1654 to the data repository using a network protocol that operates serially at the kernel level of the operating system environment; receiving (1606), from the data repository, one or more responses 1656 to the parallel requests 1654; and providing (1608), to the computing process of the artificial intelligence workflow and based on the one or more responses 1656 to the parallel requests 1654, a response 1658 to the request 1652 for information.
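The four steps (1602)-(1608) can be condensed into a small sketch. The function name and the `fetch_one` callback are assumptions; `fetch_one` stands in for one remote call (for example, an NFS read issued by a user-space client rather than through the kernel's serial code path):

```python
from concurrent.futures import ThreadPoolExecutor


def handle_request(paths, fetch_one, workers=16):
    """Receive a request naming many files (1602), fan it out as parallel
    per-file requests from user space (1604), gather the responses (1606),
    and return a single combined response (1608)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        responses = list(pool.map(fetch_one, paths))
    return dict(zip(paths, responses))
```

Because the per-file calls run on a thread pool rather than one after another, the wall-clock cost approaches that of the slowest call instead of the sum of all calls.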
However, the example method depicted in FIG. 17 further includes: issuing (1702) one or more requests 1752 for file information and directory information to the data repository; selecting (1704), based on the file information and directory information received from the data repository, a subset 1756 of files; and further specifying that issuing (1604), from the user space of the operating system environment, parallel requests to the data repository using a network protocol that operates serially at the kernel level of the operating system environment includes generating (1706) the parallel requests such that the parallel requests comprise respective remote procedure calls for respective files of the subset 1756 of files. Issuing (1702) one or more requests 1752 for file information and directory information to the data repository may be carried out as described above, where steps for performing a directory walk are provided. Selecting (1704), based on the file information and directory information received from the data repository, a subset 1756 of files may also be carried out as described above, where steps for performing a directory walk are provided, and where the subset 1756 of files may be based on one or more responses 1754 received from the data repository. Generating (1706) the parallel requests such that the parallel requests comprise respective remote procedure calls for respective files of the subset 1756 of files may be carried out as described above, where steps for performing a directory walk are provided. Readers will appreciate that although the steps described above are depicted as occurring within some order, no ordering of the steps is required unless stated otherwise. Furthermore, steps that are depicted in distinct figures may, in some embodiments, be combined. Example embodiments are described largely in the context of a fully functional computer system.
Readers of skill in the art will recognize, however, that the present disclosure also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure. It will be further understood from the foregoing description that modifications and changes may be made in various embodiments of the present disclosure without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present disclosure is limited only by the language of the following claims.
11861424

DETAILED DESCRIPTION OF THE DRAWINGS

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors.
A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features. Referring now to FIG. 1, a data center 100 in which disaggregated resources may cooperatively execute one or more workloads (e.g., applications on behalf of customers) includes multiple pods 110, 120, 130, 140, each of which includes one or more rows of racks. As described in more detail herein, each rack houses multiple sleds, which each may be embodied as a compute device, such as a server, that is primarily equipped with a particular type of resource (e.g., memory devices, data storage devices, accelerator devices, general purpose processors). In the illustrative embodiment, the sleds in each pod 110, 120, 130, 140 are connected to multiple pod switches (e.g., switches that route data communications to and from sleds within the pod). The pod switches, in turn, connect with spine switches 150 that switch communications among pods (e.g., the pods 110, 120, 130, 140) in the data center 100. In some embodiments, the sleds may be connected with a fabric using Intel Omni-Path technology.
As described in more detail herein, resources within sleds in the data center 100 may be allocated to a group (referred to herein as a "managed node") containing resources from one or more other sleds to be collectively utilized in the execution of a workload. The workload can execute as if the resources belonging to the managed node were located on the same sled. The resources in a managed node may even belong to sleds belonging to different racks, and even to different pods 110, 120, 130, 140. Some resources of a single sled may be allocated to one managed node while other resources of the same sled are allocated to a different managed node (e.g., one processor assigned to one managed node and another processor of the same sled assigned to a different managed node). By disaggregating resources to sleds comprised predominantly of a single type of resource (e.g., compute sleds comprising primarily compute resources, memory sleds containing primarily memory resources), and selectively allocating and deallocating the disaggregated resources to form a managed node assigned to execute a workload, the data center 100 provides more efficient resource usage over typical data centers comprised of hyperconverged servers containing compute, memory, storage, and perhaps additional resources. As such, the data center 100 may provide greater performance (e.g., throughput, operations per second, latency, etc.) than a typical data center that has the same number of resources. Referring now to FIG. 2, the pod 110, in the illustrative embodiment, includes a set of rows 200, 210, 220, 230 of racks 240. Each rack 240 may house multiple sleds (e.g., sixteen sleds) and provide power and data connections to the housed sleds, as described in more detail herein. In the illustrative embodiment, the racks in each row 200, 210, 220, 230 are connected to multiple pod switches 250, 260.
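The selective allocation of disaggregated resources into a managed node can be illustrated with a toy sketch. Everything here is an assumption introduced for the example: the sled records, their field names, and the greedy first-fit strategy; the patent does not prescribe a particular allocation algorithm. The point is only that a node draws each resource type from whichever sleds have spare capacity, and that leftover capacity on a sled remains available to other managed nodes:

```python
def form_managed_node(sleds, demand):
    """Compose a managed node by drawing resource units from sleds.

    sleds: list of {"id", "type", "free"} records, one per resource sled.
    demand: mapping of resource type -> units required by the workload.
    Returns a list of (sled id, resource type, units taken) allocations.
    """
    allocation = []
    for rtype, needed in demand.items():
        for sled in sleds:
            if needed == 0:
                break
            if sled["type"] != rtype or sled["free"] == 0:
                continue
            take = min(needed, sled["free"])
            sled["free"] -= take       # leftover stays available to other nodes
            allocation.append((sled["id"], rtype, take))
            needed -= take
        if needed:
            raise RuntimeError(f"insufficient {rtype}")
    return allocation
```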
The pod switch 250 includes a set of ports 252 to which the sleds of the racks of the pod 110 are connected and another set of ports 254 that connect the pod 110 to the spine switches 150 to provide connectivity to other pods in the data center 100. Similarly, the pod switch 260 includes a set of ports 262 to which the sleds of the racks of the pod 110 are connected and a set of ports 264 that connect the pod 110 to the spine switches 150. As such, the use of the pair of switches 250, 260 provides an amount of redundancy to the pod 110. For example, if either of the switches 250, 260 fails, the sleds in the pod 110 may still maintain data communication with the remainder of the data center 100 (e.g., sleds of other pods) through the other switch 250, 260. Furthermore, in the illustrative embodiment, the switches 150, 250, 260 may be embodied as dual-mode optical switches, capable of routing both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance link-layer protocol (e.g., Intel's Omni-Path Architecture, Infiniband) via optical signaling media of an optical fabric. It should be appreciated that each of the other pods 120, 130, 140 (as well as any additional pods of the data center 100) may be similarly structured as, and have components similar to, the pod 110 shown in and described in regard to FIG. 2 (e.g., each pod may have rows of racks housing multiple sleds as described above). Additionally, while two pod switches 250, 260 are shown, it should be understood that, in other embodiments, each pod 110, 120, 130, 140 may be connected to a different number of pod switches (e.g., providing even more failover capacity). Referring now to FIGS. 3-5, each illustrative rack 240 of the data center 100 includes two elongated support posts 302, 304, which are arranged vertically. For example, the elongated support posts 302, 304 may extend upwardly from a floor of the data center 100 when deployed.
The rack 240 also includes one or more horizontal pairs 310 of elongated support arms 312 (identified in FIG. 3 via a dashed ellipse) configured to support a sled of the data center 100 as discussed below. One elongated support arm 312 of the pair of elongated support arms 312 extends outwardly from the elongated support post 302 and the other elongated support arm 312 extends outwardly from the elongated support post 304. In the illustrative embodiments, each sled of the data center 100 is embodied as a chassis-less sled. That is, each sled has a chassis-less circuit board substrate on which physical resources (e.g., processors, memory, accelerators, storage, etc.) are mounted as discussed in more detail below. As such, the rack 240 is configured to receive the chassis-less sleds. For example, each pair 310 of elongated support arms 312 defines a sled slot 320 of the rack 240, which is configured to receive a corresponding chassis-less sled. To do so, each illustrative elongated support arm 312 includes a circuit board guide 330 configured to receive the chassis-less circuit board substrate of the sled. Each circuit board guide 330 is secured to, or otherwise mounted to, a top side 332 of the corresponding elongated support arm 312. For example, in the illustrative embodiment, each circuit board guide 330 is mounted at a distal end of the corresponding elongated support arm 312 relative to the corresponding elongated support post 302, 304. For clarity of the Figures, not every circuit board guide 330 may be referenced in each Figure. Each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 configured to receive the chassis-less circuit board substrate of a sled 400 when the sled 400 is received in the corresponding sled slot 320 of the rack 240. To do so, as shown in FIG. 4, a user (or robot) aligns the chassis-less circuit board substrate of an illustrative chassis-less sled 400 to a sled slot 320.
The user, or robot, may then slide the chassis-less circuit board substrate forward into the sled slot 320 such that each side edge 414 of the chassis-less circuit board substrate is received in a corresponding circuit board slot 380 of the circuit board guides 330 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320 as shown in FIG. 4. By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of the others and at its own optimized refresh rate. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 240, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. As such, in some embodiments, the data center 100 may operate (e.g., execute workloads, undergo maintenance and/or upgrades, etc.) without human involvement on the data center floor. In other embodiments, a human may facilitate one or more maintenance or upgrade operations in the data center 100. It should be appreciated that each circuit board guide 330 is dual sided. That is, each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 on each side of the circuit board guide 330. In this way, each circuit board guide 330 can support a chassis-less circuit board substrate on either side. As such, a single additional elongated support post may be added to the rack 240 to turn the rack 240 into a two-rack solution that can hold twice as many sled slots 320 as shown in FIG. 3. The illustrative rack 240 includes seven pairs 310 of elongated support arms 312 that define a corresponding seven sled slots 320, each configured to receive and support a corresponding sled 400 as discussed above. Of course, in other embodiments, the rack 240 may include additional or fewer pairs 310 of elongated support arms 312 (i.e., additional or fewer sled slots 320).
It should be appreciated that because the sled 400 is chassis-less, the sled 400 may have an overall height that is different than typical servers. As such, in some embodiments, the height of each sled slot 320 may be shorter than the height of a typical server (e.g., shorter than a single rack unit, "1U"). That is, the vertical distance between each pair 310 of elongated support arms 312 may be less than a standard rack unit "1U." Additionally, due to the relative decrease in height of the sled slots 320, the overall height of the rack 240 in some embodiments may be shorter than the height of traditional rack enclosures. For example, in some embodiments, each of the elongated support posts 302, 304 may have a length of six feet or less. Again, in other embodiments, the rack 240 may have different dimensions. Further, it should be appreciated that the rack 240 does not include any walls, enclosures, or the like. Rather, the rack 240 is an enclosure-less rack that is opened to the local environment. Of course, in some cases, an end plate may be attached to one of the elongated support posts 302, 304 in those situations in which the rack 240 forms an end-of-row rack in the data center 100. In some embodiments, various interconnects may be routed upwardly or downwardly through the elongated support posts 302, 304. To facilitate such routing, each elongated support post 302, 304 includes an inner wall that defines an inner chamber in which the interconnect may be located. The interconnects routed through the elongated support posts 302, 304 may be embodied as any type of interconnects including, but not limited to, data or communication interconnects to provide communication connections to each sled slot 320, power interconnects to provide power to each sled slot 320, and/or other types of interconnects. The rack 240, in the illustrative embodiment, includes a support platform on which a corresponding optical data connector (not shown) is mounted.
Each optical data connector is associated with a corresponding sled slot 320 and is configured to mate with an optical data connector of a corresponding sled 400 when the sled 400 is received in the corresponding sled slot 320. In some embodiments, optical connections between components (e.g., sleds, racks, and switches) in the data center 100 are made with a blind mate optical connection. For example, a door on each cable may prevent dust from contaminating the fiber inside the cable. In the process of connecting to a blind mate optical connector mechanism, the door is pushed open when the end of the cable enters the connector mechanism. Subsequently, the optical fiber inside the cable enters a gel within the connector mechanism and the optical fiber of one cable comes into contact with the optical fiber of another cable within the gel inside the connector mechanism. The illustrative rack 240 also includes a fan array 370 coupled to the cross-support arms of the rack 240. The fan array 370 includes one or more rows of cooling fans 372, which are aligned in a horizontal line between the elongated support posts 302, 304. In the illustrative embodiment, the fan array 370 includes a row of cooling fans 372 for each sled slot 320 of the rack 240. As discussed above, each sled 400 does not include any on-board cooling system in the illustrative embodiment and, as such, the fan array 370 provides cooling for each sled 400 received in the rack 240. Each rack 240, in the illustrative embodiment, also includes a power supply associated with each sled slot 320. Each power supply is secured to one of the elongated support arms 312 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320. For example, the rack 240 may include a power supply coupled or secured to each elongated support arm 312 extending from the elongated support post 302.
Each power supply includes a power connector configured to mate with a power connector of the sled 400 when the sled 400 is received in the corresponding sled slot 320. In the illustrative embodiment, the sled 400 does not include any on-board power supply and, as such, the power supplies provided in the rack 240 supply power to corresponding sleds 400 when mounted to the rack 240. Referring now to FIG. 6, the sled 400, in the illustrative embodiment, is configured to be mounted in a corresponding rack 240 of the data center 100 as discussed above. In some embodiments, each sled 400 may be optimized or otherwise configured for performing particular tasks, such as compute tasks, acceleration tasks, data storage tasks, etc. For example, the sled 400 may be embodied as a compute sled 800 as discussed below in regard to FIGS. 8-9, an accelerator sled 1000 as discussed below in regard to FIGS. 10-11, a storage sled 1200 as discussed below in regard to FIGS. 12-13, or as a sled optimized or otherwise configured to perform other specialized tasks, such as a memory sled 1400, discussed below in regard to FIG. 14. As discussed above, the illustrative sled 400 includes a chassis-less circuit board substrate 602, which supports various physical resources (e.g., electrical components) mounted thereon. It should be appreciated that the circuit board substrate 602 is "chassis-less" in that the sled 400 does not include a housing or enclosure. Rather, the chassis-less circuit board substrate 602 is open to the local environment. The chassis-less circuit board substrate 602 may be formed from any material capable of supporting the various electrical components mounted thereon. For example, in an illustrative embodiment, the chassis-less circuit board substrate 602 is formed from an FR-4 glass-reinforced epoxy laminate material. Of course, other materials may be used to form the chassis-less circuit board substrate 602 in other embodiments.
As discussed in more detail below, the chassis-less circuit board substrate 602 includes multiple features that improve the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602. As discussed, the chassis-less circuit board substrate 602 does not include a housing or enclosure, which may improve the airflow over the electrical components of the sled 400 by reducing those structures that may inhibit air flow. For example, because the chassis-less circuit board substrate 602 is not positioned in an individual housing or enclosure, there is no backplane (e.g., a backplate of the chassis) to the chassis-less circuit board substrate 602, which could inhibit air flow across the electrical components. Additionally, the chassis-less circuit board substrate 602 has a geometric shape configured to reduce the length of the airflow path across the electrical components mounted to the chassis-less circuit board substrate 602. For example, the illustrative chassis-less circuit board substrate 602 has a width 604 that is greater than a depth 606 of the chassis-less circuit board substrate 602. In one particular embodiment, for example, the chassis-less circuit board substrate 602 has a width of about 21 inches and a depth of about 9 inches, compared to a typical server that has a width of about 17 inches and a depth of about 39 inches. As such, an airflow path 608 that extends from a front edge 610 of the chassis-less circuit board substrate 602 toward a rear edge 612 has a shorter distance relative to typical servers, which may improve the thermal cooling characteristics of the sled 400. Furthermore, although not illustrated in FIG. 6, the various physical resources mounted to the chassis-less circuit board substrate 602 are mounted in corresponding locations such that no two substantively heat-producing electrical components shadow each other as discussed in more detail below.
That is, no two electrical components, which produce appreciable heat during operation (i.e., greater than a nominal heat sufficient to adversely impact the cooling of another electrical component), are mounted to the chassis-less circuit board substrate602linearly in-line with each other along the direction of the airflow path608(i.e., along a direction extending from the front edge610toward the rear edge612of the chassis-less circuit board substrate602). As discussed above, the illustrative sled400includes one or more physical resources620mounted to a top side650of the chassis-less circuit board substrate602. Although two physical resources620are shown inFIG.6, it should be appreciated that the sled400may include one, two, or more physical resources620in other embodiments. The physical resources620may be embodied as any type of processor, controller, or other compute circuit capable of performing various tasks such as compute functions and/or controlling the functions of the sled400depending on, for example, the type or intended functionality of the sled400. For example, as discussed in more detail below, the physical resources620may be embodied as high-performance processors in embodiments in which the sled400is embodied as a compute sled, as accelerator co-processors or circuits in embodiments in which the sled400is embodied as an accelerator sled, storage controllers in embodiments in which the sled400is embodied as a storage sled, or a set of memory devices in embodiments in which the sled400is embodied as a memory sled. The sled400also includes one or more additional physical resources630mounted to the top side650of the chassis-less circuit board substrate602. In the illustrative embodiment, the additional physical resources include a network interface controller (NIC) as discussed in more detail below.
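The component-placement rule described above amounts to a simple geometric constraint that can be checked programmatically. The sketch below is illustrative only; the component names, positions, and dimensions are assumptions, not taken from the specification. It treats each component as occupying a lateral lane across the board width and verifies that no two heat-producing components share a lane along the airflow path:

```python
# Hypothetical check of the "no shadowing" rule: along the airflow path
# (front edge toward rear edge), no two heat-producing components may
# occupy overlapping lateral lanes across the board width.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    lateral_pos: float    # position across the board width, in inches (assumed)
    width: float          # lateral extent of the component, in inches (assumed)
    heat_producing: bool  # True if it produces more than nominal heat

def lanes_overlap(a: Component, b: Component) -> bool:
    """Two components shadow each other if their lateral extents overlap."""
    a_lo, a_hi = a.lateral_pos, a.lateral_pos + a.width
    b_lo, b_hi = b.lateral_pos, b.lateral_pos + b.width
    return a_lo < b_hi and b_lo < a_hi

def placement_ok(components: list[Component]) -> bool:
    """Return True if no two heat-producing components shadow each other."""
    hot = [c for c in components if c.heat_producing]
    return not any(
        lanes_overlap(hot[i], hot[j])
        for i in range(len(hot))
        for j in range(i + 1, len(hot))
    )

layout = [
    Component("processor-0", 2.0, 4.0, True),
    Component("processor-1", 9.0, 4.0, True),        # laterally offset: no shadowing
    Component("optical-connector", 2.5, 1.0, False), # in-line, but only nominal heat
]
print(placement_ok(layout))  # True
```

Note that a non-heat-producing component (like the optical data connector discussed later) may sit in-line with a hot component without violating the rule, which is why the check filters on `heat_producing`.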
Of course, depending on the type and functionality of the sled400, the physical resources630may include additional or other electrical components, circuits, and/or devices in other embodiments. The physical resources620are communicatively coupled to the physical resources630via an input/output (I/O) subsystem622. The I/O subsystem622may be embodied as circuitry and/or components to facilitate input/output operations with the physical resources620, the physical resources630, and/or other components of the sled400. For example, the I/O subsystem622may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In the illustrative embodiment, the I/O subsystem622is embodied as, or otherwise includes, a double data rate 4 (DDR4) data bus or a DDR5 data bus. In some embodiments, the sled400may also include a resource-to-resource interconnect624. The resource-to-resource interconnect624may be embodied as any type of communication interconnect capable of facilitating resource-to-resource communications. In the illustrative embodiment, the resource-to-resource interconnect624is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem622). For example, the resource-to-resource interconnect624may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to resource-to-resource communications. The sled400also includes a power connector640configured to mate with a corresponding power connector of the rack240when the sled400is mounted in the corresponding rack240. 
The sled400receives power from a power supply of the rack240via the power connector640to supply power to the various electrical components of the sled400. That is, the sled400does not include any local power supply (i.e., an on-board power supply) to provide power to the electrical components of the sled400. The exclusion of a local or on-board power supply facilitates the reduction in the overall footprint of the chassis-less circuit board substrate602, which may improve the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate602as discussed above. In some embodiments, power is provided to the processors820through vias directly under the processors820(e.g., through the bottom side750of the chassis-less circuit board substrate602), providing an increased thermal budget, additional current and/or voltage, and better voltage control over typical boards. In some embodiments, the sled400may also include mounting features642configured to mate with a mounting arm, or other structure, of a robot to facilitate the placement of the sled400in a rack240by the robot. The mounting features642may be embodied as any type of physical structures that allow the robot to grasp the sled400without damaging the chassis-less circuit board substrate602or the electrical components mounted thereto. For example, in some embodiments, the mounting features642may be embodied as non-conductive pads attached to the chassis-less circuit board substrate602. In other embodiments, the mounting features642may be embodied as brackets, braces, or other similar structures attached to the chassis-less circuit board substrate602. The particular number, shape, size, and/or make-up of the mounting features642may depend on the design of the robot configured to manage the sled400.
Referring now toFIG.7, in addition to the physical resources630mounted on the top side650of the chassis-less circuit board substrate602, the sled400also includes one or more memory devices720mounted to a bottom side750of the chassis-less circuit board substrate602. That is, the chassis-less circuit board substrate602is embodied as a double-sided circuit board. The physical resources620are communicatively coupled to the memory devices720via the I/O subsystem622. For example, the physical resources620and the memory devices720may be communicatively coupled by one or more vias extending through the chassis-less circuit board substrate602. Each physical resource620may be communicatively coupled to a different set of one or more memory devices720in some embodiments. Alternatively, in other embodiments, each physical resource620may be communicatively coupled to each memory device720. The memory devices720may be embodied as any type of memory device capable of storing data for the physical resources620during operation of the sled400, such as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org).
Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces. In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include next-generation nonvolatile devices, such as Intel 3D XPoint™ memory or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) that incorporates memristor technology, resistive memory including metal-oxide-based, oxygen-vacancy-based, and conductive-bridge random access memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product. In some embodiments, the memory device may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. Referring now toFIG.8, in some embodiments, the sled400may be embodied as a compute sled800. The compute sled800is optimized, or otherwise configured, to perform compute tasks.
Of course, as discussed above, the compute sled800may rely on other sleds, such as acceleration sleds and/or storage sleds, to perform such compute tasks. The compute sled800includes various physical resources (e.g., electrical components) similar to the physical resources of the sled400, which have been identified inFIG.8using the same reference numbers. The description of such components provided above in regard toFIGS.6and7applies to the corresponding components of the compute sled800and is not repeated herein for clarity of the description of the compute sled800. In the illustrative compute sled800, the physical resources620are embodied as processors820. Although only two processors820are shown inFIG.8, it should be appreciated that the compute sled800may include additional processors820in other embodiments. Illustratively, the processors820are embodied as high-performance processors820and may be configured to operate at a relatively high power rating. Although the processors820generate additional heat operating at power ratings greater than typical processors (which operate at around 155-230 W), the enhanced thermal cooling characteristics of the chassis-less circuit board substrate602discussed above facilitate the higher power operation. For example, in the illustrative embodiment, the processors820are configured to operate at a power rating of at least 250 W. In some embodiments, the processors820may be configured to operate at a power rating of at least 350 W. In some embodiments, the compute sled800may also include a processor-to-processor interconnect842. Similar to the resource-to-resource interconnect624of the sled400discussed above, the processor-to-processor interconnect842may be embodied as any type of communication interconnect capable of facilitating processor-to-processor communications.
In the illustrative embodiment, the processor-to-processor interconnect842is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem622). For example, the processor-to-processor interconnect842may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. The compute sled800also includes a communication circuit830. The illustrative communication circuit830includes a network interface controller (NIC)832, which may also be referred to as a host fabric interface (HFI). The NIC832may be embodied as, or otherwise include, any type of integrated circuit, discrete circuits, controller chips, chipsets, add-in-boards, daughtercards, network interface cards, or other devices that may be used by the compute sled800to connect with another compute device (e.g., with other sleds400). In some embodiments, the NIC832may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC832may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC832. In such embodiments, the local processor of the NIC832may be capable of performing one or more of the functions of the processors820. Additionally or alternatively, in such embodiments, the local memory of the NIC832may be integrated into one or more components of the compute sled at the board level, socket level, chip level, and/or other levels. The communication circuit830is communicatively coupled to an optical data connector834. The optical data connector834is configured to mate with a corresponding optical data connector of the rack240when the compute sled800is mounted in the rack240.
Illustratively, the optical data connector834includes a plurality of optical fibers which lead from a mating surface of the optical data connector834to an optical transceiver836. The optical transceiver836is configured to convert incoming optical signals from the rack-side optical data connector to electrical signals and to convert electrical signals to outgoing optical signals to the rack-side optical data connector. Although shown as forming part of the optical data connector834in the illustrative embodiment, the optical transceiver836may form a portion of the communication circuit830in other embodiments. In some embodiments, the compute sled800may also include an expansion connector840. In such embodiments, the expansion connector840is configured to mate with a corresponding connector of an expansion chassis-less circuit board substrate to provide additional physical resources to the compute sled800. The additional physical resources may be used, for example, by the processors820during operation of the compute sled800. The expansion chassis-less circuit board substrate may be substantially similar to the chassis-less circuit board substrate602discussed above and may include various electrical components mounted thereto. The particular electrical components mounted to the expansion chassis-less circuit board substrate may depend on the intended functionality of the expansion chassis-less circuit board substrate. For example, the expansion chassis-less circuit board substrate may provide additional compute resources, memory resources, and/or storage resources. 
As such, the additional physical resources of the expansion chassis-less circuit board substrate may include, but are not limited to, processors, memory devices, storage devices, and/or accelerator circuits including, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits. Referring now toFIG.9, an illustrative embodiment of the compute sled800is shown. As shown, the processors820, communication circuit830, and optical data connector834are mounted to the top side650of the chassis-less circuit board substrate602. Any suitable attachment or mounting technology may be used to mount the physical resources of the compute sled800to the chassis-less circuit board substrate602. For example, the various physical resources may be mounted in corresponding sockets (e.g., a processor socket), holders, or brackets. In some cases, some of the electrical components may be directly mounted to the chassis-less circuit board substrate602via soldering or similar techniques. As discussed above, the individual processors820and communication circuit830are mounted to the top side650of the chassis-less circuit board substrate602such that no two heat-producing, electrical components shadow each other. In the illustrative embodiment, the processors820and communication circuit830are mounted in corresponding locations on the top side650of the chassis-less circuit board substrate602such that no two of those physical resources are linearly in-line with others along the direction of the airflow path608. It should be appreciated that, although the optical data connector834is in-line with the communication circuit830, the optical data connector834produces no or nominal heat during operation.
The memory devices720of the compute sled800are mounted to the bottom side750of the chassis-less circuit board substrate602as discussed above in regard to the sled400. Although mounted to the bottom side750, the memory devices720are communicatively coupled to the processors820located on the top side650via the I/O subsystem622. Because the chassis-less circuit board substrate602is embodied as a double-sided circuit board, the memory devices720and the processors820may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate602. Of course, each processor820may be communicatively coupled to a different set of one or more memory devices720in some embodiments. Alternatively, in other embodiments, each processor820may be communicatively coupled to each memory device720. In some embodiments, the memory devices720may be mounted to one or more memory mezzanines on the bottom side of the chassis-less circuit board substrate602and may interconnect with a corresponding processor820through a ball-grid array. Each of the processors820includes a heatsink850secured thereto. Due to the mounting of the memory devices720to the bottom side750of the chassis-less circuit board substrate602(as well as the vertical spacing of the sleds400in the corresponding rack240), the top side650of the chassis-less circuit board substrate602includes additional "free" area or space that facilitates the use of heatsinks850having a larger size relative to traditional heatsinks used in typical servers. Additionally, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate602, none of the processor heatsinks850include cooling fans attached thereto. That is, each of the heatsinks850is embodied as a fan-less heatsink. Referring now toFIG.10, in some embodiments, the sled400may be embodied as an accelerator sled1000.
The accelerator sled1000is optimized, or otherwise configured, to perform specialized compute tasks, such as machine learning, encryption, hashing, or other computation-intensive tasks. In some embodiments, for example, a compute sled800may offload tasks to the accelerator sled1000during operation. The accelerator sled1000includes various components similar to components of the sled400and/or compute sled800, which have been identified inFIG.10using the same reference numbers. The description of such components provided above in regard toFIGS.6,7, and8applies to the corresponding components of the accelerator sled1000and is not repeated herein for clarity of the description of the accelerator sled1000. In the illustrative accelerator sled1000, the physical resources620are embodied as accelerator circuits1020. Although only two accelerator circuits1020are shown inFIG.10, it should be appreciated that the accelerator sled1000may include additional accelerator circuits1020in other embodiments. For example, as shown inFIG.11, the accelerator sled1000may include four accelerator circuits1020in some embodiments. The accelerator circuits1020may be embodied as any type of processor, co-processor, compute circuit, or other device capable of performing compute or processing operations. For example, the accelerator circuits1020may be embodied as, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits. In some embodiments, the accelerator sled1000may also include an accelerator-to-accelerator interconnect1042. Similar to the resource-to-resource interconnect624of the sled400discussed above, the accelerator-to-accelerator interconnect1042may be embodied as any type of communication interconnect capable of facilitating accelerator-to-accelerator communications.
In the illustrative embodiment, the accelerator-to-accelerator interconnect1042is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem622). For example, the accelerator-to-accelerator interconnect1042may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. In some embodiments, the accelerator circuits1020may be daisy-chained with a primary accelerator circuit1020connected to the NIC832and memory720through the I/O subsystem622and a secondary accelerator circuit1020connected to the NIC832and memory720through a primary accelerator circuit1020. Referring now toFIG.11, an illustrative embodiment of the accelerator sled1000is shown. As discussed above, the accelerator circuits1020, communication circuit830, and optical data connector834are mounted to the top side650of the chassis-less circuit board substrate602. Again, the individual accelerator circuits1020and communication circuit830are mounted to the top side650of the chassis-less circuit board substrate602such that no two heat-producing, electrical components shadow each other as discussed above. The memory devices720of the accelerator sled1000are mounted to the bottom side750of the chassis-less circuit board substrate602as discussed above in regard to the sled400. Although mounted to the bottom side750, the memory devices720are communicatively coupled to the accelerator circuits1020located on the top side650via the I/O subsystem622(e.g., through vias). Further, each of the accelerator circuits1020may include a heatsink1070that is larger than a traditional heatsink used in a server.
As discussed above with reference to the heatsinks850, the heatsinks1070may be larger than traditional heatsinks because of the "free" area provided by the memory devices720being located on the bottom side750of the chassis-less circuit board substrate602rather than on the top side650. Referring now toFIG.12, in some embodiments, the sled400may be embodied as a storage sled1200. The storage sled1200is optimized, or otherwise configured, to store data in a data storage1250local to the storage sled1200. For example, during operation, a compute sled800or an accelerator sled1000may store and retrieve data from the data storage1250of the storage sled1200. The storage sled1200includes various components similar to components of the sled400and/or the compute sled800, which have been identified inFIG.12using the same reference numbers. The description of such components provided above in regard toFIGS.6,7, and8applies to the corresponding components of the storage sled1200and is not repeated herein for clarity of the description of the storage sled1200. In the illustrative storage sled1200, the physical resources620are embodied as storage controllers1220. Although only two storage controllers1220are shown inFIG.12, it should be appreciated that the storage sled1200may include additional storage controllers1220in other embodiments. The storage controllers1220may be embodied as any type of processor, controller, or control circuit capable of controlling the storage and retrieval of data into the data storage1250based on requests received via the communication circuit830. In the illustrative embodiment, the storage controllers1220are embodied as relatively low-power processors or controllers. For example, in some embodiments, the storage controllers1220may be configured to operate at a power rating of about 75 watts. In some embodiments, the storage sled1200may also include a controller-to-controller interconnect1242.
Similar to the resource-to-resource interconnect624of the sled400discussed above, the controller-to-controller interconnect1242may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect1242is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem622). For example, the controller-to-controller interconnect1242may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. Referring now toFIG.13, an illustrative embodiment of the storage sled1200is shown. In the illustrative embodiment, the data storage1250is embodied as, or otherwise includes, a storage cage1252configured to house one or more solid state drives (SSDs)1254. To do so, the storage cage1252includes a number of mounting slots1256, each of which is configured to receive a corresponding solid state drive1254. Each of the mounting slots1256includes a number of drive guides1258that cooperate to define an access opening1260of the corresponding mounting slot1256. The storage cage1252is secured to the chassis-less circuit board substrate602such that the access openings face away from (i.e., toward the front of) the chassis-less circuit board substrate602. As such, solid state drives1254are accessible while the storage sled1200is mounted in a corresponding rack240. For example, a solid state drive1254may be swapped out of a rack240(e.g., via a robot) while the storage sled1200remains mounted in the corresponding rack240. The storage cage1252illustratively includes sixteen mounting slots1256and is capable of mounting and storing sixteen solid state drives1254. Of course, the storage cage1252may be configured to store additional or fewer solid state drives1254in other embodiments.
Additionally, in the illustrative embodiment, the solid state drives1254are mounted vertically in the storage cage1252, but may be mounted in the storage cage1252in a different orientation in other embodiments. Each solid state drive1254may be embodied as any type of data storage device capable of storing long-term data. To do so, the solid state drives1254may include volatile and non-volatile memory devices discussed above. As shown inFIG.13, the storage controllers1220, the communication circuit830, and the optical data connector834are illustratively mounted to the top side650of the chassis-less circuit board substrate602. Again, as discussed above, any suitable attachment or mounting technology may be used to mount the electrical components of the storage sled1200to the chassis-less circuit board substrate602including, for example, sockets (e.g., a processor socket), holders, brackets, soldered connections, and/or other mounting or securing techniques. As discussed above, the individual storage controllers1220and the communication circuit830are mounted to the top side650of the chassis-less circuit board substrate602such that no two heat-producing, electrical components shadow each other. For example, the storage controllers1220and the communication circuit830are mounted in corresponding locations on the top side650of the chassis-less circuit board substrate602such that no two of those electrical components are linearly in-line with each other along the direction of the airflow path608. The memory devices720of the storage sled1200are mounted to the bottom side750of the chassis-less circuit board substrate602as discussed above in regard to the sled400. Although mounted to the bottom side750, the memory devices720are communicatively coupled to the storage controllers1220located on the top side650via the I/O subsystem622.
Again, because the chassis-less circuit board substrate602is embodied as a double-sided circuit board, the memory devices720and the storage controllers1220may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate602. Each of the storage controllers1220includes a heatsink1270secured thereto. As discussed above, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate602of the storage sled1200, none of the heatsinks1270include cooling fans attached thereto. That is, each of the heatsinks1270is embodied as a fan-less heatsink. Referring now toFIG.14, in some embodiments, the sled400may be embodied as a memory sled1400. The memory sled1400is optimized, or otherwise configured, to provide other sleds400(e.g., compute sleds800, accelerator sleds1000, etc.) with access to a pool of memory (e.g., in two or more sets1430,1432of memory devices720) local to the memory sled1400. For example, during operation, a compute sled800or an accelerator sled1000may remotely write to and/or read from one or more of the memory sets1430,1432of the memory sled1400using a logical address space that maps to physical addresses in the memory sets1430,1432. The memory sled1400includes various components similar to components of the sled400and/or the compute sled800, which have been identified inFIG.14using the same reference numbers. The description of such components provided above in regard toFIGS.6,7, and8applies to the corresponding components of the memory sled1400and is not repeated herein for clarity of the description of the memory sled1400. In the illustrative memory sled1400, the physical resources620are embodied as memory controllers1420. Although only two memory controllers1420are shown inFIG.14, it should be appreciated that the memory sled1400may include additional memory controllers1420in other embodiments.
The memory controllers1420may be embodied as any type of processor, controller, or control circuit capable of controlling the writing and reading of data into the memory sets1430,1432based on requests received via the communication circuit830. In the illustrative embodiment, each memory controller1420is connected to a corresponding memory set1430,1432to write to and read from memory devices720within the corresponding memory set1430,1432and enforce any permissions (e.g., read, write, etc.) associated with the sled400that has sent a request to the memory sled1400to perform a memory access operation (e.g., read or write). In some embodiments, the memory sled1400may also include a controller-to-controller interconnect1442. Similar to the resource-to-resource interconnect624of the sled400discussed above, the controller-to-controller interconnect1442may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect1442is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem622). For example, the controller-to-controller interconnect1442may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. As such, in some embodiments, a memory controller1420may access, through the controller-to-controller interconnect1442, memory that is within the memory set1432associated with another memory controller1420. In some embodiments, a scalable memory controller is made of multiple smaller memory controllers, referred to herein as "chiplets", on a memory sled (e.g., the memory sled1400). The chiplets may be interconnected (e.g., using EMIB (Embedded Multi-Die Interconnect Bridge)).
The combined chiplet memory controller may scale up to a relatively large number of memory controllers and I/O ports (e.g., up to 16 memory channels). In some embodiments, the memory controllers1420may implement a memory interleave (e.g., one memory address is mapped to the memory set1430, the next memory address is mapped to the memory set1432, and the third address is mapped to the memory set1430, etc.). The interleaving may be managed within the memory controllers1420, or from CPU sockets (e.g., of the compute sled800) across network links to the memory sets1430,1432, and may improve the latency associated with performing memory access operations as compared to accessing contiguous memory addresses from the same memory device. Further, in some embodiments, the memory sled1400may be connected to one or more other sleds400(e.g., in the same rack240or an adjacent rack240) through a waveguide, using the waveguide connector1480. In the illustrative embodiment, the waveguides are 64 millimeter waveguides that provide 16 Rx (i.e., receive) lanes and 16 Tx (i.e., transmit) lanes. Each lane, in the illustrative embodiment, is either 16 GHz or 32 GHz. In other embodiments, the frequencies may be different. Using a waveguide may provide high throughput access to the memory pool (e.g., the memory sets1430,1432) to another sled (e.g., a sled400in the same rack240or an adjacent rack240as the memory sled1400) without adding to the load on the optical data connector834. Referring now toFIG.15, a system for executing one or more workloads (e.g., applications) may be implemented in accordance with the data center100.
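The two-way interleave described above (consecutive addresses alternating between the memory sets1430and1432) can be sketched as a simple address-routing function. The 64-byte granularity and the set names are assumptions for illustration; the specification leaves the interleave unit unspecified:

```python
# Sketch of a two-way memory interleave: consecutive address lines alternate
# between the two memory sets, spreading sequential accesses across sets
# instead of hitting one memory device repeatedly.
MEMORY_SETS = ("memory_set_1430", "memory_set_1432")
LINE = 64  # assumed interleave granularity, in bytes

def route(address: int) -> tuple[str, int]:
    """Map a flat address to (memory set, local offset within that set)."""
    line = address // LINE                         # which interleave line
    set_name = MEMORY_SETS[line % len(MEMORY_SETS)]  # alternate between sets
    local_line = line // len(MEMORY_SETS)          # line index within one set
    return set_name, local_line * LINE + address % LINE

print(route(0))    # ('memory_set_1430', 0)
print(route(64))   # ('memory_set_1432', 0)
print(route(128))  # ('memory_set_1430', 64)
```

The design choice here mirrors the latency argument in the text: a streaming access pattern touches both sets in alternation, so the two controllers can service it in parallel.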
In the illustrative embodiment, the system1510includes an orchestrator server1520, which may be embodied as a managed node comprising a compute device (e.g., a compute sled800) executing management software (e.g., a cloud operating environment, such as OpenStack) that is communicatively coupled to multiple sleds400including a large number of compute sleds1530(e.g., each similar to the compute sled800), memory sleds1540(e.g., each similar to the memory sled1400), accelerator sleds1550(e.g., each similar to the accelerator sled1000), and storage sleds1560(e.g., each similar to the storage sled1200). One or more of the sleds1530,1540,1550,1560may be grouped into a managed node1570, such as by the orchestrator server1520, to collectively perform a workload (e.g., an application1532executed in a virtual machine or in a container). The managed node1570may be embodied as an assembly of physical resources620, such as processors820, memory resources720, accelerator circuits1020, or data storage1250, from the same or different sleds400. Further, the managed node may be established, defined, or “spun up” by the orchestrator server1520at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node. In the illustrative embodiment, the orchestrator server1520may selectively allocate and/or deallocate physical resources620from the sleds400and/or add or remove one or more sleds400from the managed node1570as a function of quality of service (QoS) targets (e.g., performance targets associated with a throughput, latency, instructions per second, etc.) associated with a service level agreement for the workload (e.g., the application1532). In doing so, the orchestrator server1520may receive telemetry data indicative of performance conditions (e.g., throughput, latency, instructions per second, etc.)
in each sled400of the managed node1570and compare the telemetry data to the quality of service targets to determine whether the quality of service targets are being satisfied. If so, the orchestrator server1520may additionally determine whether one or more physical resources may be deallocated from the managed node1570while still satisfying the QoS targets, thereby freeing up those physical resources for use in another managed node (e.g., to execute a different workload). Alternatively, if the QoS targets are not presently satisfied, the orchestrator server1520may determine to dynamically allocate additional physical resources to assist in the execution of the workload (e.g., the application1532) while the workload is executing. Additionally, in some embodiments, the orchestrator server1520may identify trends in the resource utilization of the workload (e.g., the application1532), such as by identifying phases of execution (e.g., time periods in which different operations, each having different resource utilization characteristics, are performed) of the workload (e.g., the application1532) and pre-emptively identifying available resources in the data center100and allocating them to the managed node1570(e.g., within a predefined time period of the associated phase beginning). In some embodiments, the orchestrator server1520may model performance based on various latencies and a distribution scheme to place workloads among compute sleds and other resources (e.g., accelerator sleds, memory sleds, storage sleds) in the data center100. For example, the orchestrator server1520may utilize a model that accounts for the performance of resources on the sleds400(e.g., FPGA performance, memory access latency, etc.) and the performance (e.g., congestion, latency, bandwidth) of the path through the network to the resource (e.g., FPGA).
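The QoS comparison described above, in which telemetry from each sled is checked against service-level targets before deciding whether to free or add resources, may be sketched as follows. This is an editorial illustration only; the field names and decision labels are assumptions, not the patent's interfaces.

```python
# Hedged sketch of the orchestrator's QoS decision: compare per-sled
# telemetry against the service level agreement's targets, then either
# look for resources to deallocate or allocate more.

def qos_satisfied(telemetry, targets):
    """True only if every sled meets the throughput and latency targets."""
    return all(
        t["throughput"] >= targets["min_throughput"]
        and t["latency"] <= targets["max_latency"]
        for t in telemetry
    )

def next_action(telemetry, targets):
    # Targets met: see whether resources can be freed for other nodes.
    # Targets missed: dynamically allocate additional physical resources.
    return "try_deallocate" if qos_satisfied(telemetry, targets) else "allocate_more"
```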
As such, the orchestrator server1520may determine which resource(s) should be used with which workloads based on the total latency associated with each potential resource available in the data center100(e.g., the latency associated with the performance of the resource itself in addition to the latency associated with the path through the network between the compute sled executing the workload and the sled400on which the resource is located). In some embodiments, the orchestrator server1520may generate a map of heat generation in the data center100using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds400and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center100. Additionally or alternatively, in some embodiments, the orchestrator server1520may organize received telemetry data into a hierarchical model that is indicative of a relationship between the managed nodes (e.g., a spatial relationship such as the physical locations of the resources of the managed nodes within the data center100and/or a functional relationship, such as groupings of the managed nodes by the customers the managed nodes provide services for, the types of functions typically performed by the managed nodes, managed nodes that typically share or exchange workloads among each other, etc.). Based on differences in the physical locations and resources in the managed nodes, a given workload may exhibit different resource utilizations (e.g., cause a different internal temperature, use a different percentage of processor or memory capacity) across the resources of different managed nodes. 
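The total-latency placement rule described at the start of this passage, in which a resource is chosen based on its own latency plus the latency of the network path to it, may be sketched as follows (an illustration only; the candidate tuples are hypothetical):

```python
# Hedged sketch: pick the resource minimizing (resource latency + path latency),
# mirroring the total-latency criterion described for the orchestrator server.

def pick_resource(candidates):
    """candidates: iterable of (name, resource_latency, path_latency) tuples."""
    return min(candidates, key=lambda c: c[1] + c[2])[0]

# A nearby but slower FPGA can lose to a faster FPGA behind a short path,
# and vice versa, because only the sum matters under this rule.
```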
The orchestrator server1520may determine the differences based on the telemetry data stored in the hierarchical model and factor the differences into a prediction of future resource utilization of a workload if the workload is reassigned from one managed node to another managed node, to accurately balance resource utilization in the data center100. To reduce the computational load on the orchestrator server1520and the data transfer load on the network, in some embodiments, the orchestrator server1520may send self-test information to the sleds400to enable each sled400to locally (e.g., on the sled400) determine whether telemetry data generated by the sled400satisfies one or more conditions (e.g., an available capacity that satisfies a predefined threshold, a temperature that satisfies a predefined threshold, etc.). Each sled400may then report back a simplified result (e.g., yes or no) to the orchestrator server1520, which the orchestrator server1520may utilize in determining the allocation of resources to managed nodes. Referring now toFIG.16, a system1610for providing efficient reprovisioning (e.g., of kernels) in an accelerator device may be implemented in accordance with the data center100described above with reference toFIG.1. In the illustrative embodiment, the system1610includes an orchestrator server1620communicatively coupled to multiple sleds including a compute sled1630and an accelerator sled1640. One or more of the sleds1630,1640may be grouped into a managed node, such as by the orchestrator server1620, to collectively perform a workload (e.g., an application1632). A managed node may be embodied as an assembly of resources, such as compute resources, memory resources, storage resources, or other resources, from the same or different sleds or racks. 
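The sled-local self-test scheme described earlier in this passage, in which each sled evaluates its own telemetry against conditions supplied by the orchestrator and reports back only a simplified yes/no result, may be sketched as follows (an editorial illustration; the condition names are assumptions):

```python
# Hedged sketch: a sled evaluates self-test conditions locally so that only
# a boolean crosses the network, reducing the orchestrator's computational
# load and the network's data transfer load.

def local_self_test(telemetry, conditions):
    """Return True ("yes") only if all supplied conditions hold on this sled."""
    return (
        telemetry["available_capacity"] >= conditions["min_capacity"]
        and telemetry["temperature"] <= conditions["max_temperature"]
    )
```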
Further, a managed node may be established, defined, or “spun up” by the orchestrator server1620at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node. The system1610may be located in a data center and provide storage and compute services (e.g., cloud services) to a client device1614that is in communication with the system1610through a network1612. The orchestrator server1620may support a cloud operating environment, such as OpenStack, and managed nodes established by the orchestrator server1620may execute one or more applications or processes (i.e., workloads), such as in virtual machines or containers, on behalf of a user of the client device1614. In the illustrative embodiment, the compute sled1630is similar to the sled205-4ofFIG.2, and, in operation, executes the application1632(e.g., a workload). The accelerator sled1640includes one or more accelerator devices1642coupled to a memory1644(e.g., random access memory (RAM)) which may temporarily store one or more bit streams1646and parameter data1648. Each bit stream1646may be embodied as any data that defines a kernel that is executable by the accelerator device(s)1642to perform one or more functions (e.g., portions of a workload). For example, each bit stream1646may be embodied as a set of instructions for performing a cryptographic function, an arithmetic function, a hashing function, and/or other functions performable by an accelerator device1642. The bit streams1646, in the illustrative embodiment, include a bit stream1650that defines one kernel (e.g., kernel A) and another bit stream1652that defines a different kernel (e.g., kernel B). Further, in the illustrative embodiment, kernel A and kernel B are to be executed in sequence, as successive portions of the same workload (e.g., the application1632). 
The parameter data1648may be embodied as any data (e.g., input data) usable by a kernel in the execution of an associated function. As described in more detail below, in operation, the accelerator sled1640may configure the accelerator device1642with one bit stream (e.g., the bit stream1650) to establish kernel A on the accelerator device1642, execute kernel A on input data in the parameter data1648, write an output data set resulting from the execution of the kernel A to the parameter data1648, reconfigure the accelerator device1642with the bit stream1652to establish kernel B, and use the output data previously written to the memory1644as input data to kernel B. By temporarily retaining the output of kernel A in memory and reusing it as input to kernel B, rather than sending, through the network1612, the output data to the compute sled1630, which then would send the output data back through the network to an accelerator device (e.g., the same accelerator device or a different accelerator device) to execute the subsequent kernel (e.g., kernel B), the accelerator sled1640may significantly reduce the overall latency incurred in accelerating a sequence of portions of a workload that have data dependence between them (e.g., using output data of one kernel as input data for the successive kernel). Referring now toFIG.17, the accelerator sled1640may be embodied as any type of compute device capable of performing the functions described herein, including configuring an accelerator device with a bit stream to establish a kernel, executing the kernel to produce output data, writing the output data to onboard memory (e.g., memory located on the accelerator sled1640), configuring the accelerator device with a second bit stream to establish a second kernel, and executing, with the output data in the memory as input data, the second kernel. 
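The reprovisioning flow just described, in which kernel A's output is retained in sled-local memory and reused as kernel B's input rather than round-tripping through the network, may be sketched as follows. This is an illustration only: the "bit streams" are stand-in Python callables, not FPGA configuration data.

```python
# Hedged sketch of the accelerator sled's kernel sequence: configure the
# device with a bit stream, execute the kernel, keep the output in local
# memory, reconfigure with the next bit stream, and feed the retained
# output in as the next kernel's input.

def run_kernel_sequence(bit_streams, parameter_data):
    memory = {"params": parameter_data}            # models the memory 1644
    for bit_stream in bit_streams:                 # e.g., kernel A, then kernel B
        kernel = bit_stream                        # "configuring" the slot
        memory["params"] = kernel(memory["params"])  # output stays on the sled
    return memory["params"]                        # only the final result leaves
```

For example, a decompress kernel followed by a decrypt kernel would chain through `memory["params"]` with no intermediate network transfer.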
As shown inFIG.17, the illustrative accelerator sled1640includes a compute engine1702, an input/output (I/O) subsystem1706, communication circuitry1708, and the one or more accelerator devices1642. Of course, in other embodiments, the accelerator sled1640may include other or additional components, such as those commonly found in a computer (e.g., display, peripheral devices, etc.). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. The compute engine1702may be embodied as any type of device or collection of devices capable of performing various compute functions described below. In some embodiments, the compute engine1702may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), or other integrated system or device. In the illustrative embodiment, the compute engine1702includes or is embodied as a processor1704and the memory1644. The processor1704may be embodied as any type of device or circuitry capable of performing the functions described herein. For example, the processor1704may be embodied as a microcontroller, a single or multi-core processor(s), or other processor or processing/controlling circuit. In some embodiments, the processor1704may be embodied as, include, or be coupled to an FPGA, an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. The memory1644may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. 
Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org). Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces. In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include future generation nonvolatile devices, such as a three dimensional crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices. 
In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including metal oxide-based, oxygen vacancy-based, and conductive bridge random access memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product. In some embodiments, 3D crosspoint memory (e.g., Intel 3D XPoint™ memory) may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In some embodiments, all or a portion of the memory1644may be integrated into the processor1704. In operation, the memory1644may store various software and data used during operation such as sequence data, bit stream data, parameter data, applications, programs, and libraries. The compute engine1702is communicatively coupled to other components of the accelerator sled1640via the I/O subsystem1706, which may be embodied as circuitry and/or components to facilitate input/output operations with the compute engine1702(e.g., with the processor1704and/or the memory1644) and other components of the accelerator sled1640.
For example, the I/O subsystem1706may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem1706may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor1704, the memory1644, and other components of the accelerator sled1640, into the compute engine1702. The communication circuitry1708may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network1612between the accelerator sled1640and another compute device (e.g., the compute sled1630, the orchestrator server1620). The communication circuitry1708may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication. The communication circuitry1708may include a network interface controller (NIC)1710, which may also be referred to as a host fabric interface (HFI). The NIC1710may be embodied as one or more add-in-boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the accelerator sled1640to connect with another compute device (e.g., the compute sled1630, the orchestrator server1620, etc.). In some embodiments, the NIC1710may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC1710may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC1710. 
In such embodiments, the local processor of the NIC1710may be capable of performing one or more of the functions of the compute engine1702described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC1710may be integrated into one or more components of the accelerator sled1640at the board level, socket level, chip level, and/or other levels. The accelerator devices1642may include an FPGA1712. In the illustrative embodiment, the FPGA1712includes one or more slots1714, each of which may be embodied as a portion of the logic or circuitry (e.g., logic gates) present on the FPGA1712and which may be programmed with a bit stream to provide a kernel capable of accelerating a particular function. While one FPGA1712is shown, it should be appreciated that in other embodiments, multiple FPGAs may be included in the accelerator sled1640. Further, the accelerator sled1640may include one or more other accelerator devices1716, which may be embodied as any circuitry or devices (e.g., co-processor(s), graphics processing units (GPUs), etc.) capable of executing one or more functions faster than a general purpose processor. The accelerator sled1640may also include one or more data storage devices1718, which may be embodied as any type of devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Each data storage device1718may include a system partition that stores data and firmware code for the data storage device1718. Each data storage device1718may also include one or more operating system partitions that store data files and executables for operating systems.
The orchestrator server1620, the compute sled1630, and the client device1614may have components similar to those described inFIG.17, with the exception that, in some embodiments, orchestrator server1620, the compute sled1630, and/or the client device1614may not include the accelerator devices1642. The description of those components of the accelerator sled1640is equally applicable to the description of components of those devices and is not repeated herein for clarity of the description. Further, it should be appreciated that any of the accelerator sled1640, the compute sled1630, the orchestrator server1620, or the client device1614may include other components, sub-components, and devices commonly found in a computing device, which are not discussed above in reference to the accelerator sled1640and not discussed herein for clarity of the description. As described above, the orchestrator server1620, the sleds1630,1640, and the client device1614are illustratively in communication via the network1612, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof. Referring now toFIG.18, the accelerator sled1640may establish an environment1800during operation. The illustrative environment1800includes a network communicator1820and a kernel execution manager1830. Each of the components of the environment1800may be embodied as hardware, firmware, software, or a combination thereof. 
As such, in some embodiments, one or more of the components of the environment1800may be embodied as circuitry or a collection of electrical devices (e.g., network communicator circuitry1820, kernel execution manager circuitry1830, etc.). It should be appreciated that, in such embodiments, one or more of the network communicator circuitry1820or kernel execution manager circuitry1830may form a portion of one or more of the compute engine1702, accelerator devices1642, the I/O subsystem1706, the communication circuitry1708, and/or other components of the accelerator sled1640. In the illustrative embodiment, the environment1800includes sequence data1802, which may be embodied as any data indicative of a sequence in which kernels are to be executed to accelerate the performance of a workload (e.g., the application1632). The accelerator sled1640may receive the sequence data1802from a remote compute device (e.g., the compute sled1630) through the network1612(e.g., as part of a request to execute the kernels). Further, the illustrative embodiment includes bit stream data1804, which may be embodied as one or more bit streams (e.g., the bit streams1646). In the illustrative embodiment, the accelerator sled1640may also receive the bit stream data from a remote compute device (e.g., the compute sled1630) through the network1612. Additionally, the illustrative environment1800includes parameter data1806, which is similar to the parameter data1648described above with reference toFIG.16. In the illustrative environment1800, the network communicator1820, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the accelerator sled1640, respectively. 
To do so, the network communicator1820is configured to receive and process data packets from one system or computing device (e.g., the compute sled1630, the orchestrator server1620, etc.) and to prepare and send data packets to a computing device or system (e.g., the compute sled1630, the orchestrator server1620, etc.). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator1820may be performed by the communication circuitry1708, and, in the illustrative embodiment, by the NIC1710. The kernel execution manager1830, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is to configure an accelerator device of the accelerator sled1640with a bit stream associated with a kernel defined in a sequence of kernels (e.g., in the sequence data1802), execute the kernel to produce output data, store the output data in the memory (e.g., as the parameter data1806), configure the accelerator device with a second bit stream associated with a second kernel in the sequence, and execute the second kernel using the output data produced from the first kernel as input to the second kernel (e.g., by reading the parameter data1806from the memory). To do so, in the illustrative embodiment, the kernel execution manager1830includes a sequence controller1832, an accelerator device configurator1834, and a parameter manager1836. The sequence controller1832, in the illustrative embodiment, is configured to obtain the sequence data1802(e.g., from a remote compute device such as the compute sled1630) and determine, as a function of the sequence data1802and the present position in the sequence data1802at any given time, which kernel should be executed by the accelerator device (e.g., the FPGA1712).
In the illustrative embodiment, the accelerator device configurator1834is configured to read a bit stream from the bit stream data1804and configure the accelerator device (e.g., the FPGA1712) with the read bit stream, such as by programming logic gates in a slot (e.g., the slot1714) of the accelerator device (e.g., the FPGA1712), to establish the corresponding kernel. In the illustrative embodiment, the accelerator device configurator1834configures the accelerator device in response to a request to do so by the sequence controller1832, such as when one kernel in the sequence data1802has completed and the next kernel indicated in the sequence data1802is to be executed. The parameter manager1836, in the illustrative embodiment, is configured to provide input parameters (e.g., an input data set) to a kernel for execution thereon (e.g., compressing an input data set, encrypting the input data set, etc.) and to store (e.g., in the parameter data1806in the memory1644) output data produced by the kernel for use as input data to a subsequently executed kernel. It should be appreciated that each of the sequence controller1832, the accelerator device configurator1834, and the parameter manager1836may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, the sequence controller1832may be embodied as a hardware component, while the accelerator device configurator1834and the parameter manager1836are embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. Referring now toFIG.19, the accelerator sled1640, in operation, may execute a method1900for providing efficient reprovisioning of kernels. The method1900begins with block1902, in which the accelerator sled1640determines whether to execute a kernel.
In doing so, the accelerator sled1640may obtain a request to execute one or more kernels, as indicated in block1904. The accelerator sled1640may obtain a request to execute a batch of kernels in a predefined sequence (e.g., the sequence data1802), as indicated in block1906. For example, the predefined sequence may be to execute kernel A, followed by kernel B. In some embodiments, the predefined sequence may include one or more conditions under which one kernel (e.g., kernel B) is to be executed after a previous kernel (e.g., kernel A), and other conditions in which an alternative kernel (e.g., a kernel C) is to be executed after the previous kernel (e.g., kernel A), such as when a value in the output data satisfies a predefined threshold value. Regardless, as indicated in block1908, the accelerator sled may obtain the request from a remote compute device, such as the compute sled1630or the orchestrator server1620. As indicated in block1910, the accelerator sled1640may obtain one or more bit streams (e.g., the bit stream data1804) and parameter data (e.g., the parameter data1806) associated with the kernel(s) to be executed. For example, the request from the remote compute device (e.g., the compute sled1630or the orchestrator server1620) may include the bit streams and parameter data (e.g., input data). As indicated in block1912, in the illustrative embodiment, the accelerator sled1640writes the bit stream(s) and parameter data associated with the kernel(s) to the memory1644. Additionally, the accelerator sled1640may write the predefined sequence (e.g., from block1906) to the memory1644, as indicated in block1914. In later iterations of block1902, the accelerator sled1640may determine whether to execute a subsequent kernel in the predefined sequence (e.g., after kernel A has finished executing), as indicated in block1916. 
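The conditional sequencing described above, in which kernel B normally follows kernel A but an alternative kernel C runs instead when a value in the output data satisfies a predefined threshold, may be sketched as follows (an illustration only; the encoding and threshold are assumptions):

```python
# Hedged sketch of a predefined sequence with a conditional branch: after
# kernel A, the sequence selects kernel C when the output value meets a
# predefined threshold, and kernel B otherwise.

THRESHOLD = 100  # hypothetical predefined threshold value

def next_kernel(previous_kernel, output_value):
    """Select the subsequent kernel in the predefined sequence, or None at the end."""
    if previous_kernel == "A":
        return "C" if output_value >= THRESHOLD else "B"
    return None  # no further kernels in this sequence
```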
In block1918, the accelerator sled1640determines the subsequent course of action based on whether the accelerator sled1640has determined whether to execute a kernel (e.g., a kernel identified in the request from block1904or a subsequent kernel identified in the predefined sequence from block1906). In response to a determination to execute a kernel, the method1900advances to block1920ofFIG.20, in which the accelerator sled1640executes the kernel with an accelerator device1642(e.g., the FPGA1712) of the accelerator sled1640. Otherwise, the method1900loops back to block1902in which the accelerator sled1640again determines whether to execute a kernel. Referring now toFIG.20, in executing the kernel with the accelerator device1642, the accelerator sled1640, in the illustrative embodiment, reads the bit stream associated with the present kernel from the memory1644, as indicated in block1922. Additionally, in the illustrative embodiment, the accelerator sled1640configures the accelerator device1642with the bit stream (e.g., programs the logic gates) corresponding to the kernel to establish the kernel (e.g., enable the accelerator device1642to execute the function associated with the kernel), as indicated in block1924. In configuring the accelerator device1642, the accelerator sled1640may configure a slot of an FPGA (e.g., the slot1714of the FPGA1712) to establish the kernel, as indicated in block1926. In the illustrative embodiment, the accelerator sled1640executes, with the accelerator device1642, the kernel on input data (e.g., an input data set in the parameter data1806) present in the memory1644, as indicated in block1928. In doing so, in the illustrative embodiment, the accelerator sled1640executes the kernel on input data received in the request (e.g., the request from block1904), as indicated in block1930. 
In later iterations of the method1900, the accelerator device1642may execute the kernel using output data written to the memory1644by a previously-executed kernel, as input data (e.g., decrypting data that was decompressed by the previous kernel), as indicated in block1932. Afterwards, in block1934, the accelerator sled1640writes output data resulting from the execution of the kernel to the memory1644(e.g., writing a decompressed version of a data set that was decompressed by the present kernel). Further, as indicated in block1936, the accelerator sled1640may send the output data to a remote compute device (e.g., the compute sled1630). Afterwards, the method1900loops back to block1902ofFIG.19to again determine whether to execute a kernel.

EXAMPLES

Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below. Example 1 includes an accelerator sled comprising a memory; an accelerator device coupled to the memory, wherein the accelerator device is to (i) configure the accelerator device with a first bit stream to establish a first kernel; (ii) execute the first kernel to produce output data; (iii) write the output data to the memory; (iv) configure the accelerator device with a second bit stream to establish a second kernel; and (v) execute the second kernel with the output data in the memory used as input data to the second kernel. Example 2 includes the subject matter of Example 1, and wherein the accelerator device is further to obtain a request to execute a batch of kernels in a predefined sequence and wherein to configure the accelerator device with a first bit stream comprises to configure, in response to the request to execute a batch of kernels in a predefined sequence, the accelerator device with the first bit stream.
Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to configure the accelerator device with the second bit stream comprises to determine whether to execute a subsequent kernel in the predefined sequence; and configure, in response to a determination to execute a subsequent kernel in the predefined sequence, the accelerator device with the second bit stream. Example 4 includes the subject matter of any of Examples 1-3, and wherein the accelerator device is to write the predefined sequence to the memory. Example 5 includes the subject matter of any of Examples 1-4, and wherein the accelerator device is a field programmable gate array with a slot and wherein to configure the accelerator device with the first bit stream comprises to configure the slot with the first bit stream. Example 6 includes the subject matter of any of Examples 1-5, and wherein the accelerator device is further to receive a request to execute the first kernel, wherein the request includes input data, and wherein to execute the first kernel comprises to execute the first kernel on the input data included in the request. Example 7 includes the subject matter of any of Examples 1-6, and wherein the accelerator device is to write, to the memory, the input data received from the request. Example 8 includes the subject matter of any of Examples 1-7, and wherein the accelerator device is further to receive a request that includes the first bit stream and the second bit stream; and write the first bit stream and second bit stream to the memory. Example 9 includes the subject matter of any of Examples 1-8, and wherein to configure the accelerator device with the first bit stream comprises to read the first bit stream from the memory. 
Example 10 includes the subject matter of any of Examples 1-9, and wherein the output data is first output data, wherein to execute the second kernel comprises to produce second output data, and the accelerator device is further to send the second output data to a remote compute device. Example 11 includes the subject matter of any of Examples 1-10, and wherein the accelerator device is further to send the first output data to the remote compute device. Example 12 includes the subject matter of any of Examples 1-11, and wherein to send second output data to a remote compute device comprises to send the second output data to a compute sled. Example 13 includes a method comprising configuring, by an accelerator sled, an accelerator device of the accelerator sled with a first bit stream to establish a first kernel; executing, by the accelerator sled, the first kernel to produce output data; writing, by the accelerator sled, the output data to a memory of the accelerator sled; configuring, by the accelerator sled, the accelerator device with a second bit stream to establish a second kernel; and executing, by the accelerator sled, the second kernel with the output data in the memory used as input data to the second kernel. Example 14 includes the subject matter of Example 13, and further including obtaining, by the accelerator sled, a request to execute a batch of kernels in a predefined sequence and wherein configuring the accelerator device with a first bit stream comprises configuring, in response to the request to execute a batch of kernels in a predefined sequence, the accelerator device with the first bit stream. 
Example 15 includes the subject matter of any of Examples 13 and 14, and wherein configuring the accelerator device with the second bit stream comprises determining whether to execute a subsequent kernel in the predefined sequence; and configuring, in response to a determination to execute a subsequent kernel in the predefined sequence, the accelerator device with the second bit stream. Example 16 includes the subject matter of any of Examples 13-15, and further including writing, by the accelerator sled, the predefined sequence to the memory. Example 17 includes the subject matter of any of Examples 13-16, and wherein the accelerator device is a field programmable gate array with a slot and wherein configuring the accelerator device with the first bit stream comprises configuring the slot with the first bit stream. Example 18 includes the subject matter of any of Examples 13-17, and further including receiving, by the accelerator sled, a request to execute the first kernel, wherein the request includes input data, and wherein executing the first kernel comprises executing the first kernel on the input data included in the request. Example 19 includes the subject matter of any of Examples 13-18, and further including writing, by the accelerator sled and to the memory, the input data received from the request. Example 20 includes the subject matter of any of Examples 13-19, and further including receiving, by the accelerator sled, a request that includes the first bit stream and the second bit stream; and writing, by the accelerator sled, the first bit stream and second bit stream to the memory. Example 21 includes the subject matter of any of Examples 13-20, and wherein configuring the accelerator device with the first bit stream comprises reading the first bit stream from the memory. 
Example 22 includes the subject matter of any of Examples 13-21, and wherein the output data is first output data and executing the second kernel comprises producing second output data, the method further comprising sending, by the accelerator sled, the second output data to a remote compute device. Example 23 includes the subject matter of any of Examples 13-22, and further including sending, by the accelerator sled, the first output data to the remote compute device. Example 24 includes the subject matter of any of Examples 13-23, and wherein sending the second output data to a remote compute device comprises sending the second output data to a compute sled. Example 25 includes an accelerator sled comprising means for performing the method of any of Examples 13-24. Example 26 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause an accelerator sled to perform the method of any of Examples 13-24. Example 27 includes an accelerator sled comprising a compute engine to perform the method of any of Examples 13-24. Example 28 includes an accelerator sled comprising a memory; an accelerator device coupled to the memory; and kernel execution manager circuitry to (i) configure the accelerator device with a first bit stream to establish a first kernel; (ii) execute, with the accelerator device, the first kernel to produce output data; (iii) write the output data to the memory; (iv) configure the accelerator device with a second bit stream to establish a second kernel; and (v) execute, with the accelerator device, the second kernel with the output data in the memory used as input data to the second kernel. 
Example 29 includes the subject matter of Example 28, and further including network communicator circuitry to obtain a request to execute a batch of kernels in a predefined sequence; and wherein to configure the accelerator device with a first bit stream comprises to configure, in response to the request to execute a batch of kernels in a predefined sequence, the accelerator device with the first bit stream. Example 30 includes the subject matter of any of Examples 28 and 29, and wherein to configure the accelerator device with the second bit stream comprises to determine whether to execute a subsequent kernel in the predefined sequence; and configure, in response to a determination to execute a subsequent kernel in the predefined sequence, the accelerator device with the second bit stream. Example 31 includes the subject matter of any of Examples 28-30, and wherein the kernel execution manager circuitry is further to write the predefined sequence to the memory. Example 32 includes the subject matter of any of Examples 28-31, and wherein the accelerator device is a field programmable gate array with a slot and wherein to configure the accelerator device with the first bit stream comprises to configure the slot with the first bit stream. Example 33 includes the subject matter of any of Examples 28-32, and further including network communicator circuitry to receive a request to execute the first kernel, wherein the request includes input data, and wherein to execute the first kernel comprises to execute the first kernel on the input data included in the request. Example 34 includes the subject matter of any of Examples 28-33, and wherein the kernel execution manager circuitry is to write, to the memory, the input data received from the request. 
Example 35 includes the subject matter of any of Examples 28-34, and further including network communicator circuitry to receive a request that includes the first bit stream and the second bit stream; wherein the kernel execution manager circuitry is further to write the first bit stream and second bit stream to the memory. Example 36 includes the subject matter of any of Examples 28-35, and wherein to configure the accelerator device with the first bit stream comprises to read the first bit stream from the memory. Example 37 includes the subject matter of any of Examples 28-36, and wherein the output data is first output data, wherein to execute the second kernel comprises to produce second output data, and the kernel execution manager circuitry is further to send the second output data to a remote compute device. Example 38 includes the subject matter of any of Examples 28-37, and wherein the kernel execution manager circuitry is further to send the first output data to the remote compute device. Example 39 includes the subject matter of any of Examples 28-38, and wherein to send second output data to a remote compute device comprises to send the second output data to a compute sled. Example 40 includes an accelerator sled comprising circuitry for configuring, by an accelerator sled, an accelerator device of the accelerator sled with a first bit stream to establish a first kernel; circuitry for executing, by the accelerator sled, the first kernel to produce output data; circuitry for writing, by the accelerator sled, the output data to a memory of the accelerator sled; circuitry for configuring, by the accelerator sled, the accelerator device with a second bit stream to establish a second kernel; and means for executing, by the accelerator sled, the second kernel with the output data in the memory used as input data to the second kernel. 
Example 41 includes the subject matter of Example 40, and further including circuitry for obtaining a request to execute a batch of kernels in a predefined sequence and wherein the circuitry for configuring the accelerator device with a first bit stream comprises circuitry for configuring, in response to the request to execute a batch of kernels in a predefined sequence, the accelerator device with the first bit stream. Example 42 includes the subject matter of any of Examples 40 and 41, and wherein the circuitry for configuring the accelerator device with the second bit stream comprises circuitry for determining whether to execute a subsequent kernel in the predefined sequence; and circuitry for configuring, in response to a determination to execute a subsequent kernel in the predefined sequence, the accelerator device with the second bit stream. Example 43 includes the subject matter of any of Examples 40-42, and further including circuitry for writing the predefined sequence to the memory. Example 44 includes the subject matter of any of Examples 40-43, and wherein the accelerator device is a field programmable gate array with a slot and wherein the circuitry for configuring the accelerator device with the first bit stream comprises circuitry for configuring the slot with the first bit stream. Example 45 includes the subject matter of any of Examples 40-44, and further including circuitry for receiving a request to execute the first kernel, wherein the request includes input data, and wherein the circuitry for executing the first kernel comprises circuitry for executing the first kernel on the input data included in the request. Example 46 includes the subject matter of any of Examples 40-45, and further including circuitry for writing, to the memory, the input data received from the request. 
Example 47 includes the subject matter of any of Examples 40-46, and further including circuitry for receiving a request that includes the first bit stream and the second bit stream; and circuitry for writing the first bit stream and second bit stream to the memory. Example 48 includes the subject matter of any of Examples 40-47, and wherein the circuitry for configuring the accelerator device with the first bit stream comprises circuitry for reading the first bit stream from the memory. Example 49 includes the subject matter of any of Examples 40-48, and wherein the output data is first output data and the means for executing the second kernel comprises circuitry for producing second output data, the accelerator sled further comprising circuitry for sending the second output data to a remote compute device. Example 50 includes the subject matter of any of Examples 40-49, and further including circuitry for sending the first output data to the remote compute device. Example 51 includes the subject matter of any of Examples 40-50, and wherein the circuitry for sending the second output data to a remote compute device comprises circuitry for sending the second output data to a compute sled. | 99,002 |
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Event-driven architectures (EDAs) employ asynchronous communication. In EDAs, entities generating data are referred to as "publishers," while the recipients/consumers of such data are referred to as "subscribers." Communication in EDAs is referred to as asynchronous because a publisher need not wait for a response from any subscribers prior to generating/publishing additional data. By contrast, in a synchronous messaging architecture such as those provided by REST APIs, a sender waits for a response from a recipient prior to sending additional data. EDAs typically employ infrastructure called a "message broker" that receives messages (data) from publishers and delivers the messages to subscribers that have registered to receive such messages. Examples of message brokers include RabbitMQ, Apache Kafka, JBoss Messaging, Solace, etc. Accordingly, publishers may be data generating software/hardware that sends messages to the message broker. For example, a publisher may be a smart thermostat that sends temperature data to a message broker, a social media network that sends new subscriber data to a message broker, a smart refrigerator that sends data regarding food stored in the refrigerator to a message broker, etc. Publishers may be any type of application and/or embedded systems that generate and send data using an EDA. Subscribers may be applications that connect to the message broker, manifest an interest in a certain type of message (e.g., messages assigned to a particular "topic"), and maintain the connection with the message broker so that the message broker is able to push the messages to the subscriber. Messages are data that are sent by publishers to the message broker, and which are pushed to the relevant subscribers. The content of messages can be any data. Such messages are often described as events or commands.
Events communicate a fact (e.g., a temperature detected by the thermostat), while commands provide executable instructions to cause the subscriber application to take a particular action. Message brokers support communication through a number of different channels, referred to herein as “topics.” A topic may include a name, a version number, metadata describing the topic, etc. Publishers send messages that are organized into particular topics. Subscribers are able to subscribe to topics of interest in order to receive messages that are of interest to the subscriber while excluding messages of other topics which may not be of interest to the subscriber. Accordingly, subscribers may subscribe to a particular topic with the message broker. Additionally, publishers may publish message data that is organized into a topic. Upon receipt of such message data, the message broker may determine the topic, may determine the subscribers that are subscribed to the topic, and may send the message data to those subscribers. In EDAs, messages may be sent from publishers using a variety of different protocols. Examples of such protocols may include, but are not limited to, message queuing telemetry transport (MQTT), constrained application protocol (CoAP), advanced message queuing protocol (AMQP), hypertext transfer protocol (HTTP), etc. Accordingly, asynchronous application programming interfaces (APIs) (e.g., AsyncAPI) may be similar to synchronous APIs (e.g., OpenAPI), but may include different content/organization. For example, asynchronous APIs may include metadata indicating the protocol being used, metadata indicating one or more topic names, server data, schema data (describing a content and/or organization of the message data), etc. Schema data for a particular topic and/or message of an EDA can be used to develop applications and/or systems that can ingest and use data received from the particular message type with which the schema is associated. 
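The topic-based routing described above can be illustrated with a minimal in-memory broker (a sketch only; real brokers such as Kafka or RabbitMQ add durability, partitioning, and wire protocols such as MQTT or AMQP that this omits, and the class and method names here are illustrative, not part of the disclosure):

```python
from collections import defaultdict

class ToyMessageBroker:
    """Toy broker: pushes each published message to subscribers of its topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        # A subscriber registers interest in one topic and is pushed messages.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Asynchronous in spirit: the publisher does not wait for replies.
        for callback in self.subscribers[topic]:
            callback(message)

broker = ToyMessageBroker()
received = []
broker.subscribe("temperature", received.append)
broker.publish("temperature", {"celsius": 21.5})   # delivered to subscriber
broker.publish("humidity", {"percent": 40})        # no subscriber; not delivered
# received == [{"celsius": 21.5}]
```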
For example, schema data may be used to programmatically populate fields of an ingesting application and/or to label and/or use portions of the data received as part of the message. However, EDA topics are byte-oriented and do not keep track of the type of data that is written. Knowing the type of data being written on specific topics is needed by consumers of the topics to correctly interpret the information. However, such information is not provided by the EDA architecture (or by the message broker) and is typically tracked externally by the consumers (e.g., subscribers). Described herein are systems and techniques that may be used to automatically generate mappings between EDA topics and schemas that define the content and/or organization of data included in the messages of the topics. In various examples, a topic mapping component may query the message broker of an EDA for each topic handled by the message broker. Then, for each topic handled by the message broker, the topic mapping component may subscribe to the topic and may sample messages from each topic. The content of the messages may be parsed to identify schema data. The schema data may be identified in the header of the message (e.g., by reference to an external schema registry or other remote location) and/or the schema data may be present in the message payload. The topic mapping component may generate a database (or other data structure) that associates each topic (e.g., using a unique topic identifier) with the schema data defining the content and/or organization of messages for that topic. In some examples, there may be multiple message types being sent on a particular topic—each with its own respective schema. In such cases, the topic mapping component may map the topic identifier to multiple message-type identifiers, and may further map each message-type identifier to the appropriate schema. 
The mapping database may be updated over time to ensure that the relevant schema data is maintained for the topic/message. The logical mapping database may then be used by external client devices (such as user-facing software) to expose to the users a list of Kafka topics to write to and/or read from, together with the details of the schema data that such topics are using. In various examples, such topic/schema mappings may be generated for each message broker of an EDA. FIG. 1 is a block diagram of a system 100 comprising a topic mapping component 122 configured in communication with an event-driven architecture 124, according to various examples of the present disclosure. The topic mapping component 122 may be implemented using software, hardware, and/or some combination thereof. In the example topic mapping component 122 depicted in FIG. 1, the topic mapping component 122 may include one or more physical host(s), including physical host 110A. Physical host 110A may in turn include one or more physical processor(s) (e.g., CPU 112A) communicatively coupled to one or more memory device(s) (e.g., MDs 114A-B) and one or more input/output device(s) (e.g., I/O 116A). As used herein, physical processor or processors 112A refer to devices capable of executing instructions encoding arithmetic, logical, and/or I/O operations. In one illustrative example, a processor may follow the Von Neumann architectural model and may include an arithmetic logic unit (ALU), a control unit, and a plurality of registers. In an example, a processor may be a single core processor which is typically capable of executing one instruction at a time (or processing a single pipeline of instructions), or a multi-core processor which may simultaneously execute multiple instructions and/or threads.
In another example, a processor may be implemented as a single integrated circuit, two or more integrated circuits, or may be a component of a multi-chip module (e.g., in which individual microprocessor dies are included in a single integrated circuit package and hence share a single socket). A processor may also be referred to as a central processing unit ("CPU"). As discussed herein, memory devices 114A-B refer to volatile or non-volatile memory devices, such as RAM, ROM, EEPROM, or any other device capable of storing data. In an example, memory devices 114A may be persistent storage devices such as hard drive disks ("HDD"), solid state drives ("SSD"), and/or persistent memory (e.g., Non-Volatile Dual In-line Memory Module ("NVDIMM")). Memory devices 114A-B may additionally include replication of data to protect against data loss due to a failure in any one device. This replication may be implemented through, for example, a redundant array of independent disks ("RAID") setup. RAID arrays may be designed to increase performance, to provide live data backup, or a combination of both. As discussed herein, I/O device(s) 116A refer to devices capable of providing an interface between one or more processor pins and an external device, the operation of which is based on the processor inputting and/or outputting binary data. CPU(s) 112A may be interconnected using a variety of techniques, ranging from a point-to-point processor interconnect to a system area network, such as an Ethernet-based network. Local connections within physical host 110A, including the connections between processors 112A and memory devices 114A-B and between processors 112A and I/O device 116A, may be provided by one or more local buses of suitable architecture, for example, peripheral component interconnect (PCI). In an example, physical host 110A may run one or more isolated guests, for example, VM 155, which may in turn host additional virtual environments (e.g., VMs and/or containers).
In an example, a container (e.g., storage container 160, service containers 150A-B) may be an isolated guest using any form of operating system level virtualization, for example, Red Hat® OpenShift®, Docker® containers, chroot, Linux®-VServer, FreeBSD® Jails, HP-UX® Containers (SRP), VMware ThinApp®, etc. Storage container 160 and/or service containers 150A-B may run directly on a host operating system (e.g., host OS 118) or run within another layer of virtualization, for example, in a virtual machine (e.g., VM 155). In an example, containers that perform a unified function may be grouped together in a container cluster that may be deployed together (e.g., in a Kubernetes® pod). In an example, a given service may require the deployment of multiple VMs, containers and/or pods in multiple physical locations. In an example, VM 155 may be a VM executing on physical host 110A. Topic mapping component 122 may run one or more VMs (e.g., VM 155) by executing a software layer (e.g., hypervisor 120) above the hardware and below the VM 155, as schematically shown in FIG. 1. In an example, the hypervisor 120 may be a component of the respective host operating system 118 executed on physical host 110A, for example, implemented as a kernel based virtual machine function of host operating system 118. In another example, the hypervisor 120 may be provided by an application running on host operating system 118. In an example, hypervisor 120 may run directly on physical host 110A without an operating system beneath hypervisor 120. Hypervisor 120 may virtualize the physical layer, including processors, memory, and I/O devices, and present this virtualization to VM 155 as devices, including virtual central processing unit ("VCPU") 190A, virtual memory devices ("VMD") 192A, virtual input/output ("VI/O") device 194A, and/or guest memory 195A. In an example, another virtual guest (e.g., a VM or container) may execute directly on host OS 118 without an intervening layer of virtualization.
In an example, a VM 155 may be a virtual machine and may execute a guest operating system 196A which may utilize the underlying VCPU 190A, VMD 192A, and VI/O 194A. Processor virtualization may be implemented by the hypervisor 120 scheduling time slots on physical CPUs 112A such that, from the guest operating system's perspective, those time slots are scheduled on a virtual processor 190A. VM 155 may run any type of dependent, independent, compatible, and/or incompatible applications on the underlying hardware and host operating system 118. The hypervisor 120 may manage memory for the host operating system 118 as well as memory allocated to the VM 155 and guest operating system 196A, such as guest memory 195A provided to guest OS 196A. In an example, storage container 160 and/or service containers 150A-B are similarly implemented. In an example, in addition to distributed storage provided by storage container 160, storage controller 142 may additionally manage storage in dedicated storage nodes (e.g., NAS, SAN, etc.). In an example, storage controller 142 may deploy storage in large logical units with preconfigured performance characteristics (e.g., storage nodes 170A). In an example, access to a given storage node (e.g., storage node 170A) may be controlled on an account and/or tenant level. In an example, a service container (e.g., service containers 150A-B) may require persistent storage for application data, and may request persistent storage with a persistent storage claim to orchestrator 140. In the example, storage controller 142 may allocate storage to service containers 150A-B through a storage node (e.g., storage nodes 170A) in the form of a persistent storage volume. In an example, a persistent storage volume for service containers 150A-B may be allocated a portion of the storage capacity and throughput capacity of a given storage node (e.g., storage nodes 170A).
In various examples, the storage container 160 and/or service containers 150A-B may deploy compute resources (e.g., storage, cache, etc.) that are part of a compute service that is distributed across multiple clusters (not shown in FIG. 1). The various virtualized computing systems (e.g., service containers 150A, 150B, VM 155) may be examples of computing environments that may deploy one or more of the techniques described herein for programmatic generation of a topic/schema mapping 127. For example, service container 150A may request and/or receive a list of topics handled by message broker 126. Service container 150B may sample messages from each topic of the list of topics and/or may determine schema data for the different topics. VM 155 may receive the list of topics (e.g., topic identifier data) from service container 150A and the schema data from service container 150B and may populate the topic/schema mapping 127 in a database or other data structure. The foregoing example is merely one possible implementation of a topic mapping component 122. The actual deployment of the various services and/or systems of the topic mapping component 122 is an implementation-specific detail and may be modified as desired in accordance with the present disclosure. The topic mapping component 122 may be deployed across any number of physical computing devices and/or virtualized computing environments, depending on the desired implementation. Event-driven architecture 124 may comprise one or more publisher(s) 121. Publisher(s) 121 may generate message(s) 141 that may be sent to message broker 126. Although only a single message broker 126 is depicted in FIG. 1, multiple message brokers may be used in a given EDA. The techniques for topic/schema mapping are equally applicable to an EDA having multiple message brokers. Indeed, in such examples, a separate topic/schema mapping 127 may optionally be created for each message broker 126.
Message broker 126 (e.g., Apache Kafka) may receive the message(s) 141 and may determine the topics to which the message(s) 141 pertain. For each message 141, message broker 126 may determine the set of subscriber(s) 123 that have subscribed to the topic of the particular message and may send the message 141′ to the appropriate subscribers 123. As shown in FIG. 1, topic mapping component 122 may communicate with message broker 126 to receive a list of topics being handled by the message broker 126 and may subscribe to and parse messages from each topic to determine the schema data that is associated with each topic (and with each distinct message type of each topic, if there are more than one). As an example, the topic mapping component 122 may request a list of topics handled by the message broker 126. Additionally, the topic mapping component 122 may sample messages from the various topics in order to determine schema data of the messages (e.g., data indicating the content and/or organization of the messages). Below is an example of a topic with a message called "lightMeasured":

39  topics:
40    smartylighting/streetlights/1/0/event/{streetlightId}/lighting/measured:
41      description: The topic on which measured values may be produced and consumed.
42      parameters:
43        streetlightId:
44          $ref: '#/components/parameters/streetlightId'
45      subscribe:
46        summary: Receive information about environmental lighting conditions of a particular streetlight.
47        operationId: receiveLightMeasurement
48        traits:
49          - $ref: '#/components/operationTraits/kafka'
50        message:
51          $ref: '#/components/messages/lightMeasured'

For the topic described on line 40, a particular schema may describe the format of data for the messages "lightMeasured." $ref on line 51 is a reference to a different line of the text describing an internal schema of lightMeasured messages (e.g., in the payload of such messages).
An example of such a schema may be:

87  messages:
88    lightMeasured:
89      name: lightMeasured
90      title: Light measured
91      summary: Inform about environmental lighting conditions for a particular streetlight.
92      contentType: application/json
93      traits:
94        - $ref: '#/components/messageTraits/commonHeaders'
95      payload:
96        $ref: "#/components/schemas/lightMeasuredPayload"

The topic mapping component 122 includes logic to parse such schema data included in messages. The topic mapping component 122 may store the schema data in association with the relevant topic ID/message ID in the topic/schema mapping 127, as described in further detail below. The example schema above, in turn, includes references on lines 94 and 96 to different information about the message schema for "lightMeasured" messages. For example, a pointer to the payload schema for such messages is included in line 96. The payload schema may be internal to the message or may be provided at an external location that is pointed to by the $ref at line 96. An example of such a payload schema may be:

114  schemas:
115    lightMeasuredPayload:
116      type: object
117      properties:
118        lumens:
119          type: integer
120          minimum: 0
121          description: Light intensity measured in lumens.
122        sentAt:
123          $ref: "#/components/schemas/sentAt"

Accordingly, topic mapping component 122 may parse messages sampled from message broker 126 for each topic of interest in order to determine schema data describing the content and/or organization of the sampled messages. In some examples, the schema data may be included within the messages themselves. Such schema data may be referred to as internal schema data. In some other examples, the schema data may be referenced in the messages, but may not be present within the message payloads. For example, a message may include a pointer and/or URL to a different location at which the schema data is accessible. Such schema data may be referred to as external schema data.
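The $ref entries above are JSON-pointer-style references. Resolving an internal reference (one beginning with "#/") can be sketched as follows, assuming the AsyncAPI document has already been parsed into nested dictionaries; external references would additionally require fetching the remote document, which this sketch deliberately omits:

```python
def resolve_ref(document, ref):
    """Resolve an internal reference like '#/components/schemas/lightMeasuredPayload'."""
    if not ref.startswith("#/"):
        raise ValueError("external references require fetching a remote document")
    node = document
    # Walk the slash-separated path segments down the parsed document.
    for part in ref[2:].split("/"):
        node = node[part]
    return node

# Parsed fragment mirroring the payload schema listing above.
doc = {"components": {"schemas": {"lightMeasuredPayload": {
    "type": "object",
    "properties": {"lumens": {"type": "integer", "minimum": 0}},
}}}}
schema = resolve_ref(doc, "#/components/schemas/lightMeasuredPayload")
# schema["properties"]["lumens"]["type"] == "integer"
```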
In some further examples, the logic of the topic mapping component122may be configured to determine the schema data based on the organization of the content of the sampled message, even where no internal or external schema data is provided explicitly. Once populated, a user or other system may query topic/schema mapping127using a message ID or topic ID (e.g., identifier data identifying a message received from message broker126or a subscribed-to topic handled by message broker126). The message ID or topic ID may be used to retrieve the schema data that can be used to interpret the message payload. FIG.2is a block diagram of a system200illustrating a topic mapping component222generating a topic/schema mapping226using sampled messages, according to an example of the present disclosure. As depicted inFIG.2, after requesting and receiving a list of topics handled by a particular message broker, topic mapping component222may subscribe to and sample messages for each topic. For example, topic mapping component222may receive sampled messages201for topic 1. Sampled messages201may comprise message202, message206, etc. Topic mapping component222may determine schema data associated with each topic. For example, for topic 1, topic mapping component may determine schema data204from message202and schema data208from message206. If message202and message206are of the same type of message for topic 1, schema data204and schema data208may be the same. However, in various other examples, a topic may include different message types that are each associated with their own respective schemas. Accordingly, in some cases, schema data204and schema data208may be different. As previously described, external references to schema data may be found in the message headers. However, in other cases, the schema data (e.g., the schema definition) may be found in a message's payload. 
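The three possibilities discussed above (internal schema data, an external schema reference, or inference from the organization of the message content when neither is explicit) can be sketched as a single routine. The header and payload field names used here (`"schema"`, `"schemaUrl"`) are illustrative assumptions, not a fixed wire format:

```python
def determine_schema(message):
    """Return (kind, schema_or_ref) for a sampled message: internal schema
    data, an external reference, or a schema inferred from the payload."""
    headers = message.get("headers", {})
    payload = message.get("payload", {})
    if "schema" in payload:                # internal schema data in the payload
        return "internal", payload["schema"]
    if "schemaUrl" in headers:             # external schema data referenced by URL
        return "external", headers["schemaUrl"]
    # No explicit schema: infer one from the payload's organization.
    type_names = {bool: "boolean", int: "integer", float: "number", str: "string"}
    inferred = {k: type_names.get(type(v), "object") for k, v in payload.items()}
    return "inferred", {"type": "object", "properties": inferred}
```

For example, a sampled message with the payload `{"lumens": 7, "sentAt": "..."}` and no explicit schema would yield an inferred object schema with an integer `lumens` field and a string `sentAt` field.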
In further examples, an external reference to the schema definition (and/or a portion of the schema definition) may be found in either the message header or the message payload. Topic mapping component222may generate topic/schema mapping226. Topic/schema mapping226may include topic identifier data (e.g., a Topic ID). In various examples, the topic identifier data may be metadata included in the header of messages for that topic. In other examples, the topic identifier data may be assigned by the message broker and/or by the topic mapping component222. Further, in some examples, the topic/schema mapping226may comprise message identifier data (e.g., a Message ID). The message identifier data may be useful when a particular topic includes more than one message type. In the example depicted inFIG.2, Topic 1 (identified by Topic ID 1) comprises two message types identified by Message identifier data A and B. Each of these message types for Topic 1 is associated with its own schema data. For example, Message ID A is associated with schema data204, while Message ID B is associated with schema data208. In the example depicted inFIG.2, Topic ID 2 has only a single message type (Message ID C) and is associated with schema data210. Similarly, Topic ID 3 has only a single message type (Message ID D) and is associated with schema data212. In various examples, topic/schema mapping226may optionally store dependency/linkage information for the various topics. Dependency/linkage information may define dependencies for the particular Topic ID, Message ID, and/or Schema. For example, Message ID A of Topic ID 1 and Message ID B of Topic ID 1 may be linked. For example, some data of Message ID A may depend on and/or refer to some data of Message ID B. Accordingly, the dependency/linkage information notes the dependency for each of these Message IDs. In another example, Topic ID 3, Message ID D notes a linkage to Topic ID 5, Schema field27. 
This may indicate that data of Topic ID 3, Message ID D is linked to the specified field of another topic (i.e., Topic ID 5). For example, some portion of the data of Topic ID 3 may be used to populate a schema field of a different topic, etc. After topic/schema mapping226is generated, other computing devices and/or components may use topic/schema mapping226to determine schema data for EDA messages handled by the message broker for which the topic/schema mapping226was generated. For example, a subscriber device may query the topic/schema mapping226using a Topic ID and/or Message ID. The subscriber device may determine the Topic ID and/or Message ID from a header of a received message. The subscriber device may query the topic/schema mapping226to determine the pertinent schema data and/or dependency/linkage information associated with the message. The schema data and/or dependency/linkage information may be used in a variety of ways. For example, schema data may be used to develop an application that automatically ingests message data and/or to label and/or use portions of the data received as part of the message. FIG.3is a flowchart illustrating an example process for generating mappings between topics and message schemas for an event-driven architecture, according to an example of the present disclosure. Although the example process300is described with reference to the flowchart illustrated inFIG.3, it will be appreciated that many other methods of performing the acts associated with the process300may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, blocks may be repeated, and some of the blocks described may be optional. The process300may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both.
In some examples, the actions described in the blocks of the process300may represent a series of instructions comprising computer-readable machine code executable by one or more processing units of one or more computing devices. In various examples, the computer-readable machine codes may be comprised of instructions selected from a native instruction set of, and/or an operating system (or systems) of, the one or more computing devices. The example process300includes sending a topic discovery request to a message broker (block310). In various examples, topic mapping component122may send a topic discovery request to message broker126. In the example ofFIG.3, the topic discovery request may request information concerning topics registered with message broker126. For example, the topic discovery request may request a list of all topics (e.g., all topic IDs) handled by message broker126or a subset of topics handled by message broker126. In an example, the process300may include receiving a list of topics of the message broker (block315). In various examples, the message broker126may return a list of names (and/or identifiers) of topics in response to the topic discovery request. In some examples, after receiving the list of names of the topics, the topic mapping component122may subscribe to the different topics in order to be able to sample messages from each of the topics to generate a topic/schema mapping (e.g., topic/schema mapping226). In some examples, the topic mapping component122may send metadata discovery requests for metadata related to each topic of interest. The metadata may be used to populate various fields of a topic/schema mapping. For example, security configuration parameters, server information, protocol information, version numbers, etc., may be returned describing each topic. In an example, the process300may include determining first identifier data identifying a first topic of the list of topics (block320).
In some examples, the first identifier data may identify and/or distinguish the first topic from among other topics handled by message broker126. The first identifier data may be used to populate a field of the topic/schema mapping226that identifies the particular topic from among other topics handled by the message broker126. In an example, the process300may include receiving a first message pertaining to the first topic from the message broker (block325). For example, the topic mapping component122may subscribe to the first topic and may sample messages generated for the first topic and received from the message broker126. As described in further detail below, logic of the topic mapping component122may be configured to determine various payload data and/or metadata included in the sampled messages and may include such data in the topic/schema mapping. In an example, the process300may include determining first schema data using the first message pertaining to the first topic, where the first schema data may include data describing content of the first message and/or organization of the first message (block330). For example, a message may include internal schema data that describes formatting of the messages. Accordingly, the internal schema data may be used by topic mapping component122to populate a field of the topic/schema mapping (e.g., topic/schema mapping226) for the first message of the first topic. Such a procedure may be followed for each topic of interest and/or for each message type of each topic in order to programmatically generate a topic/schema mapping226for the message broker126of the EDA. The internal schema may describe the formatting of the message (e.g., the schema may describe the different fields and their location within the payload of the message) and/or may describe the content of the message (e.g., the schema may describe what kind of data is represented in each field of the payload of the message).
Additionally, the schema may similarly describe the content and organization of the message header. Further, although an internal schema is described herein, the first schema may instead be an external schema that is referenced by data within the first message. The external schema may be stored at a different location (rather than within the payload of the first message). Accordingly, the topic mapping component122may access the location referenced by the first message data in order to determine the first schema data for the first message. In still other examples, logic of the topic mapping component122may be used to parse the first message in order to determine the schema (e.g., based on a comparison of the first message to one or more known schemas). In an example, the process300may include storing the first identifier data in association with the first schema data in a first data structure (block335). As described herein, the topic mapping component122may generate the topic/schema mapping226which may be instantiated as a data structure. The topic/schema mapping226may associate topic identifier data and/or message type identifier data with the appropriate schema data for that topic and/or message type. Accordingly, the topic mapping component122may programmatically populate the fields of the topic/schema mapping226for each topic and/or each message type discovered during the process300in order to build a machine-readable and/or searchable topic/schema mapping226. As previously described, users may then search for the appropriate schema data for a particular topic and/or message type and may use the schema data to automatically interface the event-driven architecture124with other systems and/or to generate code for the event-driven architecture124. FIG.4illustrates a flow diagram of an example generation of mappings between topics and schema data for an event-driven architecture according to various aspects of the present disclosure.
Although the examples below are described with reference to the flow diagram illustrated inFIG.4, many other methods of performing the acts associated withFIG.4may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, and some of the blocks described are optional. The methods may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both. In illustrated example400, a topic mapping component422may request a list of topics pertaining to message broker426(block410). The message broker426may receive the request for the list of topics (block412). In various examples, the request for the list of topics may specify one or more publishers configured in communication with the message broker426, may request all topics handled by the message broker and/or may otherwise specify a subset of the various topics handled by message broker426. In other examples, the topic mapping component422may request a list of all topics handled by the particular message broker426. In response, message broker426may send a list of topic names (block413). The list of topic names may correspond to the requested list of topics in the request sent by topic mapping component422. For example, if the topic mapping component422requested a subset of all topics (e.g., topics related to one or more specified publishers), the list of topic names may include only those topic names related to the specified publishers. In other examples, the list of topic names may include all topic names of a particular event-driven API and/or all topic names handled by the particular message broker426. In various examples, the list of topic names may be a list of topic identifier data (e.g., data that uniquely identifies each topic so that the topics may be distinguished from one another). 
The topic mapping component422may receive the list of topic names (block414) sent by the message broker426in response to the request. For each topic name of the list of topic names, topic mapping component422may request per-topic message data from the message broker426(block416). For example, the topic mapping component422may subscribe to each topic and may sample messages from each topic. Message broker426may receive the per-topic message requests (and/or the subscriber request) from topic mapping component422(block418). Message broker426may send messages of each subscribed topic to the topic mapping component422. Message broker426may send the per-topic messages to topic mapping component422(block420). Topic mapping component422may receive the per-topic messages (block421). Topic mapping component422may determine schema data from the message header/payload (block424). For example, the topic mapping component422may include computer-executable instructions configured to parse the message header to determine if an external and/or internal reference to schema data is present. In some further examples, the topic mapping component422may include computer-executable instructions configured to parse the payload of the message to determine if the schema data is present within the payload (and/or whether an external reference is present that references a location of all or part of the schema data for the message type and/or topic). Topic mapping component422may store an association between each topic/message ID and the schema data that is associated with that topic/message ID in a data structure (block427). For example, the topic mapping component422may generate a topic/schema mapping (such as topic/schema mapping226) that associates topic IDs and/or message IDs with their associated schemas. After generating the topic/schema mapping, some time may pass (action429). 
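The broker-side exchange just described (blocks 410 through 427: request the topic list, sample per-topic messages, and store topic/message-to-schema associations) can be condensed into a short sketch. The stub broker below is a hypothetical in-memory stand-in for a real broker such as Apache Kafka, and the header field names (`"messageId"`, `"schemaRef"`) are illustrative assumptions rather than a fixed wire format:

```python
class StubBroker:
    """In-memory stand-in for a message broker; a real implementation
    would use a broker client library instead of this stub."""
    def __init__(self, topics):
        self._topics = topics  # topic name -> list of messages

    def list_topics(self):
        """Respond to a topic discovery request with the topic names."""
        return sorted(self._topics)

    def sample(self, topic, n=1):
        """Return up to n sampled messages for a subscribed topic."""
        return self._topics[topic][:n]

def build_topic_schema_mapping(broker):
    """Request the topic list, sample messages per topic, and associate
    each (topic, message type) with the schema data found in the sample."""
    mapping = {}
    for topic in broker.list_topics():
        for msg in broker.sample(topic):
            key = (topic, msg["headers"].get("messageId"))
            mapping[key] = msg["headers"].get("schemaRef")
    return mapping

broker = StubBroker({
    "lighting/measured": [
        {"headers": {"messageId": "lightMeasured",
                     "schemaRef": "#/components/schemas/lightMeasuredPayload"},
         "payload": {"lumens": 120}},
    ],
})
mapping = build_topic_schema_mapping(broker)
```

The resulting dictionary plays the role of the topic/schema mapping: a later lookup keyed by topic and message identifier returns the associated schema reference.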
A user device428may send a request for schema data for a first topic ID to topic mapping component422(block430). For example, the user device428may subscribe to a first topic handled by message broker426. Accordingly, the user device428may receive one or more messages from message broker426of the first topic. In various examples, the request for schema data may include topic identifier data (e.g., identifying the topic to which the user device428is subscribed) and/or message identifier data (e.g., metadata determined from a message header received by the user device428). Topic mapping component422may receive the request from the user device428(block432). Topic mapping component422may perform a lookup of the topic/schema mapping using the first topic ID as a query (block434). In some examples, a message type ID may also be used as a query term. For example, topics may be associated with multiple message types. Accordingly, in order to determine the correct schema data for the message type, the message ID may also be provided. The topic mapping component422may determine schema data associated with the provided topic ID and/or message ID. The topic mapping component422may send the schema data to the requesting device (block436). The user device428may receive the schema data in response to the request (block438). Thereafter, the user device428may be enabled to programmatically parse the message data and/or may use the schema to develop applications and/or APIs that may automatically ingest the message data. FIG.5is a block diagram of a system500comprising a first computing device502in communication with a message broker526according to an example of the present disclosure. First computing device502may comprise at least one processor504and non-transitory computer-readable memory503. The memory503may store non-transitory computer-readable instructions506.
The instructions506may be executed by the processor to perform various techniques described herein related to topic/schema mappings. The first computing device502may be configured in communication with a message broker526. The message broker526may have one or more registered topics including first topic507. The first topic507may be a topic of an event-driven architecture for which message broker526receives message data published by one or more publishers. Message broker526may send the message data to one or more subscribers subscribed to the first topic507. The first computing device502may send a topic discovery request511to the message broker526. In response, the message broker526may send list of topics508to the first computing device502. In various examples, first computing device502may subscribe to first topic507. Message broker526may send a first message510of the first topic507to first computing device502. The first message510may include first identifier data522that may identify the first topic507from among other topics of the message broker526. In various examples, first computing device502may subscribe to the first topic507in order to receive the first message510of the first topic507. The first message510may comprise or otherwise be associated with first schema data512. The first schema data512may include data describing content514of the first message510and/or data describing an organization516of the first message510. First computing device502may parse the first message510to determine the first schema data512and/or the first identifier data522. First computing device502may receive the first message510and may store the first schema data512′ in a first data structure520in association with the first identifier data522′. The first data structure520may be, for example, a database (e.g., a lookup table) and/or some other data structure, depending on the desired implementation. 
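The first data structure described above (e.g., a lookup table) and the query path from blocks 430 through 438 reduce to a keyed retrieval. The sketch below models the mapping as a dictionary keyed by (topic ID, message ID); the identifiers and linkage entries are illustrative values patterned on FIG. 2, not data from any real broker:

```python
# Topic/schema mapping as a lookup table keyed by (topic ID, message ID).
# Schema values and dependency/linkage entries are illustrative.
topic_schema_mapping = {
    ("1", "A"): {"schema": {"type": "object"}, "links": [("1", "B")]},
    ("1", "B"): {"schema": {"type": "object"}, "links": [("1", "A")]},
    ("2", "C"): {"schema": {"type": "object"}, "links": []},
}

def lookup_schema(mapping, topic_id, message_id):
    """Return the schema data and dependency/linkage information for a
    received message, or None if the mapping has no matching entry."""
    return mapping.get((topic_id, message_id))

# A subscriber that read Topic ID 1 / Message ID A from a message header
# queries the mapping to learn how to interpret the payload.
entry = lookup_schema(topic_schema_mapping, "1", "A")
```

Because topics may carry several message types, the message ID participates in the key; a topic with a single message type could equally be keyed by topic ID alone.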
It will be appreciated that all of the disclosed methods and procedures described herein can be implemented using one or more computer programs or components. These components may be provided as a series of computer instructions on any conventional computer readable medium or machine readable medium, including volatile or non-volatile memory, such as RAM, ROM, flash memory, magnetic or optical disks, optical memory, or other storage media. The instructions may be provided as software or firmware, and/or may be implemented in whole or in part in hardware components such as ASICs, FPGAs, DSPs or any other similar devices. The instructions may be executed by one or more processors, which when executing the series of computer instructions, performs or facilitates the performance of all or part of the disclosed methods and procedures. It should be understood that various changes and modifications to the example embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.
11861426

DETAILED DESCRIPTION

A machine, such as a computer, may be equipped with sensors that allow the machine to detect features of the environment in which it operates. For example, a machine could be equipped with a light sensor that detects the amount and/or color of light present at the machine. Or, the machine could be equipped with an accelerometer that detects changes in motion, a Global Positioning System (GPS) receiver that detects changes in absolute position, or some other type of sensor. Machines typically have some kind of operating system which, among other things, provides an interface between the machine's hardware and the software that runs on the machine. For example, an operating system may provide an application programming interface (API) that allows software to interact with the sensors. Thus, an application might call a function provided by the API to request the current acceleration vector (as read by the accelerometer), the current latitude and longitude (as read by the GPS receiver), or some other sensor reading. The way in which certain APIs are used can complicate the design of a program. For example, an API might provide a function that an application can call to request a sensor reading, as described above. While such a function allows an application to obtain the sensor readings, incorporating such readings into the application's runtime loop increases the complexity of the application. Such an application would have to include code that requests a reading periodically, or that requests a reading in response to some kind of event. In general, an application that wants to read sensors with an API may have to include significant code to initialize and instantiate the API, and to manage the data that comes from the API. The fact that such code is called for in order to use the API may discourage some software designers from using sensor data in a program.
Additionally, one implementation of an API—the Sensor API for the MICROSOFT WINDOWS operating systems—is designed in such a way that each application that uses the API instantiates a separate copy of the API, due to the in-process nature of the API. In some situations involving large numbers of applications running simultaneously, and/or large numbers of sensors, this design may consume excess system resources. As an alternative to making direct function calls to read sensors, a technology such as Component Object Model (COM) could be used. Thus, a COM object could obtain sensor readings (e.g., using the Sensor API), and applications that want to obtain the readings could implement callbacks specified by the COM interface, which the COM object would use to send sensor data to the applications in response to certain types of events. However, in the context of reading sensors, a COM implementation might consume more resources than are needed. The subject matter herein uses a messaging protocol to communicate sensor information to applications. A sensor service acts as an intermediary between applications and a sensor interface, and communicates sensor data to the application in the form of messages. Thus, the sensor service uses the sensor interface (e.g., the Sensor API for the MICROSOFT WINDOWS operating systems) to obtain sensor data. The sensor service also receives subscription requests from applications, whereby applications request to receive sensor data (or certain types of sensor data). The sensor service then pushes, to the applications, messages containing sensor data to which the applications have subscribed. The messaging protocol may be made simple and lightweight, thereby putting relatively little tax on a machine's resources. In one example, the sensor service may simply pass along raw data received from sensors. However, in another example, the sensor service may refine the data in some way.
For example, a light sensor reading may contain detailed values in some color space (e.g., separate red, green, and blue values). However, if an application cares only about the color temperature that is present at a machine, then the sensor service might convert the red, green, and blue values into a color temperature value, which it could provide to the application. Additionally, refinement of data can be based on information from multiple sensors, or on information other than the sensors themselves. For example, the sensor service might contain logic that detects motion based on a combination of accelerometer and GPS data, or might determine the location of the nearest restaurant (or gas station, or hospital) based on the current latitude and longitude (as read from the GPS receiver) and further based on a database that indicates the latitude and longitude of certain establishments. Once the sensor service is equipped with such logic, the sensor service can use the same basic message infrastructure to provide either raw sensor data, or information based on arbitrary levels of abstraction. Turning now to the drawings,FIG.1shows an example scenario in which messages may be used to communicate sensor data, and other data, to applications. In the scenario ofFIG.1, various sensors102,104, and106collect various types of data. For example, sensor102may be a light sensor that detects the color and/or temperature of light in the vicinity of the light sensor. Sensor104may be an accelerometer that detects the direction and/or magnitude of acceleration that the sensor is undergoing. Sensor106may be a GPS receiver that communicates with satellites in order to triangulate the sensor's current latitude and longitude. Sensors102,104, and106may be attached to a particular computer or other machine (e.g., machine108). In such a case, these sensors effectively sense the acceleration, latitude, longitude, etc., of the machine to which they are attached. 
However, sensors102-106could be physically disassociated from machine108. Moreover,FIG.1shows an example in which the sensors are a light meter, an accelerometer, and a GPS receiver, but the subject matter herein could be used with any type of sensor. An operating environment present at machine108may provide a sensor application programming interface (API)110. Sensor API110provides a mechanism through which programs that execute on machine108may interact with sensors102-106. An application (or an operating system component, a driver, a plug-in, etc.) could use sensor API110to read the values of sensors102-106. Thus, if a particular program wants to know the current latitude and longitude of machine108, then that program could issue a call to a function provided by sensor API110. The function may communicate with the relevant hardware driver for the GPS receiver (which is sensor106in this example), and may return the current latitude and longitude readings. A set of functions that programs can call to read sensor values is one example implementation of sensor API110. However, sensor API110could be implemented in any appropriate manner. As another example, sensor API110could be implemented as a Component Object Model (COM) object, where a program implements a set of callbacks that allows the COM object to communicate with the program. While an application program could interact with sensor API110directly, in the example ofFIG.1the direct consumer of the information that sensor API110provides is sensor service112. Sensor service112uses sensor API to gather data from sensors102-106, and then packages this sensor data (or information derived from the sensor data) in the form of messages, such as message114. These messages may be provided to an application, such as application116. Sensor service112may use a message service to deliver messages to applications. 
The message service could be provided by sensor service112; or, sensor service could make use of a message service that is provided by some other component (such as a message service that the operating system provides so as to allow different executable components on a given machine to communicate with each other). An application, such as application116, may subscribe to certain sensor events by registering with sensor service112to receive notifications of those events. For example, application116might subscribe to receive notifications of changes in machine108's latitude and longitude. In such an example, sensor service112might use sensor API110to poll sensor106(the GPS receiver) periodically for the current latitude and longitude. Sensor service112could then generate messages when the latitude and longitude change, or could issue a message after the passage of some amount of time (e.g., every minute) even if there is no change. In addition to application116, there could be one or more other applications on machine108, such as applications118and120. These applications could register separately for notifications of certain events (as indicated by the dashed lines from applications118and120to sensor service112). For example, application118might register to receive notification of changes in acceleration, and application120might register to receive notification of light readings and latitude/longitude. Any application could register to receive any type of messages from sensor service112. In one example, messages are used to convey raw sensor data. However, in other examples, messages could be used to convey higher-level conclusions that are derived from sensor data, from other data, or from a combination of both sensor data and other data. For example, sensor service112may obtain data122from sources other than the sensors themselves.
Sensor service112may employ one or more high-level models124that, when forming conclusions, take into account data from sensors102-106, other data122, or some combination of sensor data and other data. For example, a high-level model could attempt to determine where a person is going, based on a person's calendar, and also based on changes in the location of a device the person is carrying. In such a case, a high-level model could combine sensor data (latitude and longitude reported by a GPS receiver) with other data122(which, in this case, would be appointments from a person's calendar), in order to form a conclusion about where the person is going. (E.g., the model might reason, "The person who owns this device is walking toward the administration building, and has an appointment with the company president on his calendar; therefore, the person's current destination is the office of the company president.") An application could subscribe to "destination" notifications, and sensor service112could issue a message to that application when an appropriate high level model has determined where the person is going. It is noted that, when a model is used to draw conclusions from sensor data and/or from other data, the conclusions drawn by the model (and, therefore, the information contained in messages based on the model) would differ from the raw sensor data. Thus, if a model determines that a person is walking based on changes in accelerometer readings, the messages that are sent to an application might indicate that a person has started and/or stopped walking. The accelerometer senses acceleration vectors, but does not sense the commencement or cessation of walking directly, and thus messages based on a "walking" model would differ from the actual sensor readings. FIG.2shows an example of how a sensor service may use sensor readings, and possibly other data, to create messages. In the example ofFIG.2, sensor service112takes readings202,204, and206, from a sensor.
Sensor service112may take readings202-206in any manner. For example, there may be a sensor interface (such as sensor API110, shown inFIG.1), which allows programs to take sensor readings, although the subject matter herein is not limited to the example in which a sensor API is used. In the example ofFIG.2, the sensor from which readings are taken is accelerometer208, although any type of sensor could be used. In the case where the sensor is an accelerometer, that sensor may produce acceleration vectors as readings. For example, reading202contains an indication of a particular acceleration vector (given by the acceleration values in the X, Y, and Z dimensions). Readings204and206could show the same values for the acceleration vector (if the acceleration does not change between readings), or could show different values for the acceleration vector (if the acceleration vector has changed between readings). Sensor service112may generate messages based on sensor readings, where the messages are to be sent to subscribing applications. An application, such as application210, may subscribe to certain types of messages by registering with sensor service112. In one example, application210registers to receive sensor data. Thus, application210could receive a message212, which indicates that the acceleration vector has changed, and also indicates the current value of the acceleration vector. In the example ofFIG.2, the sending of message212may be triggered by a change in the acceleration vector. That is, sensor service112could monitor accelerometer readings, and could send a message to subscribing applications whenever the value of the sensor reading changes. However, sensor service112could send a message in response to any sort of trigger, of which a change in a sensor reading is merely one example.
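The change-triggered behavior just described can be sketched as follows: the service inspects successive accelerometer readings and emits a message only when the vector differs from the one last reported. The tuple reading format and message fields are illustrative assumptions:

```python
def messages_on_change(readings):
    """Produce one message per change in the sensed acceleration vector,
    suppressing repeats of an unchanged reading."""
    last = None
    out = []
    for vector in readings:
        if vector != last:  # the change is the trigger to send a message
            out.append({"event": "acceleration-changed", "vector": vector})
            last = vector
    return out

# Four polled readings, of which only two represent actual changes.
readings = [(0, 0, 9.8), (0, 0, 9.8), (0.5, 0, 9.8), (0.5, 0, 9.8)]
msgs = messages_on_change(readings)
```

A time-based trigger (a message every n seconds regardless of change, as in the next paragraph) would simply omit the comparison against the previous reading.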
As another example, sensor service112could send a message after the passage of n units of time (e.g., after every n seconds), in which case the passage of time is the trigger to send the message. While sensor service112could send messages to report on sensor readings, sensor service112could also send messages to report on higher-level concepts based on abstract models. For example, sensor service112could send message214to indicate that a particular type of motion has started or stopped. In the example ofFIG.2, accelerometer208might be attached to a device that can be carried by a person, and message214might indicate that the person is walking. The conclusion that the person is walking might be based on the use of high-level models of different types of motion to analyze the pattern of changes in the acceleration vector. Thus, sensor service112could send messages that are based on raw sensor data (as in the case of message212), or could send messages that are based on conclusions that high-level models draw from the sensor data (as in the case of message214). As noted above, in connection withFIG.1, when sensor service112uses high-level models to generate messages, sensor service112may use sensor data, but may also use other data122, which could be any type of data (e.g., data from a database, data from a user's calendar, data retrieved from the Internet, etc.). FIG.3shows, in the form of a flow chart, an example process in which messages may be generated and sent to subscribing applications. Before turning to a description ofFIG.3, it is noted that the flow diagram ofFIG.3is described, by way of example, with reference to components shown inFIGS.1-2, although this process may be carried out in any system and is not limited to the scenarios shown inFIGS.1-2.
Additionally, the flow diagram inFIG.3shows an example in which stages of a process are carried out in a particular order, as indicated by the lines connecting the blocks, but the various stages shown in this diagram can be performed in any order, or in any combination or sub-combination. At302, a subscription request may be received from an application (or other type of program). Thus, an application may subscribe to receive notifications of sensor values, or to receive certain kinds of events with respect to sensor values. For example, an application might subscribe to receive accelerometer readings. The application might subscribe to a message reporting the acceleration vector every five seconds, or every time the acceleration vector changes, or in response to some other trigger. The application might request all acceleration data, or only certain acceleration data. At304, a sensor interface is used to read sensor values. For example, sensor service112(shown inFIG.1) might use sensor API110(also shown inFIG.1) to read sensor values. Sensor service112might have a loop that periodically reads values from the sensors using sensor API110(or some other sensor interface), and that determines, based on the values and changes thereto, what messages to report. At306, models may be applied to sensor values. A model may attempt to draw conclusions from raw sensor data—e.g., a model might conclude that a device is being moved through human walking, based on an analysis of acceleration vector readings. Models may be based solely on sensor data, or may be based on some combination of sensor data and other data122, as previously described in connection withFIGS.1and2. While a sensor service could apply a model at306, in an alternative example the sensor service might apply no model and might simply report raw sensor data. At308, a message may be created to convey sensor data and/or to convey conclusions formed by high-level models.
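As one way to picture the model-application stage at306, a toy "walking" model can be run over a window of acceleration vectors and emit started/stopped conclusions rather than raw readings. The window-based spread heuristic and its threshold below are illustrative assumptions, not the models contemplated by the specification.

```python
import math

def magnitude(vec):
    return math.sqrt(sum(c * c for c in vec))

def detect_walking(window, threshold=0.5):
    # Toy model: 'walking' when acceleration magnitudes vary enough across
    # the window. The threshold value is an illustrative assumption.
    mags = [magnitude(v) for v in window]
    return (max(mags) - min(mags)) > threshold

def messages_from_readings(readings, window_size=4):
    # Turn raw readings into start/stop conclusions, as at steps 306-308.
    messages, walking = [], False
    for i in range(window_size, len(readings) + 1):
        now_walking = detect_walking(readings[i - window_size:i])
        if now_walking != walking:
            messages.append("WALKING_STARTED" if now_walking else "WALKING_STOPPED")
            walking = now_walking
    return messages

still = [(0.0, 0.0, 9.8)] * 4
moving = [(0.0, 0.0, 9.8), (1.5, 0.2, 9.1), (0.2, 0.1, 10.4), (1.1, 0.0, 9.0)]
msgs = messages_from_readings(still + moving + still)
# A subscribing application would receive only the two conclusions,
# not the twelve underlying acceleration vectors.
```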
In one example (e.g., in certain versions of the MICROSOFT WINDOWS operating systems), a message with the type "WM_CONTEXT" may be created, where the format of the WM_CONTEXT message could be made generally known to software developers via a published specification. Thus, third-party applications and other third-party programs can receive and interpret sensor data (and other types of information) that is conveyed in a WM_CONTEXT message. However, the subject matter herein is not limited to the use of a WM_CONTEXT message; any type of message could be used. At310, the message that was created may be provided to a subscribing application (or other program). For example, a messaging infrastructure could be used to push a message, such as a WM_CONTEXT message, to an application that has subscribed to receive such messages. At312, the application (or other program) may take a tangible action based on the received message. For example, the application could communicate information to a person based on the received message, or could store information based on the message in some tangible form. To use one example from above, if the message indicates the current accelerometer readings, an application could display the current accelerometer readings contained in the message. FIG.4shows an example environment in which aspects of the subject matter described herein may be deployed. Computer400includes one or more processors402and one or more data remembrance components404. Processor(s)402are typically microprocessors, such as those found in a personal desktop or laptop computer, a server, a handheld computer, or another kind of computing device. Data remembrance component(s)404are components that are capable of storing data for either the short or long term.
Examples of data remembrance component(s)404include hard disks, removable disks (including optical and magnetic disks), volatile and non-volatile random-access memory (RAM), read-only memory (ROM), flash memory, magnetic tape, etc. Data remembrance component(s) are examples of computer-readable storage media. Computer400may comprise, or be associated with, display412, which may be a cathode ray tube (CRT) monitor, a liquid crystal display (LCD) monitor, or any other type of monitor. Software may be stored in the data remembrance component(s)404, and may execute on the one or more processor(s)402. An example of such software is message generation software406, which may implement some or all of the functionality described above in connection withFIGS.1-3, although any type of software could be used. Software406may be implemented, for example, through one or more components, which may be components in a distributed system, separate files, separate functions, separate objects, separate lines of code, etc. A computer (e.g., personal computer, server computer, handheld computer, etc.) in which a program is stored on hard disk, loaded into RAM, and executed on the computer's processor(s) typifies the scenario depicted inFIG.4, although the subject matter described herein is not limited to this example. The subject matter described herein can be implemented as software that is stored in one or more of the data remembrance component(s)404and that executes on one or more of the processor(s)402. As another example, the subject matter can be implemented as instructions that are stored on one or more computer-readable storage media. (Tangible media, such as optical disks or magnetic disks, are examples of storage media.) Such instructions, when executed by a computer or other machine, may cause the computer or other machine to perform one or more acts of a method.
The instructions to perform the acts could be stored on one medium, or could be spread out across plural media, so that the instructions might appear collectively on the one or more computer-readable storage media, regardless of whether all of the instructions happen to be on the same medium. Additionally, any acts described herein (whether or not shown in a diagram) may be performed by a processor (e.g., one or more of processors402) as part of a method. Thus, if the acts A, B, and C are described herein, then a method may be performed that comprises the acts of A, B, and C. Moreover, if the acts of A, B, and C are described herein, then a method may be performed that comprises using a processor to perform the acts of A, B, and C. In one example environment, computer400may be communicatively connected to one or more other devices through network408. Computer410, which may be similar in structure to computer400, is an example of a device that can be connected to computer400, although other types of devices may also be so connected. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. | 22,213 |
11861427

DETAILED DESCRIPTION

The following description provides specific details for a thorough understanding of, and enabling description for, various examples of the technology. One skilled in the art will understand that the technology may be practiced without many of these details. In some instances, well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of examples of the technology. It is intended that the terminology used in this disclosure be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain examples of the technology. Although certain terms may be emphasized below, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section. Throughout the specification and claims, the following terms take at least the meanings explicitly associated herein, unless the context dictates otherwise. The meanings identified below do not necessarily limit the terms, but merely provide illustrative examples for the terms. For example, each of the terms "based on" and "based upon" is not exclusive, and is equivalent to the term "based, at least in part, on", and includes the option of being based on additional factors, some of which may not be described herein. As another example, the term "via" is not exclusive, and is equivalent to the term "via, at least in part", and includes the option of being via additional factors, some of which may not be described herein. The meaning of "in" includes "in" and "on." The phrase "in one embodiment," or "in one example," as used herein does not necessarily refer to the same embodiment or example, although it may. Use of particular textual numeric designators does not imply the existence of lesser-valued numerical designators.
For example, reciting “a widget selected from the group consisting of a third foo and a fourth bar” would not itself imply that there are at least three foo, nor that there are at least four bar, elements. References in the singular are made merely for clarity of reading and include plural references unless plural references are specifically excluded. The term “or” is an inclusive “or” operator unless specifically indicated otherwise. For example, the phrases “A or B” means “A, B, or A and B.” As used herein, the terms “component” and “system” are intended to encompass hardware, software, or various combinations of hardware and software. Thus, for example, a system or component may be a process, a process executing on a computing device, the computing device, or a portion thereof. Briefly stated, the disclosed technology is generally directed to blockchain technology. In one example of the technology, a first transaction node of a hosted permissioned blockchain network is provisioned for a first consortium member of a plurality of consortium members of the hosted permissioned blockchain network. In some examples, a shared pool of validator nodes of the hosted permissioned blockchain network is provisioned. In some examples, the shared pool of validator nodes includes at least one validator node. In some examples, the shared pool of validator nodes is shared among the plurality of consortium members of the hosted permissioned blockchain network. In some examples, the validator nodes of the shared pool of validator nodes are configured for blockchain transaction validation based on abyzantinefault tolerance (BFT) consensus protocol. In some examples, a second transaction node of the hosted permissioned blockchain network is provisioned for a second consortium member of the plurality of consortium members of the hosted permissioned blockchain network. 
In some examples, each transaction node of the hosted permissioned blockchain network is separate from each validator node of the hosted permissioned blockchain network. In some examples, a hosted service is capable of hosting managed, cloud-hosted permissioned blockchain networks for clients. In the background art, each consortium member of a permissioned blockchain network typically has one or more validator nodes. Typically, during normal operations, the validator nodes validate and process submitted blockchain transactions, and execute blockchain logic, as well as performing functions such as participating in the governance of the blockchain network. The validation of transactions may include confirming transactions through a consensus protocol. In some examples of the disclosed blockchain network, the blockchain network provides validator nodes and transaction nodes as separate discrete devices. The validator nodes in the shared pool of validator nodes may confirm blockchain transactions using a consensus protocol. A Byzantine fault tolerance (BFT) mechanism, such as Istanbul BFT (IBFT), or another suitable BFT mechanism, may be used as the consensus mechanism for the permissioned blockchain network. When a first consortium member begins a permissioned blockchain network, in some examples, the hosted service provisions a transaction node for the consortium member, and provisions a shared pool of validator nodes. When subsequent new consortium members join the permissioned blockchain network, in some examples, a transaction node is added for the new consortium member, but no new validator nodes are provisioned. In some examples, all consortium members of the permissioned blockchain share the shared pool of validator nodes. Illustrative Devices/Operating Environments FIG.1is a diagram of environment100in which aspects of the technology may be practiced. As shown, environment100includes computing devices110, as well as network nodes120, connected via network130.
Even though particular components of environment100are shown inFIG.1, in other examples, environment100can also include additional and/or different components. For example, in certain examples, the environment100can also include network storage devices, maintenance managers, and/or other suitable components (not shown). Computing devices110shown inFIG.1may be in various locations, including on premise, in the cloud, or the like. For example, computer devices110may be on the client side, on the server side, or the like. As shown inFIG.1, network130can include one or more network nodes120that interconnect multiple computing devices110, and connect computing devices110to external network140, e.g., the Internet or an intranet. For example, network nodes120may include switches, routers, hubs, network controllers, or other network elements. In certain examples, computing devices110can be organized into racks, action zones, groups, sets, or other suitable divisions. For example, in the illustrated example, computing devices110are grouped into three host sets identified individually as first, second, and third host sets112a-112c. In the illustrated example, each of host sets112a-112cis operatively coupled to a corresponding network node120a-120c, respectively, which are commonly referred to as "top-of-rack" or "TOR" network nodes. TOR network nodes120a-120c can then be operatively coupled to additional network nodes120to form a computer network in a hierarchical, flat, mesh, or other suitable types of topology that allows communications between computing devices110and external network140. In other examples, multiple host sets112a-112c may share a single network node120. Computing devices110may be virtually any type of general- or specific-purpose computing device. For example, these computing devices may be user devices such as desktop computers, laptop computers, tablet computers, display devices, cameras, printers, or smartphones.
However, in a data center environment, these computing devices may be server devices such as application server computers, virtual computing host computers, or file server computers. Moreover, computing devices110may be individually configured to provide computing, storage, and/or other suitable computing services. Illustrative Computing Device FIG.2is a diagram illustrating one example of computing device200in which aspects of the technology may be practiced. Computing device200may be virtually any type of general- or specific-purpose computing device. For example, computing device200may be a user device such as a desktop computer, a laptop computer, a tablet computer, a display device, a camera, a printer, an embedded device, a programmable logic controller (PLC), or a smartphone. Likewise, computing device200may also be a server device such as an application server computer, a virtual computing host computer, or a file server computer, e.g., computing device200may be an example of computing device110or network node120ofFIG.1. Computing device200may also be an IoT device that connects to a network to receive IoT services. Likewise, computer device200may be an example of any of the devices, nodes, members, or other entities illustrated in or referred to in various figures, as discussed in greater detail below. As illustrated inFIG.2, computing device200includes processing circuit210, operating memory220, memory controller230, data storage memory250, input interface260, output interface270, and network adapter280. Each of these afore-listed components of computing device200includes at least one hardware element. Computing device200includes at least one processing circuit210configured to execute instructions, such as instructions for implementing the herein-described workloads, processes, or technology.
Processing circuit210may include a microprocessor, a microcontroller, a graphics processor, a coprocessor, a field-programmable gate array, a programmable logic device, a signal processor, or any other circuit suitable for processing data. The aforementioned instructions, along with other data (e.g., datasets, metadata, operating system instructions, etc.), may be stored in operating memory220during run-time of computing device200. Operating memory220may also include any of a variety of data storage devices/components, such as volatile memories, semi-volatile memories, random access memories, static memories, caches, buffers, or other media used to store run-time information. In one example, operating memory220does not retain information when computing device200is powered off. Rather, computing device200may be configured to transfer instructions from a non-volatile data storage component (e.g., data storage component250) to operating memory220as part of a booting or other loading process. Operating memory220may include 4th generation double data rate (DDR4) memory, 3rd generation double data rate (DDR3) memory, other dynamic random access memory (DRAM), High Bandwidth Memory (HBM), Hybrid Memory Cube memory, 3D-stacked memory, static random access memory (SRAM), or other memory, and such memory may comprise one or more memory circuits integrated onto a DIMM, SIMM, SODIMM, or other packaging. Such operating memory modules or devices may be organized according to channels, ranks, and banks. For example, operating memory devices may be coupled to processing circuit210via memory controller230in channels. One example of computing device200may include one or two DIMMs per channel, with one or two ranks per channel. Operating memory within a rank may operate with a shared clock, and shared address and command bus. Also, an operating memory device may be organized into several banks where a bank can be thought of as an array addressed by row and column. 
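As an illustration of the channel, rank, bank, row, and column organization just described, a flat physical address can be split into such a tuple. The field ordering and bit widths below are arbitrary assumptions made for the sketch; real memory controllers use controller-specific address mappings.

```python
def decode_physical_address(addr, widths=(2, 1, 3, 16, 10)):
    # widths = bits for (channel, rank, bank, row, column) -- assumed values.
    names = ("channel", "rank", "bank", "row", "column")
    fields = {}
    # Peel fields off from the least-significant end: column first.
    for name, width in zip(reversed(names), reversed(widths)):
        fields[name] = addr & ((1 << width) - 1)
        addr >>= width
    return tuple(fields[name] for name in names)

# Compose an address with channel=1, rank=0, bank=5, row=1234, column=77,
# then recover the tuple from the flat address.
addr = (1 << 30) | (5 << 26) | (1234 << 10) | 77
```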
Based on such an organization of operating memory, physical addresses within the operating memory may be referred to by a tuple of channel, rank, bank, row, and column. Despite the above discussion, operating memory220specifically does not include or encompass communications media, any communications medium, or any signals per se. Memory controller230is configured to interface processing circuit210to operating memory220. For example, memory controller230may be configured to interface commands, addresses, and data between operating memory220and processing circuit210. Memory controller230may also be configured to abstract or otherwise manage certain aspects of memory management from or for processing circuit210. Although memory controller230is illustrated as a single memory controller separate from processing circuit210, in other examples, multiple memory controllers may be employed, memory controller(s) may be integrated with operating memory220, or the like. Further, memory controller(s) may be integrated into processing circuit210. These and other variations are possible. In computing device200, data storage memory250, input interface260, output interface270, and network adapter280are interfaced to processing circuit210by bus240. AlthoughFIG.2illustrates bus240as a single passive bus, other configurations, such as a collection of buses, a collection of point-to-point links, an input/output controller, a bridge, other interface circuitry, or any collection thereof may also be suitably employed for interfacing data storage memory250, input interface260, output interface270, or network adapter280to processing circuit210. In computing device200, data storage memory250is employed for long-term non-volatile data storage.
Data storage memory250may include any of a variety of non-volatile data storage devices/components, such as non-volatile memories, disks, disk drives, hard drives, solid-state drives, or any other media that can be used for the non-volatile storage of information. However, data storage memory250specifically does not include or encompass communications media, any communications medium, or any signals per se. In contrast to operating memory220, data storage memory250is employed by computing device200for non-volatile long-term data storage, instead of for run-time data storage. Also, computing device200may include or be coupled to any type of processor-readable media such as processor-readable storage media (e.g., operating memory220and data storage memory250) and communication media (e.g., communication signals and radio waves). While the term processor-readable storage media includes operating memory220and data storage memory250, the term "processor-readable storage media," throughout the specification and the claims whether used in the singular or the plural, is defined herein so that the term "processor-readable storage media" specifically excludes and does not encompass communications media, any communications medium, or any signals per se. However, the term "processor-readable storage media" does encompass processor cache, Random Access Memory (RAM), register memory, and/or the like. Computing device200also includes input interface260, which may be configured to enable computing device200to receive input from users or from other devices. In addition, computing device200includes output interface270, which may be configured to provide output from computing device200. In one example, output interface270includes a frame buffer, graphics processor, or accelerator, and is configured to render displays for presentation on a separate visual display device (such as a monitor, projector, virtual computing client computer, etc.).
In another example, output interface270includes a visual display device and is configured to render and present displays for viewing. In the illustrated example, computing device200is configured to communicate with other computing devices or entities via network adapter280. Network adapter280may include a wired network adapter, e.g., an Ethernet adapter, a Token Ring adapter, or a Digital Subscriber Line (DSL) adapter. Network adapter280may also include a wireless network adapter, for example, a Wi-Fi adapter, a Bluetooth adapter, a ZigBee adapter, a Long Term Evolution (LTE) adapter, or a 5G adapter. Although computing device200is illustrated with certain components configured in a particular arrangement, these components and arrangements are merely one example of a computing device in which the technology may be employed. In other examples, data storage memory250, input interface260, output interface270, or network adapter280may be directly coupled to processing circuit210, or be coupled to processing circuit210via an input/output controller, a bridge, or other interface circuitry. Other variations of the technology are possible. Some examples of computing device200include at least one memory (e.g., operating memory220) adapted to store run-time data and at least one processor (e.g., processing circuit210) that is respectively adapted to execute processor-executable code that, in response to execution, enables computing device200to perform actions, such as, for example, one or more of the processes discussed in greater detail below. Illustrative Systems FIG.3is a block diagram illustrating an example of a portion (301) of a hosted blockchain service architecture. Portion301may include client device341, firewall335, blockchain member361, blockchain management device331, hosted portal332, and shared validator node pool350. Shared validator node pool350may include validator nodes, such as validator nodes351-354.
Blockchain member361may include proxy362, transaction node371, and hosted storage363. Client device341may be an Ethereum client in some examples. Client device341, blockchain management device331, transaction node371, and the validator nodes in shared validator node pool350may include examples of computing device200ofFIG.2. Various components ofFIG.3may include or be a part of a distributed system of devices, where the devices in the distributed system may include examples of computing device200ofFIG.2.FIG.3and the corresponding description ofFIG.3in the specification illustrate an example system for illustrative purposes that does not limit the scope of the disclosure. Blockchain management device331may be part of a distributed system controlled by a hosted blockchain service. Client device341may be a device controlled by a client of the hosted blockchain service. Blockchain member361may be part of the hosted blockchain service, where blockchain member361is provisioned and hosted by the hosted blockchain service on behalf of the client. Blockchain management device331may host transaction node371via hosted portal332. In some examples, transaction node371has an exposed remote procedure call (RPC) endpoint to which it is possible to send transactions. Accordingly, client device341may communicate with transaction node371of blockchain member361via proxy362using a remote procedure call (RPC). Blockchain member361may be protected from unauthorized communication via firewall335. Client device341may accordingly control hosted transaction node371via proxy362. Transaction node371, hosted by the blockchain host service, may host storage363, and communicate with shared validator node pool350. In some examples, unlike transaction node371, the validator nodes in shared validator node pool350do not have an exposed RPC endpoint. In some examples, the validator nodes are only accessible by the host; the clients cannot access the validator nodes, not even remotely.
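The access rule just described (a transaction node exposes an RPC endpoint to its owning consortium member; validator nodes expose none) might be pictured as follows. The class and method names are hypothetical, not the hosted service's actual API, and the permission checks are a simplification of the proxy/firewall arrangement in the figure.

```python
class TransactionNode:
    # Hosted per consortium member; exposes an RPC endpoint to its owner only.
    def __init__(self, owner):
        self.owner = owner

    def rpc(self, caller, transaction):
        if caller != self.owner:
            raise PermissionError("only the owning member may use this endpoint")
        return ("accepted", transaction)

class ValidatorNode:
    # Host-managed; no RPC endpoint is exposed to any consortium member.
    def rpc(self, caller, transaction):
        raise PermissionError("validator nodes expose no RPC endpoint")

node = TransactionNode("member-a")
result = node.rpc("member-a", "tx-1")   # the owning member's call is accepted
```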
In some examples, clients cannot change settings, install software, or the like on the validator nodes. In some examples, each client has access only to the RPC endpoint of its transaction node. In some examples, client device341can change various configuration settings of transaction node371via the management plane, via making a request to the host, which can make the change via the management plane of blockchain management device331and hosted portal332. AlthoughFIG.3shows one transaction node, transaction node371, in blockchain member361, in some examples, blockchain member361may have more than one transaction node. In general, each blockchain member may have multiple transaction nodes each belonging to the same consortium member. In the architecture ofFIG.3, in some examples, validator nodes and transaction nodes are separate devices. Further, in the architecture ofFIG.3, in some examples, each consortium member of a blockchain network has its own blockchain member, where each blockchain member contains at least one transaction node, and the consortium member can access the blockchain member via the exposed RPC endpoint of the transaction node(s) in the blockchain member. In some examples, the validator nodes are not included in any of the blockchain members. Rather, in some examples, the validator nodes act as a shared pool of validator nodes shared among the consortium members and that are managed by the host. In some examples, the provisioning and management of blockchain members and the validator nodes is provided by a resource provider that is under the control of the host, where the resource provider may be a distributed system. FIG.4is a block diagram illustrating an example of system400. System400may include client devices441-443, blockchain management device431, transaction nodes471-473, and shared validator node pool450. Shared validator node pool450may include validator nodes such as validator nodes451-454.
Blockchain management device431may be part of a distributed system controlled by a hosted blockchain service that hosts managed, cloud-hosted permissioned blockchain networks for clients. Client devices441-443may be devices controlled by clients of the hosted blockchain service. Blockchain management device431may host transaction nodes471-473and shared validator node pool450. In some examples of system400, functions that are typically performed by a validator node are instead separated so that there are separate validator nodes and transaction nodes, with the transaction nodes being devices that are separate and discrete from the validator nodes. In some examples, the transaction nodes are separate and discrete physical devices from the validator nodes. In some examples, the transaction nodes are separate and discrete virtual devices from the validator nodes. The transaction nodes may receive and process blockchain transactions for the cloud-hosted permissioned blockchain network. The transaction nodes may also respond to queries on the data, enabling a transaction to be viewed by an authorized party. The validator nodes may confirm blockchain transactions for the cloud-hosted permissioned blockchain network using a BFT consensus protocol and commit validated transactions to the blockchain. In some examples, the transaction nodes perform blockchain transactions, but do not commit transactions to the blockchain. In some examples, blockchain transactions are committed by the validator nodes upon validation based on consensus as determined by the consensus protocol. "Committing the blockchain transactions" refers to committing the transactions to the blockchain. In some examples, the provisioning and management of blockchain members and the validation pools is provided by a resource provider that is under the control of the host, where the resource provider may be a distributed system.
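A commit decision by the shared validator pool can be sketched with the usual BFT quorum rule, under which a transaction is committed only when strictly more than 2/3 of the validators agree. The 2/3 threshold is the standard assumption for BFT-family protocols such as IBFT; the actual multi-round IBFT message exchange, and the class and method names below, are simplifications rather than the hosted service's implementation.

```python
class SharedValidatorPool:
    # Sketch: only the shared validator pool commits transactions to the chain.
    def __init__(self, n_validators):
        self.n = n_validators
        self.chain = []

    def validate(self, transaction, approvals):
        # BFT-style quorum: commit only with strictly more than 2/3 agreement.
        if approvals * 3 > 2 * self.n:
            self.chain.append(transaction)
            return True
        return False

pool = SharedValidatorPool(4)
committed = pool.validate("tx-1", approvals=3)   # 3 of 4 > 2/3: committed
rejected = pool.validate("tx-2", approvals=2)    # 2 of 4: no consensus
```

Under this rule the transaction nodes never touch `chain`; they only submit transactions for the pool to decide on, matching the separation of duties described above.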
In some examples, the resource provider provides cloud-hosted permissioned Quorum-based blockchain networks using the IBFT consensus mechanism in a virtual machine environment. In some examples, when a client begins a permissioned blockchain network as the first consortium member of the blockchain, the resource provider of the hosted service provisions both a transaction node for the consortium member and a shared pool of validator nodes. In some examples, the consensus mechanism may be a suitable BFT consensus mechanism other than IBFT. In some examples, although the blockchain networks are Quorum-based, the networks vary from known Quorum-based networks in various ways, such as using transaction nodes that are separate from validator nodes, by using a shared pool of validator nodes, and/or in other ways discussed herein. In some examples, the network may be based on a suitable platform other than Quorum, such as Hyperledger Besu, as but one example. In some examples, the number of validator nodes used in a permissioned blockchain network hosted by the hosted service varies depending on factors such as fault tolerance and the type of consensus protocol used. In some examples, with regard to the IBFT consensus protocol, the IBFT protocol ensures network consensus in the event that no more than ⅓ of the number of validator nodes minus one is faulty, where "faulty" is defined as either a node failure (e.g., a node crashing) or a node acting in a malicious manner (e.g., forging a transaction). In some examples, because each of the validator nodes is managed by the hosted service, however, there is no risk of a malicious node, and thus it only need be ensured that no more than ⅓ of the number of validator nodes minus one has a node failure—the IBFT formula is unchanged, but the possible causes of a faulty node are reduced.
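The fault-tolerance arithmetic just described can be stated directly: to tolerate f faulty validator nodes, 1 + 3f nodes are provisioned, and an n-node pool tolerates floor((n − 1) / 3) faults. The function names below are illustrative.

```python
def validators_needed(f):
    # Validator nodes required to tolerate f faulty nodes under IBFT: 1 + 3*f.
    return 1 + 3 * f

def faults_tolerated(n):
    # Faults an n-validator IBFT pool can sustain: floor((n - 1) / 3).
    return (n - 1) // 3

# The pool sizes discussed in the text: 1, 4, and 7 validator nodes.
sizes = [validators_needed(f) for f in (0, 1, 2)]
# A pool of 6 tolerates no more faults than a pool of 4, so intermediate
# sizes add nodes without adding fault tolerance.
```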
In some examples, to guard against N failures, 1+N*3 validator nodes are used, such as 1 node for zero failures, 4 nodes for one failure, and 7 nodes for two failures. Accordingly, in some examples, when a new cloud-hosted permissioned blockchain network is being set up, the hosted service may determine the fault tolerance of the client, and provision the number of validator nodes in the shared validator node pool for the cloud-hosted permissioned blockchain network accordingly. In some examples, the available options for the cardinality of validator nodes in the shared validator node pool are 1 validator node, 4 validator nodes, or 7 validator nodes. In other examples, 4 validator nodes are always used, rather than providing an option. In some examples, the option of 1 validator node may be provided if the user wishes to use the network for the purpose of development and testing. In some examples, a greater number of validator nodes may also be selected, such as 10 validator nodes, 13 validator nodes, and so on. In some examples, because there is no risk of a malicious node, there is reduced need to be able to sustain a large number of faults in the blockchain network, so that sustaining more than two faults is unnecessary and no more than 7 validator nodes are necessary. In some examples, using a number of validator nodes other than 1+N*3 is a waste of resources, because additional nodes are being used without contributing to the fault tolerance. Furthermore, in IBFT, use of 2, 3, or 6 validator nodes may result in a deadlocked state in which consensus cannot be reached on a transaction, and, accordingly, use of 1+N*3 may also be worthwhile in terms of preventing such a deadlocked state from occurring. When subsequent new consortium members join the permissioned blockchain network, in some examples, a transaction node is provisioned for the new consortium member, but no new validator nodes are provisioned.
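The 1+N*3 sizing rule and the IBFT fault bound described above can be expressed directly; the function names below are illustrative, not part of any product API:

```python
def validators_needed(max_failures: int) -> int:
    """Validator nodes required to tolerate max_failures faulty nodes (1 + N*3)."""
    return 1 + 3 * max_failures

def failures_tolerated(validator_count: int) -> int:
    """IBFT tolerates no more than floor((n - 1) / 3) faulty validators."""
    return (validator_count - 1) // 3

# Pool sizes offered in some examples: 1 (dev/test), 4, or 7 validator nodes.
```

Note that pools of 2, 3, or 6 validators tolerate no more faults than the next-smaller 1+N*3 size (for example, 6 validators still tolerate only 1 fault, the same as 4), which is why such sizes waste resources.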
In some examples, all consortium members of the permissioned blockchain network share the shared pool of validator nodes, with the shared pool of validator nodes being managed by the host. In some examples, each consortium member of the permissioned blockchain network has access to its own transaction node, which is hosted by the blockchain host service. In some examples, while each consortium member of the permissioned blockchain network has access to its own transaction node, each consortium member has no access to any of the validator nodes, which are instead accessible only by the host service. In other examples, one or more consortium members may have limited access to one or more of the validator nodes in terms of being able to monitor the metrics and/or logs of one or more validator nodes, while still not being able to exercise any control or management over any of the validator nodes. In these examples, one or more consortium members can receive information about the validator nodes, but cannot effect any changes in any of the validator nodes. In some examples, the initial consortium members of the consortium blockchain network agree on certain parameters of the blockchain network. In some examples, the initial consortium members may determine various parameters, which other prospective consortium members must agree to as a condition to joining the network. In other examples, the parameters may be determined by agreement of all of the founding consortium members of the network. Once the network has been established, including the provisioning of the transaction node for each consortium member of the consortium network and the provisioning of each validator node in the shared validator node pool, normal operation of the blockchain network may proceed. During operation of the blockchain network, transactions may be submitted by participants.
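The provisioning model described above, one transaction node per consortium member and a single shared validator pool created once, can be sketched with a hypothetical class; all class, method, and node names here are illustrative assumptions:

```python
class HostedNetwork:
    """Hypothetical sketch of the hosted network's membership model."""

    def __init__(self, validator_pool_size: int = 4):
        # The shared validator pool is provisioned once, when the first
        # consortium member creates the network; it is managed only by the host.
        self.validator_pool = [f"validator-{i}" for i in range(validator_pool_size)]
        self.transaction_nodes = {}  # one transaction node per consortium member

    def add_member(self, member: str) -> None:
        # Each joining member gets its own transaction node; no new
        # validator nodes are provisioned for subsequent members.
        self.transaction_nodes[member] = f"txnode-{member}"
```

Adding members grows only the set of transaction nodes, never the validator pool, mirroring the text above.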
Participants are participants in that they may submit blockchain transactions to the blockchain network, and can see transactions that they are authorized to see, but participants are not necessarily members of the consortium. In some examples, participants that are not consortium members do not have the privileges that are reserved for members of the consortium, such as voting rights. In some examples, participants are approved to join by the consortium members. Some or all of the consortium members may also be participants, but need not be. During operation of the blockchain network, the transaction nodes may process blockchain transactions, which may be received from participant devices. In some examples, transactions are processed by the transaction nodes in the blockchain network. In some examples, processed transactions are validated by the validator nodes in the blockchain network based on the consensus protocol. The validation may include confirming and signing of transactions, where the consensus protocol is used to determine whether consensus has been achieved in the confirmation of a transaction. In some examples, after a transaction is validated based on the consensus protocol, the transaction is committed to the blockchain. During operation of the blockchain network, the transaction nodes may also respond to blockchain queries from participants.

Illustrative Processes

For clarity, the processes described herein are described in terms of operations performed in particular sequences by particular devices or components of a system. However, it is noted that other processes are not limited to the stated sequences, devices, or components. For example, certain acts may be performed in different sequences, in parallel, omitted, or may be supplemented by additional acts or features, whether or not such sequences, parallelisms, acts, or features are described herein.
Likewise, any of the technology described in this disclosure may be incorporated into the described processes or other processes, whether or not that technology is specifically described in conjunction with a process. The disclosed processes may also be performed on or by other devices, components, or systems, whether or not such devices, components, or systems are described herein. These processes may also be embodied in a variety of ways. For example, they may be embodied on an article of manufacture, e.g., as processor-readable instructions stored in a processor-readable storage medium or be performed as a computer-implemented process. As an alternate example, these processes may be encoded as processor-executable instructions and transmitted via a communications medium. FIG.5 is a diagram illustrating an example dataflow for a process (590) for a blockchain system. In some examples, the process of FIG.5 is performed by a distributed system controlled by a host service to provide cloud-hosted permissioned blockchain networks to clients. In the illustrated example, step 591 occurs first. At step 591, in some examples, a first transaction node of a hosted permissioned blockchain network is provisioned for a first consortium member of a plurality of consortium members of the hosted permissioned blockchain network. As shown, step 592 occurs next in some examples. At step 592, in some examples, a shared pool of validator nodes of the hosted permissioned blockchain network is provisioned. In some examples, the shared pool of validator nodes includes at least one validator node. In some examples, the shared pool of validator nodes is shared among the plurality of consortium members of the hosted permissioned blockchain network. In some examples, the validator nodes of the shared pool of validator nodes are configured for blockchain transaction validation based on a Byzantine fault tolerance (BFT) consensus protocol. As shown, step 593 occurs next in some examples.
At step 593, in some examples, a second transaction node of the hosted permissioned blockchain network is provisioned for a second consortium member of the plurality of consortium members of the hosted permissioned blockchain network. In some examples, each transaction node of the hosted permissioned blockchain network is separate from each validator node of the hosted permissioned blockchain network. The processing may then proceed to a return block, where other processing is resumed.

CONCLUSION

While the above Detailed Description describes certain examples of the technology, and describes the best mode contemplated, no matter how detailed the above appears in text, the technology can be practiced in many ways. Details may vary in implementation, while still being encompassed by the technology described herein. As noted above, particular terminology used when describing certain features or aspects of the technology should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific examples disclosed herein, unless the Detailed Description explicitly defines such terms. Accordingly, the actual scope of the technology encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the technology. | 34,252
11861428 | DETAILED DESCRIPTION In order that a person skilled in the art may better understand the technical solution in the present disclosure, a more complete description of the embodiments of the present disclosure will be rendered by reference to the appended drawings, which are provided for purposes of illustration and are not intended to be exhaustive of or to limit the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without involving any inventive effort shall fall within the scope of the present application. The core of the embodiments of the present disclosure is to provide a service orchestration method of a physical machine, which may improve the convenience of service orchestration for the physical machine and guarantee system stability while realizing service orchestration for the physical machine; another core of the present disclosure is to provide a service orchestration device and apparatus of the physical machine, and a computer-readable storage medium, which have the above beneficial effects. In order that those skilled in the art may better understand the solution of the present disclosure, the present disclosure is further described in detail hereinafter in combination with the drawings and implementation manners. FIG.1 is a flow chart of a service orchestration method of a physical machine, which is provided by an embodiment of the present disclosure; and FIG.2 is a schematic structural diagram of a system framework for realizing service orchestration based on OpenStack, which is provided by an embodiment of the present disclosure.
As shown in FIG.1 and FIG.2, the service orchestration method of the physical machine includes: S10: creating a physical machine resource inheriting all attributes of a cloud host, and modifying the physical machine resource according to features of a target physical machine to obtain a target physical machine resource; S20: configuring a bottom driver of Ironic according to the features of the target physical machine; and S30: invoking Ironic by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates. Firstly, it needs to be noted that OpenStack, an open source cloud computing management platform project, aims to provide open source software for the construction and management of public and private clouds. In FIG.2, Ironic provides a computing resource for a user in the field of cloud computing; it is a private (bare-metal) computing resource, and, unlike a virtualized computing resource established on a shared hardware resource, the computing of the user is performed by directly accessing the hardware resource. Heat, i.e. service orchestration, is a service for orchestrating composite cloud applications based on templates and is also a project of the OpenStack open source community. Cinder is an assembly for providing a block storage service in OpenStack and is mainly configured for providing a virtual disk for an instance of a virtual machine. Neutron is a core assembly for providing a network service in OpenStack, is configured for realizing software network resource management based on the idea of a software defined network (SDN), fully utilizes various network related technologies in a Linux system and supports third-party plug-ins. A physical machine resource OS:Nova:Host is created firstly.
The physical machine resource inherits all attributes of the cloud host, belongs to the computing nova module, and inherits all attributes of the cloud host from OS:Nova:Server, so that the physical machine resource may be configured for orchestrating a bare-metal service by heat based on OpenStack in the prior art, thereby achieving the purposes of deploying the target physical machine and managing the target physical machine while having no effect on orchestration of related functions of the cloud host. Additionally, as the added physical machine resource OS:Nova:Host needs to orchestrate the target physical machine resource, and the physical machine has some features different from the cloud host, such as the feature of not supporting a suspend/resume operation, the method for inheriting the features of the involved physical machine from OS:Nova:Server needs to be modified to obtain the target physical machine resource corresponding to the target physical machine. It needs to be noted that Nova is configured for managing the life cycle of the virtual machine, Ironic is configured for managing the life cycle of the physical machine, and Ironic is configured for providing an application programming interface (API) for managing the physical machine for nova; therefore, the bottom driver of Ironic needs to be configured according to the features of the target physical machine. For nova, the required invoking process of invoking the bottom driver for performing service orchestration for the target physical machine by Ironic is the same as that for the virtual machine, and the creation of instances is executed through the interfaces of nova, with only the bottom nova-scheduler driver and the bottom nova-compute driver being different, wherein the bottom driver of the virtual machine adopts a virtualization technology, and the physical machine adopts a preboot execution environment (PXE) technology and an intelligent platform management interface (IPMI) technology.
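The point that nova's invoking process is identical for virtual and physical machines, with only the bottom driver differing, is the classic driver-interface pattern. A minimal sketch follows; these class names and return strings are illustrative, not the actual nova driver API:

```python
from abc import ABC, abstractmethod

class ComputeDriver(ABC):
    """Common interface used regardless of the bottom driver."""

    @abstractmethod
    def spawn(self, instance_name: str) -> str:
        ...

class VirtDriver(ComputeDriver):
    def spawn(self, instance_name: str) -> str:
        # Virtual machines are created via a virtualization technology.
        return f"virtualization: booted {instance_name}"

class IronicStyleDriver(ComputeDriver):
    def spawn(self, instance_name: str) -> str:
        # Physical machines are deployed via PXE and IPMI rather
        # than a hypervisor.
        return f"pxe/ipmi: deployed {instance_name}"

def create_instance(driver: ComputeDriver, name: str) -> str:
    # The invoking process is the same; only the driver differs.
    return driver.spawn(name)
```

Because both drivers expose the same interface, the caller's code path does not change between cloud hosts and physical machines.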
When heat is configured to send an instruction of performing service orchestration for the target physical machine to nova, Ironic is invoked by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates; in other words, Ironic is invoked by nova, and Ironic is configured to simulate a virtualization driver of nova, so as to realize a virtualization driver based on Ironic. The service orchestration method of the physical machine, which is provided by the embodiment of the present disclosure, includes: creating the physical machine resource inheriting all attributes of the cloud host, and modifying the physical machine resource according to the features of the target physical machine to obtain the target physical machine resource; then configuring the bottom driver of Ironic according to the features of the target physical machine; and invoking Ironic by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates.
Ironic is a project of performing service orchestration for the physical machine in OpenStack, and Ironic is invoked by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates, thereby realizing service orchestration for the physical machine; in addition, compared with a method of setting the target physical machine resource in the prior art, the target physical machine resource in the method of the present disclosure is obtained by inheriting all attributes of the cloud host and modifying the physical machine resource according to the features of the target physical machine, thereby greatly reducing the design details of setting the target physical machine resource and simplifying the setting process; and as the attributes of the cloud host are inherited, orchestration of related functions of the cloud host is not affected, and system stability is better guaranteed. Based on the above embodiment, the technical solution is further described and optimized in the embodiment; in the embodiment, the step of creating the physical machine resource inheriting all attributes of the cloud host, and modifying the physical machine resource according to the features of the target physical machine to obtain the target physical machine resource, includes: creating the physical machine resource inheriting all attributes of the cloud host; and modifying the physical machine resource according to the features of the target physical machine, and rewriting a handle_create method, a handle_suspend and handle_resume method, and a check_suspend_complete and check_resume_complete method to obtain the target physical machine resource, wherein the handle_create method, the handle_suspend and handle_resume method, and the check_suspend_complete and check_resume_complete method are processing functions of a life cycle event.
In the embodiment, the physical machine resource is modified according to the features of the target physical machine after the physical machine resource inheriting all attributes of the cloud host is created; and as the target physical machine has some features different from the cloud host, such as the feature of not supporting a suspend/resume operation, the method for inheriting the features of the involved physical machine from OS:Nova:Server needs to be rewritten: 1. rewriting the handle_create method, wherein the config_drive parameter is set to true, so as to be independent of a metadata service; ConfigDrive is configured for injecting a file and injecting a script, so as to realize initialization operations such as changing a password, configuring a network, configuring the name of a physical host and the like; 2. rewriting the handle_suspend and handle_resume method, so that the return thereof is a constant character string, and no operation is performed; and 3. rewriting the check_suspend_complete and check_resume_complete method, so that the return thereof is always true, and the method has no effect on the target physical machine resource. It needs to be noted that in other implementation manners, other attribute information may also be added/deleted/modified according to the features of the target physical machine, which is not limited in the embodiment. It may be seen that the physical machine resource is modified according to the manner of the embodiment, so that the target physical machine resource configured for performing service orchestration for the target physical machine is obtained, the contents to be modified are few, and the operation is simple.
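The three rewrites above can be sketched as follows. To keep the sketch self-contained, a stand-in Server base class replaces the real OS:Nova:Server resource implementation, so the method signatures and return values here are assumptions for illustration only:

```python
class Server:
    """Stand-in for the OS:Nova:Server resource class; a real Heat
    plugin would inherit from Heat's actual Server resource."""

    def handle_create(self, config_drive=False):
        return {"config_drive": config_drive}

class Host(Server):
    """OS:Nova:Host: inherits the cloud-host attributes, overriding only
    behaviors a physical machine does not support."""

    def handle_create(self):
        # 1. Force config_drive=True so initialization (password, network,
        # hostname) does not depend on a metadata service.
        return super().handle_create(config_drive=True)

    def handle_suspend(self):
        # 2. Suspend is not supported on a physical machine: return a
        # constant string and perform no operation.
        return "SUSPEND_NOT_SUPPORTED"

    def handle_resume(self):
        return "RESUME_NOT_SUPPORTED"

    def check_suspend_complete(self, token):
        # 3. Always report completion so the no-op has no effect
        # on the target physical machine resource.
        return True

    def check_resume_complete(self, token):
        return True
```

Everything not overridden is inherited unchanged from the cloud-host resource, which is why orchestration of the cloud host's related functions is unaffected.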
Based on the above embodiment, the technical solution is further described and optimized in the embodiment; and before invoking Ironic by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates, the embodiment further includes: a step of performing, by Ironic, identity verification for an invoke instruction sent by nova, and invoking Ironic by nova after identity verification passes to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates. In the embodiment, when the invoke instruction is sent to Ironic by nova, identity verification is first performed for nova by Ironic according to the invoke instruction, before the step of invoking Ironic by nova to perform service orchestration for the target physical machine. In the embodiment, identity verification may include: verifying the validity and safety of nova and verifying whether nova is a preassigned nova; in the embodiment, the type of identity verification is not limited, and a corresponding verification mode is also not limited; for example, it may be verified whether the invoke instruction sent by nova includes preset verification information, a preset digital certificate, and the like. It may be understood that in the embodiment, identity verification is further performed for nova, so as to further guarantee the safety and reliability of performing service orchestration for the target physical machine. Based on the above embodiment, the technical solution is further described and optimized in the embodiment; and after invoking Ironic by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates, the embodiment further includes: setting label information for the target physical machine completing service orchestration.
In the embodiment, the label information is further set for the target physical machine after Ironic is invoked by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates, i.e. after the step of performing service orchestration for the target physical machine is completed. The label information may be text information, digital information, character information and so on, which are not limited in the embodiment; the purpose thereof is to set a label for the target physical machine to distinguish physical machines that have completed service orchestration from those that have not, so that the user subsequently acquires the service orchestration state of each physical machine more conveniently and more intuitively. Based on the above embodiment, the technical solution is further described and optimized in the embodiment; and after invoking Ironic by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates, the embodiment further includes: writing an operation record of performing service orchestration for the target physical machine into an operation log. In the embodiment, the operation time of the service orchestration operation, unique identification information of the target physical machine, and other information are further acquired, and then the information is written into the operation log after Ironic is invoked by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates, i.e. after the operation of performing service orchestration for the target physical machine is completed.
It needs to be noted that in the actual operation, the recording mode may be a text form, an Excel form, or a database table form, which are not limited in the embodiment and are selected according to actual requirements. In the embodiment, the operation record of performing service orchestration for the target physical machine is further written into the operation log, so that it is convenient for the user to check how service orchestration was performed for each target physical machine, further enhancing user experience. Based on the above embodiment, the technical solution is further described and optimized in the embodiment; and after invoking Ironic by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates, the embodiment further includes: displaying configuration information corresponding to service orchestration for the target physical machine. In the embodiment, the current configuration information corresponding to service orchestration for the target physical machine is displayed by a preset display device after Ironic is invoked by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates. The configuration information includes the type and the model of an operating system, the capacity, the type and the model of a hard disk, and other information, and the type of the configuration information is not limited in the embodiment. In the actual operation, a display form is not limited; and in the embodiment, the type of the display device is also not limited; for example, the display device may be a liquid crystal display (LCD) or a touch screen and so on.
It may be seen that in the embodiment, the configuration information corresponding to service orchestration for the target physical machine is further displayed, so that it is convenient for the user to inquire about and acquire the situation of service orchestration for the target physical machine. Based on the above embodiment, the technical solution is further described and optimized in the embodiment; and after invoking Ironic by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates, the embodiment further includes: sending corresponding prompt information. In the embodiment, a prompt device is further triggered to send the corresponding prompt information after Ironic is invoked by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates. It needs to be noted that the prompt device may be a buzzer and/or an indicating light and/or a display, and the corresponding prompt information, such as a humming sound, a flashing light, or displayed text or images, is sent by the prompt device to intuitively prompt the user that the operation of performing service orchestration for the target physical machine has been completed and that other operations may be performed for the target physical machine, so as to further enhance user experience. The embodiments of the service orchestration method of the physical machine, which is provided by the present disclosure, are described in detail in the above.
The present disclosure also provides the service orchestration device and apparatus of the physical machine corresponding to the method, and the computer-readable storage medium; and as the embodiments of the device, the apparatus and the computer-readable storage medium correspond to the embodiments of the method, the embodiments of the device, the apparatus and the computer-readable storage medium may refer to the descriptions of the embodiments of the method, which are not repeated here. FIG.3 is a structure diagram of the service orchestration device of the physical machine, which is provided by an embodiment of the present disclosure. As shown in FIG.3, the service orchestration device of the physical machine includes: a creating module 31 configured for creating the physical machine resource inheriting all attributes of the cloud host, and modifying the physical machine resource according to the features of a target physical machine to obtain the target physical machine resource; a configuration module 32 configured for configuring the bottom driver of Ironic according to the features of the target physical machine; and an execution module 33 configured for invoking Ironic by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates. The service orchestration device of the physical machine, which is provided by the embodiment of the present disclosure, has the beneficial effects of the above service orchestration method of the physical machine. In an embodiment of the present disclosure, the service orchestration device of the physical machine further includes: a verification module configured for performing, by Ironic, identity verification for an invoke instruction sent by nova, and invoking the execution module after identity verification passes.
In an embodiment of the present disclosure, the service orchestration device of the physical machine further includes: a label setting module configured for setting the label information for the target physical machine completing service orchestration. In an embodiment of the present disclosure, the service orchestration device of the physical machine further includes: a recording module configured for writing the operation record of performing service orchestration for the target physical machine into the operation log. In an embodiment of the present disclosure, the service orchestration device of the physical machine further includes: a display module configured for displaying the configuration information corresponding to service orchestration for the target physical machine. In an embodiment of the present disclosure, the service orchestration device of the physical machine further includes: a prompt module configured for sending the corresponding prompt information after Ironic is invoked by nova to perform service orchestration for the target physical machine by utilizing the target physical machine resource when Ironic operates. FIG.4 is a structure diagram of the service orchestration apparatus of the physical machine, which is provided by an embodiment of the present disclosure. As shown in FIG.4, the service orchestration apparatus of the physical machine includes: a memory 41 configured for storing computer programs; and a processor 42 configured for executing the computer programs to implement the steps of the above service orchestration method of the physical machine. The service orchestration apparatus of the physical machine, which is provided by the embodiment of the present disclosure, has the beneficial effects of the above service orchestration method of the physical machine.
In order to solve the above technical problems, the present disclosure also provides the computer-readable storage medium; the computer programs are stored on the computer-readable storage medium; and the computer programs are executed by the processor to implement the steps of the above service orchestration method of the physical machine. The computer-readable storage medium provided by the embodiment of the present disclosure has the beneficial effects of the above service orchestration method of the physical machine. The service orchestration method, device and apparatus of the physical machine, and the computer-readable storage medium, which are provided by the present disclosure, are introduced in detail in the above. In the text, the embodiments are applied to explain the principles and implementation manners of the present disclosure, and the above descriptions of the embodiments are only configured for helping understand the method of the present disclosure and the core thought thereof. It should be noted that those of ordinary skill in the art may also make several improvements and modifications to the present disclosure on the premise of not departing from the principles of the present disclosure, and the improvements and the modifications also fall into the protection scope of the claims of the present disclosure. Each embodiment in the specification is described in a progressive way. Each embodiment focuses on the differences from other embodiments. The same and similar parts between embodiments may be referred to each other. For the device disclosed in the embodiments, because it corresponds to the method of the disclosed embodiments, the description is relatively simple, and the relevant places may be seen in the method section.
A person skilled in the art may further realize that the units and algorithm steps of each example described in combination with the embodiments disclosed herein may be realized by electronic hardware, computer software, or a combination of the two. In order to clearly explain the interchangeability of hardware and software, the composition and steps of each example have been described generally according to their functions in the above description. Whether these functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such an implementation should not be considered beyond the scope of the present application.
11861429 | Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings. DETAILED DESCRIPTION In the present disclosure, use of the term "a," "an," or "the" is intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, the terms "includes," "including," "comprises," "comprising," "have," or "having," when used in this disclosure, specify the presence of the stated elements but do not preclude the presence or addition of other elements. In some applications, a resistive memory array including resistive memory cells (e.g., memristors) can be used to perform matrix operations. A matrix operation refers to an operation where a matrix is subject to a mathematical computation (e.g., a multiplication, a convolution, etc.) with input data. The input data can be in the form of a vector, for example. In examples where the resistive memory cells are memristors, the resistive memory array can be referred to as a memristive crossbar array. Although the ensuing discussion refers to resistive memory arrays that include memristors, it is noted that in other examples, different types of resistive memory cells can be employed. A resistive memory cell refers generally to a memory cell that represents data using a resistance of the memory cell. For example, a first resistance of the memory cell represents a first data state, a second resistance of the memory cell represents a second data state, and so forth. Examples of other resistive memories include phase-change memories, magnetic memories (MRAMs), and so forth.
A resistive memory array includes a number of row lines and a number of column lines intersecting the row lines to form a number of junctions or cross-points, and a number of memristors coupled between the row lines and the column lines at the junctions. A resistive memory array is pre-programmed with an array of values that represent values of a matrix that is to be multiplied with an input. The programming of the resistive memory array sets the resistance of each resistive memory cell in the resistive memory array, where the resistance (or conversely, conductance) of a resistive memory cell represents a respective value of the matrix. After the resistive memory array is pre-programmed with values of a matrix, input data (e.g., an input vector) can be applied to the resistive memory array to perform a computation with the matrix. Each element of the input vector can be converted into an analog input voltage and applied to each corresponding row line of the resistive memory array. The input voltage at each row line of the resistive memory array is weighted by the conductance of the resistive memory cells in each column line and accumulated as the current output from each column line. The foregoing operation involving the input vector and the matrix represented by the resistive memory cells of the resistive memory array is performed in the analog domain, since both the input data and the matrix data are in analog form. If wire resistances can be ignored, the electrical current values, I (in vector form), flowing out of the resistive memory array satisfy approximately I^T = V^T G, where V represents the input voltages (in vector form) and G is the conductance matrix, including contributions from each resistive memory cell in the resistive memory array. The superscript "T" indicates that the respective vector I or V is transposed.
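Under the stated ideal-wire assumption, the relation I^T = V^T G is an ordinary vector-matrix product, which can be sketched numerically as follows. The conductance and voltage values are illustrative, not taken from the disclosure.

```python
import numpy as np

# Ideal crossbar model: the current out of each column line is the sum of the
# row-line voltages weighted by the programmed cell conductances, I^T = V^T G.
G = np.array([[1.0e-6, 2.0e-6],   # conductance matrix (siemens); values illustrative
              [3.0e-6, 4.0e-6]])
V = np.array([0.2, 0.1])          # analog read voltages on the row lines (volts)

I = V @ G                         # column output currents: 5e-7 A and 8e-7 A
print(I)
```

Each entry of I is one column-line current; an ADC would then digitize these analog results as described below.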
In some computing applications, for example deep learning applications such as neural network applications, logical operations can be carried out in multiple processing layers. In some cases, the output of one processing layer can be used as an input in another processing layer. A logical operation can involve a matrix operation that can be performed in the analog domain using a resistive memory array. In some examples, a matrix operation may be performed as part of a convolution operation, where an input is convolved with a matrix operand (referred to as an n×n kernel). Examples of deep learning applications include big data analysis, image recognition, speech recognition, machine learning, and other computationally complex tasks. In example applications where there are a large number of matrix operations, implementing such matrix operations using resistive memory arrays can accelerate the matrix operations such that the matrix operations can be performed more quickly and efficiently than if performed using digital processors. The resistive memory arrays are accelerators separate from the digital processors that can be used for certain operations to reduce overall processing time in performing a set of operations. Once a resistive memory array is pre-programmed with matrix values, the resistive memory array retains these programmed values (exhibits non-volatility) for an extended duration and is generally not changed during a process. As a result, the pre-programmed resistive memory array applies the same matrix to input data each time the resistive memory array is used to perform a corresponding matrix operation. The reason that data stored in a resistive memory array is not changed during computer operations is that re-programming the resistive memory array can be slow due to long programming times for each memristor and many memristors in an n×n array. 
If a number of resistive memory arrays are used to implement a number of matrix operations, then it may be expected that the respective matrices stored in the corresponding memristive crossbar arrays do not change over time (at least during the course of a set of operations associated with a given application). However, this expectation that the programmed values of the resistive memory arrays remain static can pose a challenge in applications where dynamic data processes are performed. A dynamic data process involves conditional operations, where a first condition being true leads to performance of a first operation, but a second condition being true leads to performance of a different second operation. A dynamic data process can involve a large number of conditional operations. Multiple different resistive memory arrays can be implemented to perform the different conditional operations. For example, a first resistive memory array is used to perform a first operation in response to a first condition being true, and a different second resistive memory array is used to perform another operation in response to a second condition (e.g., the first condition not being true). However, deploying different resistive memory arrays for performing different conditional operations may be inefficient, particularly if a dynamic data process involves a large number of conditional operations that branch at multiple points of the dynamic data process. Increasing the number of resistive memory arrays to perform conditional operations takes up valuable space in the devices (e.g., integrated circuit dies) in which the resistive memory arrays are included.
In accordance with some implementations of the present disclosure, as shown in FIG. 1, a device 100 can include both a resistive processing core 102 including a resistive memory array 104 to perform an analog computation, and a digital processing core 106 including a digital memory 108 programmable with different values to perform different computations responsive to respective different conditions. As a result, conditional operations of a dynamic data process can be performed using the digital processing core, instead of using different resistive processing cores. The device 100 further includes a controller 110 that can selectively apply input data to the resistive processing core 102 and the digital processing core 106. The controller 110 can also program values in the digital memory 108 of the digital processing core 106, as well as perform pre-programming of the resistive memory array 104 of the resistive processing core 102. To perform a conditional operation, the digital memory 108 of the digital processing core 106 can be programmed with a corresponding set of values (e.g., values of a matrix) that depends on which condition is true. If a first condition is true, then the controller 110 can program a first set of values into the digital memory 108 to perform a first operation. If a second condition (different from the first condition) is true, then the controller 110 can program a second set of values into the digital memory 108 to perform a second operation different from the first operation. The device 100 can be an integrated circuit die (e.g., an integrated circuit chip, a stacked arrangement of dies, etc.) on which the resistive processing core 102, the digital processing core 106, and the controller 110 are formed. In other examples, the device 100 can be in the form of a circuit board, an electronic device, and so forth.
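The conditional-programming idea can be sketched in software as follows. The class, matrix values, and function names are hypothetical, chosen only to illustrate the control flow of reprogramming one fast digital memory per condition instead of dedicating a resistive array to each branch.

```python
import numpy as np

# Illustrative matrices for the two conditional branches (assumed values).
FIRST_MATRIX = np.array([[1.0, 0.0], [0.0, 1.0]])   # identity
SECOND_MATRIX = np.array([[0.0, 1.0], [1.0, 0.0]])  # swap

class DigitalCore:
    """Hypothetical model of the digital processing core with rewritable memory."""
    def __init__(self):
        self.memory = None            # fast digital memory (e.g., SRAM contents)

    def program(self, matrix):
        self.memory = matrix          # rewriting is fast, unlike a resistive array

    def compute(self, x):
        return x @ self.memory        # digital vector-matrix multiply

def conditional_operation(core, x, first_condition):
    # Program a different matrix into the same digital core depending on
    # which condition holds, then perform the computation.
    core.program(FIRST_MATRIX if first_condition else SECOND_MATRIX)
    return core.compute(x)

core = DigitalCore()
print(conditional_operation(core, np.array([1.0, 2.0]), True))   # identity branch
print(conditional_operation(core, np.array([1.0, 2.0]), False))  # swap branch
```

The same `DigitalCore` instance serves both branches; only its memory contents change, which mirrors the space saving argued for above.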
The controller 110 can be implemented as a microprocessor, a core of a multi-core processor, a microcontroller, a programmable integrated circuit device, a programmable gate array, or any other hardware processing circuit. In some examples, the controller 110 can be implemented using just a hardware processing circuit. In other examples, the controller 110 can be implemented using a combination of a hardware processing circuit and machine-readable instructions (software and/or firmware) executable on the hardware processing circuit. FIG. 2 shows an example of the resistive processing core 102 and the digital processing core 106, according to further examples. The resistive memory array 104 (also referred to as a resistive crossbar array) of the resistive processing core 102 includes a plurality of row lines 204, a plurality of column lines 206, and a plurality of resistive memory cells 208. A resistive memory cell 208 may be coupled between each unique combination of one row line 204 and one column line 206. In the example of FIG. 2, there are N rows and M columns, where N>2 and M>2. The row lines 204 can include electrical conductors that carry current through the resistive memory array 104. In some examples, the row lines 204 may be parallel to each other. Similarly, the column lines 206 can include electrical conductors that may be parallel to each other, and the column lines are non-parallel (e.g., perpendicular) to the row lines 204. A resistive memory cell 208 has a resistance that changes with an applied programming voltage or current pulse whose magnitude exceeds a programming threshold or whose duration exceeds a pulse duration threshold. Once programmed, the resistance of the resistive memory cell 208 is maintained for a specified time period long enough to regard the resistive memory cell 208 as non-volatile. In some examples, a resistive memory cell 208 is settable to multiple resistance states, which may facilitate various analog operations.
The multiple resistance states may allow the representation of various values in a matrix. In some examples, the resistive memory cells 208 can be implemented as memristors. A memristor includes a memristor switching layer sandwiched between metal layers (that form the electrodes of the memristor). The memristor switching layer can include a nitride-containing composition, an oxide-containing composition, both a nitride-containing composition and an oxide-containing composition, and so forth. Examples of oxide-containing compositions of the memristor switching layer include any or some combination of the following: tantalum oxide, hafnium oxide, titanium oxide, yttrium oxide, niobium oxide, zirconium oxide, zinc oxide, nickel oxide, iron oxide, cobalt oxide, tungsten oxide, aluminum oxide, calcium oxide, magnesium oxide, dysprosium oxide, lanthanum oxide, silicon dioxide, and so forth. Examples of nitride-containing compositions of the memristor switching layer include any or some combination of aluminum nitride, gallium nitride, tantalum nitride, silicon nitride, oxynitrides such as silicon oxynitride, and so forth. The memristor switching layer of a memristor changes resistance depending upon a potential difference that has been applied across the electrodes of the memristor or a current sourced through the device. Each memristor has a switching voltage or current, which refers to a voltage or current used to switch the state of the memristor. When the supplied voltage or current is greater than the memristor switching voltage or memristor switching current, the memristor switches state, i.e., the resistance of the memristor changes. If the flow of charge is stopped by turning off the applied voltage or applied current, the memristor will "remember" the last resistance that it had. The resistance of the memristor can be detected to read the state stored in the memristor.
In some examples, each resistive memory cell 208 can include other components (not shown), such as an access transistor. An access transistor can be controlled (such as by a row line 204) between an activated state and a deactivated state. If activated, the access transistor electrically connects the corresponding resistive memory cell 208 to the corresponding column line 206. The access transistors can be used to activate a selected individual memory cell 208 or a group of memory cells 208 to program or read. The memory cells 208 of the resistive memory array 104 can be programmed according to values of an input matrix. The resistance values stored in the memory cells 208 can represent the values of the input matrix. Selected memory cells 208 can be programmed, for example, by having programming signals driven through them by the row lines 204, which drives a change in the resistance range of the selected memory cells 208. The programming signals can define the values to be applied to the selected memory cells. To set a resistance of a memristor, a respective programming signal is set to a voltage that exceeds the corresponding switching voltage threshold. Once the resistive memory cells 208 of the resistive memory array 104 have been programmed with matrix values, the resistive memory array 104 can be used in a matrix operation. To perform the matrix operation, input voltages 210 are applied at the row lines 204. The input voltages 210 may have been converted from an input vector by a digital-to-analog converter (DAC) (not shown). A drive circuit may deliver the analog input voltages 210 to the resistive memory array 104. The input voltages 210 are read voltages that have lower magnitudes than the voltages used to program the resistive memory cells 208. The input voltages 210, representing vector values, interact with the resistive memory cells 208 at the corresponding junctions to produce resulting electrical currents output along the column lines 206.
The sum at each column line j (206) is represented by Σ_i v_i g_{i,j}, where v_i is the voltage applied along row line i (204), and g_{i,j} is the conductance of the resistive memory cell 208 at the junction of row line i and column line j. The sum at each column line j determines the resulting electrical current output by the column line j. The multiple column lines 206 output corresponding electrical currents representing respective sums. Current amplifiers 216 transform the respective electrical currents output by the column lines 206 into corresponding output voltages 214. In some examples, each current amplifier 216 is a transimpedance amplifier. A transimpedance amplifier is a current-to-voltage converter, implemented using an operational amplifier 218 and a resistor 220, for example. The output voltages 214 (in analog form) can represent analog multiplication results of the input voltages 210 and the matrix values stored in the resistive memory cells 208 of the resistive memory array 104. In some examples, the analog output voltages 214 can be converted by an analog-to-digital converter (ADC) (not shown) into a set of digital results representing a vector-matrix multiplication of the input vector with the input matrix. The digital results can be output by the resistive processing core 102 to another circuit. The digital processing core 106 shown in FIG. 2 includes the digital memory 108, which can be implemented as a static random access memory (SRAM), a dynamic random access memory (DRAM), or any other type of random access memory with a write access speed that is greater than the write access speed of the resistive memory array 104. The digital processing core 106 further includes digital logic 221, which is used to perform mathematical operations in the digital domain based on values stored in the digital memory 108. In the example shown in FIG. 2, the digital logic 221 includes digital multipliers 222 and digital adders 224.
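The per-column sum Σ_i v_i g_{i,j} and the transimpedance conversion can be sketched as follows. The feedback resistance value and the ideal op-amp relation V_out = -R_f · I are illustrative assumptions, not parameters from the disclosure.

```python
# One column of the crossbar, plus an ideal transimpedance stage.
R_F = 1.0e6   # assumed feedback resistance of the amplifier (ohms)

def column_current(v, g_column):
    # I_j = sum_i v_i * g_{i,j}: row voltages weighted by the column's conductances.
    return sum(vi * gij for vi, gij in zip(v, g_column))

def transimpedance(current, r_f=R_F):
    # Ideal transimpedance amplifier: V_out = -R_f * I.
    return -r_f * current

v = [0.2, 0.1]                 # row-line voltages (volts)
g_col = [1.0e-6, 3.0e-6]       # conductances in one column (siemens)
i_j = column_current(v, g_col)  # 5e-7 A
print(transimpedance(i_j))      # -0.5 V
```

Repeating `column_current` over every column reproduces the full vector-matrix product computed by the array in one analog step.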
An input register 226 stores input data values (which can represent values of an input vector) that are to be multiplied or otherwise combined with matrix values stored in the digital memory 108. The output of the input register 226 is provided to respective first inputs of the digital multipliers 222, and the output of the digital memory 108 is provided to respective second inputs of the digital multipliers 222. Each digital multiplier 222 multiplies the input vector (represented by the input values of the input register 226) with a corresponding portion of the matrix stored by the digital memory 108. The output of the digital multiplier 222 is provided to a digital adder 224, which adds a current value stored in a respective output register 226 to the output of the digital multiplier 222. The outputs of the output register 226 form the output values that represent the multiplication of the input vector with the matrix. To perform different matrix operations, the digital memory 108 can be written with different matrix values to be multiplied with an input vector. For example, depending upon whether a first condition or a second condition is true, the digital memory 108 can be programmed with a first matrix or a second matrix (different from the first matrix). Dynamically programming different matrices into the digital memory 108 in response to different conditions is feasible since the digital memory 108 can be written at a higher speed than the resistive memory array 104 of the resistive processing core 102. FIG. 3 illustrates an example set of operations 300 of a dynamic data process. In FIG. 3, each rectangle represents an operation performed by a corresponding resistive processing core 102. A circle in FIG. 3 represents an operation 304 performed by a digital processing core 106.
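The multiply-accumulate path described above (input register feeding digital multipliers, whose products a digital adder accumulates into output registers) behaves like the following sketch; the function is a software model of the datapath, not the hardware itself.

```python
# Software model of the digital multiply-accumulate datapath.
def digital_mac(input_register, memory_matrix):
    # One output register per matrix column, initialized to zero.
    output_register = [0.0] * len(memory_matrix[0])
    for i, x in enumerate(input_register):        # value from the input register
        for j, w in enumerate(memory_matrix[i]):  # matrix value from digital memory
            output_register[j] += x * w           # multiplier output fed to adder
    return output_register

# [1, 2] times [[1, 2], [3, 4]] accumulates to [7, 10].
print(digital_mac([1.0, 2.0], [[1.0, 2.0], [3.0, 4.0]]))
```

Rewriting `memory_matrix` between calls corresponds to reprogramming the digital memory 108 for a different conditional operation.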
The operation 304 performed by the digital processing core 106 in FIG. 3 is a conditional operation that depends upon the value of an output Y produced by an operation 302 (as implemented by a corresponding resistive processing core 102). In the example of FIG. 3, if Y is greater than T (an example of a "first condition"), then the operation 304 is a first matrix operation that involves a first matrix. Under this first condition, the first matrix is programmed into the digital memory 108 of the digital processing core 106. On the other hand, if Y is not greater than T (an example of a "second condition"), then the operation 304 is a second matrix operation that involves a second matrix that is different from the first matrix. Under this second condition, the second matrix is programmed into the digital memory 108 of the digital processing core 106. The output of the operation 304 is provided to the next operation 306, which can be implemented by a corresponding resistive processing core 102. FIG. 4 is a block diagram of an example system that includes multiple integrated circuit chips 402. The system of FIG. 4 can include a computer or multiple computers. Each integrated circuit chip 402 can have an arrangement of tiles (tile 0 to tile 15 shown in the example). The tiles in the integrated circuit chip 402 are connected over an on-chip interconnect network 404. Although a specific number of tiles are shown in each integrated circuit chip 402, it is noted that in different examples, different numbers of tiles can be included in the integrated circuit chip 402. A tile can refer generally to a collection of processing cores and related circuitry. Different integrated circuit chips 402 can have the same tile arrangement or can have different tile arrangements. An integrated circuit chip 402 is connected to another integrated circuit chip 402 over a chip-to-chip interconnect network 406. FIG. 4 also shows components of tile 15.
The other tiles of the integrated circuit chip 402 can have the same arrangement or can have different arrangements than tile 15. Tile 15 includes multiple processing cores, including resistive processing cores 408, 410, and 412, and a digital processing core 414. Each of the resistive processing cores 408, 410, and 412 is similar to the resistive processing core 102 of FIG. 2, while the digital processing core 414 is similar to the digital processing core 106 of FIG. 2. The processing cores 408, 410, 412, and 414 are connected over a tile interconnect 416, which is further connected to a tile data memory 418 to store input data (received by the tile) and output data computed by the processing cores 408, 410, 412, and 414 and stored into the tile data memory 418. Tile 15 further includes a controller 420, which is similar to the controller 110 of FIG. 1. Tile 15 also includes an instruction memory 422 that stores machine-readable instructions that are executable on the controller 420 to cause the controller 420 to perform respective tasks. For example, the machine-readable instructions of the instruction memory 422 can include instructions corresponding to the set of operations 300 shown in FIG. 3. The controller 420 can control the selective application of input data to the processing cores 408, 410, 412, and 414, and the selective activation of the processing cores 408, 410, 412, and 414. The controller 420 can also control the dynamic programming of values into the digital memory of the digital processing core 414, as well as the pre-programming of values into the resistive memory arrays of the resistive processing cores 408, 410, and 412. FIG. 5 is a flow diagram of a process performed by a controller, such as the controller 110 of FIG. 1, the controller 420 of FIG. 4, or a different controller. During performance of a dynamic data process, the controller performs the following tasks. The controller selects (at 502) a first processing core comprising a resistive memory array to perform an analog computation.
The analog computation uses values programmed in resistive memory cells of the resistive memory array. The values can be pre-programmed in the resistive memory cells of the resistive memory array prior to performing the dynamic data process. In some examples, the values programmed in the resistive memory cells of the resistive memory array remain static during the dynamic data process. In response to a first condition, the controller programs (at 504) first values into a digital memory of a digital processing core to perform a first computation. In response to a second condition, the controller programs (at 506) second values different from the first values into the digital memory of the digital processing core to perform a second computation different from the first computation. During the performance of the dynamic data process that includes a set of operations, the controller identifies operations of the set of operations to be performed by resistive processing cores including respective resistive memory arrays, and identifies a conditional operation of the set of operations to be performed by the digital processing core. FIG. 6 is a block diagram of a system 600 according to further examples. The system 600 includes a plurality of integrated circuit devices 602. A communication interconnect 604 interconnects the plurality of integrated circuit devices 602. An integrated circuit device of the plurality of integrated circuit devices includes a resistive processing core 606 comprising a resistive memory array 608 to perform an analog computation, a digital processing core 610 comprising a digital memory 612 programmable with different values to perform different computations responsive to respective different conditions, and a controller 614 to selectively apply input data to the resistive processing core 606 and the digital processing core 610. In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein.
However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations. | 24,736 |
11861430 | DETAILED DESCRIPTION OF THE EMBODIMENTS OF THE INVENTION In the following, exemplary embodiments of a product-marking system and of a product-identification system of the present teachings, as well as associated methods, are described with reference to the Figures. In addition, exemplary applications of luminescent markers (encodings) according to the present teachings are also described. FIG. 1 shows a perspective view of a representative, non-limiting product-marking system 100 for marking a product 10 according to a first embodiment of the present teachings. As shown in FIG. 1, the product-marking system 100 includes a product-identification device 12 configured to detect (identify, recognize, deduce) at least one physical property of the product to be marked. An encoding device 16 is configured to determine (select) an encoding (which contains one or more luminescent markers), e.g., from a plurality of encodings (each containing one or more luminescent markers), associated with (based on, corresponding to) the detected physical property. Preferably, each encoding specifies (is based upon) the (unique) presence or absence of a plurality of luminescent materials (i.e., materials that exhibit luminescence, preferably photoluminescence, such as fluorescence and/or phosphorescence), and at least one of the luminescent materials exhibits a predetermined decay behavior. A mark-applying unit 18 is configured to selectively apply one or more (preferably two, three or more) of the plurality of luminescent materials onto the product 10 to form the determined (selected) encoding. A control unit 24 is configured to control the mark-applying unit 18 such that it applies a mark 20, which corresponds to the determined (selected) encoding, onto the product 10. The individual components of the product-marking system 100 as well as the encoding are described in more detail below.
In the present embodiment, the product-marking system 100 is used to mark an industrially processed workpiece, such as a leather cutting, as is used, for example, in the automotive industry. That is, as used herein, the term "product" is intended to comprise not only end products of a manufacturing process, but also starting products and intermediate products, such as to-be-processed workpieces or the like. The exemplary leather cuttings can have different shapes and colors that must be reliably detected in an automated manner in the course of later processing. In order to make this reliable detection possible, the product-marking system 100 of the present embodiment applies a corresponding machine-readable mark (encoding) 20 onto the product 10. For example, in the case of leather cuttings, the mark is preferably applied onto the rear side of the product 10 (i.e., the side that is not exposed when in its final installed state). Furthermore, in some applications of the present teachings, it may be preferable to apply two or more separate (spaced-apart) marks to make possible the detection of multiple different properties of the product, such as size, shape, color, thickness, etc. In the present embodiment, a first mark (encoding) is preferably used to indicate the shape of the product 10 (e.g., the leather cutting), and a second mark (encoding) is used to indicate the color of the product 10. In the present embodiment, the properties of the product 10 may be detected in an automated manner. For example, the product-identification device 12 may be configured as an optical product-identification device that includes at least one camera 26 and/or a color detection sensor 28. Using the camera 26, the shape of the product 10 is determined using known image recognition methods. For monitoring purposes, an image 14 of the detected product 10 can be displayed on a display device 30. The shape of the product 10 is preferably detected in a first step of a processing process of the product 10.
After successful detection of the shape of the product 10, the product 10 is advanced in the course of the process to another workstation, where color detection is performed using the color detection sensor 28 (for example, using a known COAST-type color sensor sold by the present applicant). At this time as well, the shape and/or the color of the product 10 can be displayed on a further display 32 for monitoring purposes. Based on the detected shape and/or the detected color of the product 10, the encoding device 16, which can be configured, for example, as a conventional computer having a processor, storage, etc., can determine (identify, select) the encoding (luminescent marker(s)) to be associated with the shape and/or the color. In the present embodiment, for example, all possible shapes of the product 10 are respectively associated with unique encodings (which may be expressed, for example, using a binary code or the like) in a database, which optionally may be stored, e.g., in the storage of the computer or on a separate server. Similarly, all possible colors of the product 10 are respectively associated with unique encodings. It is understood that, in the case of a use of only a single mark that specifies multiple properties (e.g., shape and color), all possible combinations of shape and color can be respectively associated with unique encodings. In addition, in other embodiments of the present teachings, it is also possible that the encodings are not stored in a database in advance, but rather are generated in real time based on the detected property or properties of the product 10 using appropriate algorithms. In such embodiments, a corresponding algorithm for converting the detected encoding into the corresponding property (properties) of the product 10 must then be used to later identify the detected encoding and determine the properties specified thereby.
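The database lookup described above amounts to a table mapping each detected property to a unique binary code marking the presence or absence of each luminescent material. The shapes, colors, and codes below are invented purely for illustration.

```python
# Hypothetical encoding tables: each tuple is a presence/absence code over
# four luminescent materials (1 = material sprayed, 0 = omitted).
SHAPE_ENCODINGS = {"door_panel": (1, 0, 1, 0), "seat_side": (0, 1, 1, 0)}
COLOR_ENCODINGS = {"black": (1, 1, 0, 0), "beige": (0, 0, 1, 1)}

def determine_encodings(shape, color):
    # Look up the unique encoding associated with each detected property.
    return SHAPE_ENCODINGS[shape], COLOR_ENCODINGS[color]

print(determine_encodings("door_panel", "black"))
```

A later reader reverses the same table: detecting code (1, 0, 1, 0) in the first mark identifies the shape, and the second mark identifies the color.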
In a subsequent step of the process for marking the product 10, the control unit 24 controls the operation of the mark-application unit 18 such that at least one mark 20 that corresponds to the determined (selected) encoding is applied onto the product 10. In FIG. 1, although the control unit 24 is shown as a separate control unit, it is understood that the control unit 24 can be integrated with the encoding device 16, and it also includes a processor, storage, etc. The mark-application unit 18 of the present embodiment includes a spray nozzle 34 in fluid communication with a plurality of containers 36. The containers 36 respectively contain the plurality of luminescent materials and are respectively connected to the spray nozzle 34 via solenoid valves 38. The control unit 24 is configured to control the respective solenoid valves 38 such that the mark 20 having the determined (selected) encoding is sprayed onto the product 10. In particular, the product-marking system 100 is configured to apply a first mark 20 that indicates, for example, the shape of the product 10 at a first position, and to apply a second mark that indicates, for example, the color of the product 10 at a position different from that of the first mark 20. For this purpose, after the application of the first mark 20, the product 10 is preferably moved to a new position (e.g., from position 1 to position 2 in FIG. 1). Alternatively, it is understood that, for example, the spray nozzle 34 can be moved while the product 10 remains stationary. In the present embodiment, the luminescent materials in the containers 36 are preferably each dissolved or suspended in a solvent, such as water. The solutions and/or suspensions containing the luminescent materials are sprayed onto the products 10 via the spray nozzle 34. The different materials or substances (hereinafter also referred to as "markers" or "taggants") are preferably sprayed on sequentially.
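Valve control per encoding can be sketched as opening one solenoid valve for each set bit of the determined code. The callback-based valve interface is a hypothetical simplification of the control unit's actuation of the valves.

```python
# Hypothetical valve-control sketch: each bit of the encoding decides whether
# the corresponding luminescent material is sprayed (valve opened) or skipped.
def apply_mark(encoding_bits, open_valve):
    # open_valve(i) stands in for actuating solenoid valve i; sequential
    # spraying follows the bit order, matching the sequential application above.
    for i, bit in enumerate(encoding_bits):
        if bit:
            open_valve(i)

sprayed = []
apply_mark([1, 0, 1, 1], sprayed.append)
print(sprayed)  # valves 0, 2 and 3 were opened
```

The same routine serves both marks; only the encoding passed in differs between the shape mark and the color mark.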
If the markers, which are typically formed as microparticles, are dispersed in a suspension, the suspension liquid may be flushed by circulation in order to maintain the markers uniformly distributed (dispersed) in the suspension so that they do not settle (sediment) out. Such systems are known to persons skilled in the art, so a description thereof is omitted here. To check whether the mark(s) has (have) been correctly applied, a test device 42 can be provided that is configured to check whether the determined (selected) encoding is present or not. An example of such a test device 42 is shown, for example, in FIG. 4 and can be constructed similar to a corresponding device of a below-described product-identification system 200 for identifying the product 10. After completion of the mark-application process described above, a surface of the product 10 is printed with one or more marks 20 that indicate(s) one or more properties of the product 10 (for example, shape and color) and can be read (detected, recognized) in an automated manner (e.g., by a mark reader, such as the product-identification system 200 described below) at a later time in the manner described below.

FIG. 2 shows various luminescent markers that have differing predetermined decay behavior (e.g., different decay times) and can be used in connection with the product-marking system 100 of the present teachings. In the context of the present teachings, the term "luminescence" is intended to encompass both the phenomenon of phosphorescence, shown in FIG. 2, that decays with a certain time constant, and a fluorescence without such a phosphorescence, which is described in the following with reference to FIG. 3. As shown in FIG. 2, some known luminescent markers can be excited in the infrared (IR) range and also luminesce in the IR range, but at a longer wavelength than the excitation wavelength (cf. the left illustration in the first row of FIG. 2).
As shown in the right illustration in the first row of FIG. 2, the excitation occurs, for example, using an excitation pulse of, for example, 300 μs, and results in a luminescence having a time constant τ of a decay behavior (generally the time until the drop to 1/e of the initial intensity) of, for example, 150 μs. The present example uses a so-called "DOWN converter" or "down shifter", in which the emitted wavelength is longer than the excitation wavelength. Row 2 of FIG. 2 shows a marker that is excited in the blue wavelength range and also luminesces in the IR range. Such a marker is also known as a "DOWN converter" or "down shifter" that emits light in the IR range after excitation using, for example, a blue LED, again with a predetermined decay behavior or a corresponding time constant τ thereof. As shown in row 3 of FIG. 2, the known marker is excited in the red wavelength range and luminesces in the IR range. In this case as well, a predetermined time constant τ can be attributed to the decay behavior. Finally, the last row in FIG. 2 shows that markers that are excited in the UV range also can be used and, depending on the marker used, phosphoresce in the visible range (VIS). Depending on the marker used, emissions occur in different wavelength ranges a), b) and c) with respectively different decay behaviors.

FIG. 3 shows that one or more markers that do not phosphoresce but rather fluoresce also can be used in some aspects of the present teachings. Fluorescence means that, upon being excited, for example, in the UV range, fluorescent materials immediately emit light in the visible range (VIS), and the fluorescence ends immediately (i.e., within a few nanoseconds) after the termination of the excitation. For such materials, the color (wavelength) of the emitted light again depends on the type of the marker used (see the peaks a), b) and c) in the left part of FIG. 3).
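The decay behavior described above is commonly modeled as a single exponential, so the time constant τ is exactly the time at which the intensity has dropped to 1/e of its initial value, matching the definition in the text. The single-exponential model and the numbers below are illustrative assumptions, not taken from the disclosure.

```python
import math

# Phosphorescence decay modeled as I(t) = I0 * exp(-t / tau), so tau is the
# time at which the intensity has fallen to 1/e of its initial value.

def intensity(t_us: float, i0: float, tau_us: float) -> float:
    """Intensity at time t_us (microseconds) after the excitation pulse ends."""
    return i0 * math.exp(-t_us / tau_us)
```

With τ = 150 μs, as in the example above, the intensity at t = 150 μs is exactly I0/e.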
As described below, the color of the emitted light can be determined (detected, recognized) by a suitable color detector (for example, a fluorescence color sensor sold by the present applicant), and the presence of the markers a), b) and c) can be inferred based on the output of the color detector. In some embodiments of the present teachings, at least one marker, or preferably a plurality of markers, having a luminescent decay behavior is (are) used to form the encoding(s) (luminescent marker(s)). There is no upper limit on the total number of the possible markers that may be utilized to form the encoding(s), but the upper limit is based, in practice, on the availability of markers having appropriate luminescences.

FIG. 5 shows a table of various, currently known markers that can be used in connection with the present teachings. Here, commercially available markers can be used, such as, for example, activated zinc sulfides, modified yttrium oxysulfides (which are optionally doped with europium), garnet, rare-earth gallates/germanates, modified rare-earth oxysulfides, modified gadolinium oxysulfides, chromium-containing gallates, etc. As shown in FIG. 5, there are, for example, three markers that are excitable in the UV range and each luminesce in the visible range with different decay behaviors. By additionally using, for example, two combinations, each having two of the markers, as well as the case that none of the markers is present, three further values can be encoded that indicate the presence or absence of the respective luminescent markers. Thus, by using luminescent markers that are excitable in the UV range while luminescing in the visible range, six values (e.g., 1 to 6) can be encoded. These could, for example, constitute the three first bits or three last bits of a binary encoding.
Furthermore, as shown in FIG. 5, three markers are also known that do not phosphoresce but rather fluoresce in the visible range when excited in the UV range (second column in FIG. 5). Two combinations, for example, of the different markers can also be used here, in order to obtain a total of five possible colors of fluorescence. In combination with an encoding that represents (specifies) the absence of all three of the markers, six different values can again be encoded. In addition, two known markers luminesce in the IR range when excited in the blue wavelength range (column 3 in FIG. 5), so that four further values can be encoded similar to the above-described manner. Column 4 in FIG. 5 shows that a marker can also be used that can be excited in the red wavelength range and luminesces in the infrared range, which also results in two values. Finally, the last column in FIG. 5 shows that, in an analogous manner, three different markers can be used that are excitable in the infrared range and also luminesce in the infrared range, which again leads to six possible values for a binary encoding. From FIG. 5 it can also be seen that, in total, 6×6×4×2×6=1728 encodings can be obtained by using the different markers shown in FIG. 5. The number and type of markers that are used can differ in accordance with the application of the present teachings. For example, the plurality of materials can include at least two markers, each having different decay behavior (e.g., decay times), and the presence or absence of such markers forms part of the encoding. In particular, the at least two markers can have at least two different excitation wavelength ranges that are selected from the group consisting of UV, blue, red, and IR. For example, it is possible to use the marker that is excitable in the red wavelength range and one of the markers that is excitable in the UV wavelength range in one embodiment of the present teachings.
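The total of 1728 encodings stated above follows directly from multiplying the independent excitation/emission channels of FIG. 5. A quick check (channel labels are paraphrased from the table):

```python
# Reproducing the count from FIG. 5: each excitation/emission channel
# contributes a fixed number of distinguishable states (including the
# "no marker present" state), and the independent channels multiply.
from math import prod

channel_states = {
    "UV -> VIS (phosphorescent)": 6,
    "UV -> VIS (fluorescent)": 6,
    "blue -> IR": 4,
    "red -> IR": 2,
    "IR -> IR": 6,
}

total_encodings = prod(channel_states.values())  # 6 * 6 * 4 * 2 * 6
```

Adding a marker (or a distinguishable combination) to any one channel multiplies the total accordingly, which is why the practical limit is only the availability of suitable markers.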
Alternatively or additionally, at least two markers can be used that are excited in the same wavelength range and differ with respect to the time constant of the decay behavior and/or the emitted wavelengths. These can be, for example, two of the markers that are excitable in the UV wavelength range. In the above-mentioned case, a decay behavior that is detected when the two markers are present is different from the decay behavior of the individual markers, and the encoding specifies whether both markers are present or not. This corresponds, for example, to the combination 4 in the first column of FIG. 5. In addition, the plurality of materials can include at least one non-phosphorescing marker whose emission wavelength falls, for example, in the visible range (at least one of the markers shown in column 2 of FIG. 5). Preferably, at least two non-phosphorescing markers having the same excitation wavelength range, for example in the UV range, and having different emission wavelengths, for example in the visible range, are used, based on which it can be determined whether none, one, or both of the non-phosphorescing markers is (are) present. Thus, for example, by using the markers in column 2 of FIG. 5, which fluoresce in the green wavelength range and in the red wavelength range, it can be determined, based on the color of the fluorescence (green, red, or yellow), whether one of the two markers, none of the two markers, or both markers is (are) present in the detected encoding.

FIG. 4 shows a partial perspective view of a representative product-identification system 200 for identifying the product 10 according to the present teachings. The product-identification system 200 includes a plurality of light-emitting units (for example, LEDs) 212, 214, 216, 218 that are respectively configured to emit light in different wavelength ranges (for example, UV, blue, red, and IR) toward a surface of the product 10 to be identified.
Furthermore, a control unit 224 is configured to control the ON/OFF states of the plurality of light-emitting units such that they respectively emit light at appropriate timings. A detection unit 220 is configured to detect respective intensities of light emitted from the surface of the product 10 in response to excitation light emitted from the respective light-emitting units 212, 214, 216, 218. Furthermore, an evaluation unit 222 is configured to determine (identify, recognize), based on the intensities detected by the detection unit 220 (as well as the temporal course thereof), an encoding (in particular, based on one or more time constants of the respective decay behaviors) that is associated with a property of the product 10. The encoding is structured as described above and specifies the presence or absence of each of a plurality of luminescent materials (markers), wherein at least one of the luminescent materials (markers) has a predetermined decay behavior that is utilized to make the determination (identification, recognition) of the encoding. As was already mentioned above, in the context of the present specification, the term "luminescence" is intended to mean that either a secondary emission having a predetermined decay behavior is present, or that a fluorescence without phosphorescence is present. As shown in FIG. 4, the light-emitting units, the control unit, the detection unit, and the evaluation unit can be integrated (housed, accommodated), for example, in a corresponding housing. However, it is understood that in other embodiments, for example, only the light-emitting units can be integrated in an optionally portable device, while one or more of the control unit 224, the detection unit 220, and the evaluation unit 222 can be provided separately.
An exemplary method for detecting the encoding can be effected such that the UV LEDs of the device 42 first emit a light pulse having a predetermined duration, and the time constant of the decay behavior of the luminescence is subsequently determined (detected). Depending on whether one of the markers shown in column 1 in FIG. 5 is present, none is present, or a combination of the markers in column 1 in FIG. 5 is present, different decay behaviors are detected. Accordingly, the first six values of the encoding can be determined. Subsequent thereto (or simultaneously when suitable detectors are used), the UV LEDs can emit a (further) light pulse, based on which the color of a fluorescence owing to the presence of one or more of the markers in column 2 in FIG. 5 can be detected. Based on the detected emission wavelengths, the second six values of the encoding can be determined accordingly. In a similar manner, the blue LEDs, the red LEDs, and the IR LEDs can emit corresponding light pulses, and corresponding decay behaviors or time constants are detected in order to determine the remaining values of the encoding. Based on the determined values or a bit sequence derived therefrom, an unambiguous association of the encoding with the property of the product 10 can then take place. As also mentioned, it is possible to establish this association in advance and store the associations in a database for use by the product-identification system 200. Using the above-described product-marking system and/or the above-described luminescent markers, a method for marking a product 10 can be carried out that includes the following steps: detecting at least one property of the to-be-marked product 10, determining (selecting) an encoding to be associated with the property, and applying a plurality of luminescent materials onto the product 10 so that a mark 20 that corresponds to the determined encoding is applied.
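The time-constant determination mentioned above could, for example, be done numerically from intensity samples taken after the excitation pulse ends. The following is a hedged sketch, not from the disclosure: it assumes a single-exponential decay I(t) = I0·exp(-t/τ), so that a least-squares line fit to (t, ln I) has slope -1/τ; the function name `estimate_tau` is hypothetical.

```python
import math

# Hedged sketch: estimate the decay time constant tau from intensity samples
# taken after the excitation pulse ends. Since I(t) = I0 * exp(-t / tau)
# implies ln I(t) = ln I0 - t / tau, a least-squares line fit to (t, ln I)
# yields a slope of -1/tau.

def estimate_tau(times_us: list[float], intensities: list[float]) -> float:
    """Estimate tau (in the same units as times_us) from sampled intensities."""
    logs = [math.log(i) for i in intensities]
    n = len(times_us)
    t_mean = sum(times_us) / n
    y_mean = sum(logs) / n
    slope = (
        sum((t - t_mean) * (y - y_mean) for t, y in zip(times_us, logs))
        / sum((t - t_mean) ** 2 for t in times_us)
    )
    return -1.0 / slope  # slope = -1/tau
```

The estimated τ would then be matched against the known time constants of the individual markers (and of their combinations) to decide which markers are present.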
Even though the above embodiment has been described for the case that the product 10 is a workpiece in the context of a manufacturing process, for example, a leather cutting, it is understood that the present teachings also can be applied to various other applications. Thus, other products (e.g., articles of manufacture) can also be provided with the luminescent marks according to the present teachings, such as, for example, tires, e.g., automobile tires, wherein in this case the property can be one or more of, for example, a tire profile depth, the manufacturer, the size, etc. This can be advantageous, for example, in the context of warehousing of the product. That is, owing to the fact that the mark is not visible with the naked eye, security against manipulations and the like can be ensured. In a further application of the luminescent marks (markers) according to the present teachings, a plastic material, such as, for example, for a packaging, in particular for beverages, may have the luminescent marks (markers), e.g., embedded therein. In such an embodiment of the present teachings, the individual luminescent materials or particles, e.g., taggants, can be introduced into the plastic material (masterbatch) during manufacturing, so that when the plastic material is later used, for example, as a plastic layer of the packaging, a corresponding mark can be provided in an automated manner. The product-identification system 200 described above could also be used for detecting the luminescent markers. For example, the control unit 24 could be configured to receive or retrieve (detect) information with respect to the product to be marked, for example, a plastic tab or the like for the pharmaceutical industry. Then, based on this information, the encoding device 16 can be configured to determine the encoding to be used in the plastic material.
In this case, the mark-application unit 18 can include a plurality of metering devices that are driven by the control device 24 in accordance with the encoding such that a respective masterbatch that contains a marker of the encoding is added to the plastic product stream that is subsequently supplied, for example, to an extrusion system. In this way, the intended encoding reaches the plastic matrix of the product. It is understood that masterbatches that contain more than one marker can also be used. For example, a masterbatch for each encoding that already includes all of the associated markers may be used. Furthermore, compositions constituted as a prefabricated mixture, which is made of plastic granules (without admixtures) and a masterbatch (with color pigments and/or one or more markers or taggants), can also be used and can be added in accordance with the determined encoding. Based thereon, for example, in the context of recycling, different plastic bottles can be reliably and dependably differentiated and separated (e.g., sorted) accordingly. For example, different plastic bottles made of the same base material can be separated depending on their use (for example, in the foodstuffs sector or in the sanitation sector) during the recycling process, even if they are essentially comprised of the same material. The plastic bottles can also be separated or sorted based on the encoding described herein by detecting at least one predetermined decay behavior or preferably a plurality of different decay behaviors. It is explicitly emphasized that all of the features disclosed in the description and/or the claims should be considered as separate and independent from one another for the purpose of the original disclosure as well as for the purpose of limiting the claimed invention, independent of the combinations of features in the embodiments and/or the claims.
It is explicitly stated that all range specifications or specifications of groups of units disclose every possible intermediate value or subgroup of units for the purpose of the original disclosure as well as for the purpose of limiting the claimed invention, in particular also as the limit of a range specification.

Additional embodiments of the present teachings include but are not limited to: 1. A product-marking system (100) for marking a product (10), including a product-identification device (12) configured to detect at least one property of the product (10) to be marked, an encoding device (16) configured to determine an encoding to be associated with the property, wherein the encoding specifies the presence or absence of each of a plurality of materials that exhibit a luminescence, and of which at least one exhibits a predetermined decay behavior, a mark-application unit (18) configured to apply the plurality of materials onto the product (10), and a control unit (24) configured to control the mark-application unit (18) such that it applies a mark (20), which corresponds to the determined encoding, onto the product (10). 2. The product-marking system according to the above embodiment 1, wherein the plurality of materials includes at least two markers respectively having different decay behaviors, the presence or absence of which markers forms part of the encoding. 3. The product-marking system according to the above embodiment 2, wherein the at least two markers have at least two different excitation wavelength ranges that are selected from the group consisting of UV, blue, red, and IR. 4. The product-marking system according to one of the above embodiments 2 or 3, wherein the at least two markers are excited in the same wavelength range, and differ with respect to the time constant of their decay behavior and/or the emitted wavelength. 5.
The product-marking system according to the above embodiment 4, wherein when the two markers are present, a decay behavior that is detected is different from the decay behavior of the individual markers, and the encoding specifies whether or not both markers are present. 6. The product-marking system according to one of the above embodiments 1 to 5, wherein the plurality of materials includes at least one non-phosphorescing marker whose emission wavelength falls in the visible range. 7. The product-marking system according to the above embodiment 6, wherein at least two non-phosphorescing markers having the same excitation wavelength range, for example in the UV range, and having different emission wavelengths, for example in the visible range, are used, based on which it can be determined whether none, one, or both of the non-phosphorescing markers is or are present. 8. The product-marking system according to one of the above embodiments 1 to 7, wherein the product detection device (12) is an optical product detection device that includes at least one camera (26) or at least one color detection sensor (28). 9. The product-marking system according to the above embodiment 8, wherein the property is a shape of the product (10) determined by the camera (26), or a color of the product determined by the color detection sensor (28). 10. The product-marking system according to one of the above embodiments 1 to 9, which is configured to apply a further mark, which indicates a further property of the product (10), onto the product at a position different than the mark (20). 11. 
The product-marking system according to one of the above embodiments 1 to 10, wherein the mark-application unit (18) includes a spray nozzle (34) and a plurality of containers (36), in which the plurality of materials is respectively contained, and which are each connected to the spray nozzle via a solenoid valve (38), wherein the control unit (24) is configured to control the respective solenoid valves (38) such that the mark (20) having the determined encoding is sprayed onto the product (10). 12. The product-marking system according to the above embodiment 11, wherein the materials are dissolved in a solvent, for example, water, contained in the containers (36), and are sprayed on using the spray nozzle (34). 13. The product-marking system according to one of the above embodiments 1 to 12, further including a test device (42) that is adapted to check whether the applied mark (20) includes the determined encoding. 14. The product-marking system according to one of the above embodiments 1 to 13, wherein the product (10) is a leather cutting, on the rear side of which the mark (20) is applied, or a tire, in particular an automobile tire, that is provided with the mark (20). 15.
A product-identification system (200) for identifying a product (10), including a plurality of light-emitting units (212, 214, 216, 218) configured to respectively emit light in different wavelength ranges toward a surface of the product (10) to be identified, a control unit (224) configured to control the plurality of light-emitting units (212, 214, 216, 218) such that they emit light, a detection unit (220) configured to detect respective intensities of light emitted from the surface of the product (10) in response to light emitted from the respective light-emitting units (212, 214, 216, 218), and an evaluation unit (222) configured to determine, based on the intensities detected by the detection unit (220), an encoding that is to be associated with a property of the product (10), wherein the encoding specifies the presence or absence of each of a plurality of materials that exhibit a luminescence, and of which at least one exhibits a predetermined decay behavior. 16. The product-identification system according to the above embodiment 15, wherein the plurality of materials includes at least two markers respectively having different decay behaviors, wherein the evaluation unit (222) is configured to determine their presence or absence as a part of the encoding. Optionally, the plurality of materials further includes at least one non-phosphorescing (e.g., fluorescent) material having an emission wavelength in the visible range. 17. The product-identification system according to the above embodiment 16, wherein the at least two markers have at least two different excitation wavelength ranges that are selected from the group consisting of UV, blue, red, and IR, and the control unit (224) is adapted to control the plurality of light-emitting units (212, 214, 216, 218) such that they emit light in the at least two different excitation wavelength ranges. 18.
A method for marking a product (10), including detecting at least one property of the product (10) to be marked, determining an encoding to be associated with the property, wherein the encoding specifies the presence or absence of each of a plurality of materials that exhibit a luminescence, and of which at least one exhibits a predetermined decay behavior, and applying the plurality of materials to the product (10) such that a mark (20) that corresponds to the determined encoding is applied. 19. A plastic material for use in a product (10), including a plurality of materials that exhibit a luminescence, and of which at least one exhibits a predetermined decay behavior, wherein the presence or absence of each of the plurality of materials determines an encoding that is associable with a property of the product (10). 20. The plastic material according to the above embodiment 19, wherein the plurality of materials includes at least two markers respectively exhibiting different decay behaviors, the presence or absence of which markers forms part of the encoding. 21. The plastic material according to the above embodiment 20, wherein the at least two markers have at least two different excitation wavelength ranges that are selected from the group comprised of UV, blue, red, and IR. 22. The plastic material according to one of the above embodiments 20 or 21, wherein the at least two markers are excited in the same wavelength range and differ with respect to the time constant of their decay behavior and/or the emitted wavelength. 23. The plastic material according to one of the above embodiments 19 to 22, wherein the plurality of materials includes at least one non-phosphorescing marker whose emission wavelength falls in the visible range. 24.
The plastic material according to the above embodiment 23, wherein at least two non-phosphorescing markers having the same excitation wavelength range, for example in the UV range, and having different emission wavelengths, for example in the visible range, are used, based on which it can be determined whether none, one, or both of the non-phosphorescing markers is or are present. 25. A packaging, in particular for beverages, including a plastic material according to one of the above embodiments 19 to 24. 26. A plastic material for use in marking a product, including: a synthetic polymer, and a plurality of luminescent materials contained in the synthetic polymer, wherein: at least one of the plurality of luminescent materials exhibits a predetermined decay behavior, at least one of the plurality of luminescent materials is a non-phosphorescing marker having an emission wavelength that falls in the visible range, and the presence or absence of each of the plurality of luminescent materials determines an encoding that is associable with a property of the product. 27. The plastic material according to the above embodiment 26, wherein the plurality of luminescent materials includes at least two markers respectively exhibiting different decay behaviors, the presence or absence of which markers forms part of the encoding. 28. The plastic material according to the above embodiment 27, wherein the at least two markers respectively have at least two different excitation wavelength ranges that are selected from the group comprised of UV, blue, red, and IR. 29. The plastic material according to the above embodiment 27, wherein the at least two markers are excited in the same wavelength range and exhibit one of different decay times or different emitted wavelengths. 30.
The plastic material according to one of the above embodiments 26-29, wherein the plurality of luminescent materials includes at least two non-phosphorescing markers having the same excitation wavelength range and having different emission wavelengths. 31. A packaging, such as a beverage bottle, comprising a plastic material according to one of the above embodiments 26-30. 32. A product-identification system for identifying a product, including: a plurality of light-emitting units configured to respectively emit light in different wavelength ranges toward a surface of the product, a control unit configured to selectively energize the plurality of light-emitting units to emit light, a detection unit configured to detect respective intensities of light emitted from the surface of the product in response to the light emitted from the respective light-emitting units, and an evaluation unit that is configured to determine, based on the intensities detected by the detection unit, an encoding that is associated with a property of the product, wherein the encoding specifies the presence or absence of each of a plurality of luminescent materials, and at least one of the plurality of luminescent materials exhibits a predetermined decay behavior. 33. The product-identification system according to the above embodiment 32, wherein: the plurality of luminescent materials includes at least two markers respectively having different decay behaviors, and the evaluation unit is configured to determine the presence or absence of the plurality of luminescent materials as a part of the encoding. 34. The product-identification system according to one of the above embodiments 32 or 33, wherein the evaluation unit is configured to determine the presence or absence of the plurality of luminescent materials based on a decay behavior of the plurality of luminescent materials. 35.
The product-identification system according to one of the above embodiments 33-34, wherein: the at least two markers have at least two different excitation wavelength ranges selected from the group consisting of UV, blue, red, and IR, and the control unit is configured to energize the plurality of light-emitting units to emit light in the at least two different excitation wavelength ranges.

Depending on design requirements, exemplary embodiments of the encoding device 16, the control unit 24, and/or the evaluation unit 222 of the present disclosure may be implemented in hardware and/or in software. The encoding device 16, the control unit 24, and/or the evaluation unit 222 can be configured using a digital storage medium, for example one or more of a ROM, a PROM, an EPROM, an EEPROM, a flash memory, etc., on which electronically readable control signals (program code, i.e., instructions) are stored, which interact or can interact with one or more programmable hardware components to execute programmed functions. The (each) programmable hardware component can be formed by a processor, a computer processor (CPU = central processing unit), an application-specific integrated circuit (ASIC), an integrated circuit (IC), a computer, a system-on-a-chip (SOC), a programmable logic element, and/or a field programmable gate array (FPGA). A microprocessor is a typical component of a microcontroller according to the present teachings. The digital storage medium can therefore be machine- or computer-readable. Some exemplary embodiments thus comprise a data carrier or non-transient computer-readable medium which includes electronically readable control signals which are capable of interacting with a programmable computer system or a programmable hardware component such that one of the methods or functions described herein is performed.
An exemplary embodiment is thus a data carrier (or a digital storage medium or a non-transient computer-readable medium) on which the program for performing one of the methods described herein is recorded. In general, exemplary embodiments of the present disclosure, in particular the encoding device 16, the control unit 24, and/or the evaluation unit 222, are implemented as a program, firmware, computer program, or computer program product including a program, or as data, wherein the program code or the data is operative to perform one of the methods when the program runs on (is executed by) a processor or a programmable hardware component. The program code or the data can, for example, also be stored on a machine-readable carrier or data carrier, such as any of the types of digital storage media described above. The program code or the data can be, among other things, source code, machine code, bytecode, or another intermediate code. A program according to an exemplary embodiment can implement one of the methods or functions during its performance, for example, such that the program reads storage locations and/or writes one or more data elements into these storage locations, wherein switching operations or other operations are induced in transistor structures, in amplifier structures, or in other electrical, electronic, optical, magnetic components, or components based on another functional or physical principle. Correspondingly, data, values, sensor values, or other program information can be captured, determined, or measured by reading a storage location.
By reading one or more storage locations, a program can therefore capture, determine, or measure quantities, values, variables, and other information, as well as cause, induce, or perform an action by writing to one or more storage locations, as well as control other apparatuses, machines, and components, and thus for example also perform any complex process that the control unit24and/or the evaluation unit222according to the present teachings may be designed to perform. Although some aspects of the present teachings have been described in the context of a device or apparatus, it is to be understood that these aspects also represent a description of a corresponding method, so that a block or a component of a device or apparatus is also understood as a corresponding method step or as a feature of a method step. In an analogous manner, aspects which have been described in the context of or as a method step also represent a description of a corresponding block or detail or feature of a corresponding device. | 41,003
11861431 | MODES OF THE INVENTION Hereinafter, a device, system, method and computer program for printing a quick response (QR) code according to the present disclosure will be described in detail with reference to the accompanying drawings. Embodiments described herein are provided to help those of ordinary skill in the art easily understand the technical idea of the present disclosure and thus the present disclosure is not limited thereby. The items illustrated in the accompanying drawings are schematized to easily describe embodiments of the present disclosure and thus may be different from forms in which the items are actually embodied. Components to be described below are only examples provided to implement the present disclosure. Thus, in another embodiment of the present disclosure, other components may be used without departing from the idea and scope of the present disclosure. An expression such as “comprise”, when used herein to indicate the inclusion of some elements, is an open expression intended to simply indicate the presence of the elements and should not be understood as excluding other components. Although the present disclosure is described herein with respect to various embodiments, the present disclosure should not be understood as being limited thereto. It should be apparent to those of ordinary skill in the art that the present disclosure includes various alternatives, modifications, and equivalents. The terms “user terminal” and “electronic device” may each be understood as a smart phone, a wearable device (e.g., smart glasses, a watch, etc.), an Internet-of-Things (IoT) terminal, a personal digital assistant (PDA), a tablet personal computer (PC), a laptop computer, or any other devices capable of communicating with a server. The term “medium” includes a computer-readable storage medium. The computer-readable storage medium may be an available medium accessible by a computer. 
Examples of the computer-readable storage medium may include, but are not limited to, a random access memory (RAM), a read-only memory (ROM), an electrically erasable and programmable ROM (EEPROM), a compact disc (CD)-ROM, other optical disk storage devices, a magnetic disk storage device, other magnetic storage devices, and any other media that are capable of being used to deliver or store desired program code in the form of instructions or data structures and are accessible by a computer. Hereinafter, a device for printing a QR code according to an embodiment of the present disclosure will be described in detail with reference toFIG.1. A device100for printing a QR code includes at least one processor110and at least one memory120storing instructions causing the at least one processor110to perform operations when executed by the at least one processor110, an input device130configured to receive a QR image from a server200or receive a user input, and an output device140configured to spray ink so as to print a QR image. The processor110receives QR data from the server200. In this case, the QR data may be data for creating a QR image, e.g., data to which a base64-string technique is applied. Therefore, because one image matches one string, a QR image corresponding to the QR data may be created based on the QR data. When the created QR image is subjected to image conversion, the image-converted QR image is stored in the memory120and printed on an object by the output device140. The object includes a curved figure and an example of the curved figure is an egg. A curved figure such as an egg has an ovoid shape with different radii of curvature at points on a curved surface of the egg, and the present disclosure relates to a technique for printing a QR code on the curved figure. Hereinafter, image conversion performed on a QR code according to an embodiment of the present disclosure will be described in detail with reference toFIG.2.
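If the "base64-string technique" mentioned above means standard Base64 encoding of the QR image bytes (an assumption; the source does not specify the encoding details), the decoding step on the processor110side could be sketched as:

```python
import base64

def decode_qr_data(qr_data: str) -> bytes:
    """Decode the base64 QR data received from the server into the raw
    image bytes from which the first QR image is created. Because
    Base64 encoding is one-to-one, one image matches exactly one string."""
    return base64.b64decode(qr_data)

# Round trip with stand-in payload bytes (not a real QR image):
image_bytes = b"\x89PNG\r\n\x1a\n"
qr_data = base64.b64encode(image_bytes).decode("ascii")
decoded = decode_qr_data(qr_data)
```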
An original QR image (hereinafter referred to as the “first QR image”)300that is created based on QR data and has yet to be image-converted has a first length representing a length from a center of the first QR image300in a first direction, a second length representing a length from the center of the first QR image300in a second direction perpendicular to the first direction, and a third length representing a length from the center of the first QR image300in a third direction diagonal to the first and second directions. For example, when the first QR image300has a size of 1×1 cm², the first length may be 0.5 cm, the second length may be 0.5 cm, and the third length may be about 0.71 cm. A size of a QR image may be selected within a range of 1×1 cm² to 1.2×1.2 cm². A QR image (hereinafter referred to as the “second QR image”)400that is obtained by image-converting the first QR image300and is to be printed on a curved figure has a fourth length representing a length from a center of the second QR image400in the first direction, a fifth length representing a length from the center of the second QR image400in the second direction, and a sixth length representing a length from the center of the second QR image400in the third direction. Each side of the image-converted QR image having a square shape should be modified to be convex so that the image-converted QR image may be printed on a curved figure such as the shell of an egg and QR recognition may be implemented normally without an error using a camera of a general user terminal. Thus, the fourth length and the fifth length of the second QR image400are respectively greater than the first length and the second length of the first QR image300and are substantially the same. The sixth length of the second QR image400and the third length of the first QR image300are substantially the same.
As described above, the fourth length and the fifth length of the second QR image400are respectively greater than the first length and the second length of the first QR image300and are substantially the same, and length conversion is performed based on Equations (1) to (3) below.

x_d = x_u / (1 − α·|x_u|²),   Equation (1)

y_d = y_u / (1 − β·|y_u|²),   Equation (2)

r_d = r_u / (1 − γ·|r_u|²),   Equation (3)

In Equation (1), x_u denotes the first length, x_d denotes the fourth length, and α denotes a first weight assigned to the first direction. In Equation (2), y_u denotes the second length, y_d denotes the fifth length, and β denotes a second weight assigned to the second direction. In Equation (3), r_u denotes the third length, r_d denotes the sixth length, and γ denotes a third weight assigned to the third direction. Because the fourth and fifth lengths of the second QR image400are greater than the first and second lengths of the first QR image300and are substantially the same, α and β are greater than 0, and α and β are substantially the same. However, although the fourth and fifth lengths of the second QR image400are respectively greater than the first and second lengths of the first QR image300, maximum values thereof may be 1.4 times, and preferably, 1.3 times those of the first and second lengths of the first QR image300. Accordingly, α and β are greater than 0 and less than 0.25. Because the sixth length of the second QR image400and the third length of the first QR image300are substantially the same, γ is substantially 0. After determining the first to third weights, the first QR image300is divided into grids. For example, the first QR image300may be divided into grids having a size of 5×5 to 100×100, and preferably, grids having a size of 10×10. As shown inFIG.3, one egg has a region having a large radius of curvature and a region having a small radius of curvature.
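Read as a division-model stretch (the reconstruction assumed here for the garbled originals of Equations (1) to (3)), the length conversion can be sketched in a few lines. The normalized input length of 1.0 and the weight values are illustrative choices, not taken from the source:

```python
def convert_length(u: float, weight: float) -> float:
    """Division-model conversion of Equations (1)-(3):
    d = u / (1 - weight * |u|**2).
    A positive weight stretches the length (giving the convex sides);
    a zero weight leaves it unchanged (the diagonal, where gamma ~ 0)."""
    return u / (1.0 - weight * abs(u) ** 2)

# Lengths normalized so the first/second lengths are 1.0 (illustrative).
alpha = beta = 0.2   # first/second weights, chosen in (0, 0.25)
gamma = 0.0          # third weight: diagonal length is unchanged

fourth_length = convert_length(1.0, alpha)   # greater than the first length
fifth_length = convert_length(1.0, beta)     # substantially equals fourth
sixth_length = convert_length(1.0, gamma)    # equals the third length
```

With lengths normalized to 1, weights below 0.25 keep the stretch ratio under 1/(1 − 0.25) ≈ 1.33, which is consistent with the 1.3-times bound stated in the text.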
A radius of curvature of an uppermost end of an egg may vary according to the size of the egg, i.e., whether the egg is a large egg, a medium egg, or a small egg. However, because all regions of curved figures on which a QR code is to be printed have a convex shape, the lengths of grids on the same lines in the first and second directions passing through the center of the first QR image300may be increased to a maximum extent and the change of the lengths may be gradually reduced as distances between the grids increase. Thus, the second weight may be reduced from β to 0 as the distances of the grids from the center of the first QR image300in the first direction and a direction opposite to the first direction increase, and the first weight may be reduced from α to 0 as the distances of the grids from the center of the first QR image300in the second direction and a direction opposite to the second direction increase. The first weight may be assigned to the grid on the same line in the first direction, and the second weight may be assigned to the grid on the same line in the second direction. When the first weight is reduced from α to 0 or the second weight is reduced from β to 0, the first or second weight may be linearly or exponentially reduced as the distances of the grids from the center of the first QR image300in reference directions (i.e., the first direction and the direction opposite thereto or the second direction and the direction opposite thereto) increase. Referring toFIG.3, the larger the radius of curvature at the center of a curved figure on which the second QR image400is to be printed, the smaller the first weight and the second weight, and the smaller the radius of curvature at the center of the curved figure on which the second QR image400is to be printed, the larger the first weight and the second weight.
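The tapering of a weight from its peak at the center lines down to 0 toward the edges can be sketched as follows. The 10×10 grid matches the preferred division in the text; the exponential decay constant is an illustrative assumption, since the text only says the reduction may be linear or exponential:

```python
import math

def grid_weight(peak: float, i: int, n: int, mode: str = "linear") -> float:
    """Weight for grid line i of an n x n division (e.g. n = 10):
    the peak weight (alpha or beta) applies at the center of the image
    and falls to 0 as the distance from the center increases."""
    center = (n - 1) / 2.0
    d = abs(i - center) / center        # normalized distance in [0, 1]
    if mode == "linear":
        return peak * (1.0 - d)
    # Exponential fall-off; the decay constant 3.0 is an assumption.
    return peak * math.exp(-3.0 * d)

linear = [grid_weight(0.2, i, 10) for i in range(10)]
```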
That is, the first weight and the second weight when the radius of curvature at the center of the curved figure on which the second QR image400is to be printed is small may be respectively greater than the first weight and the second weight when the radius of curvature is large. In the image conversion according to an embodiment of the present disclosure, the second QR image400may be stored in the memory120only when a degree (e.g., a ratio) of change of the lengths of the grids is equal to or less than a predetermined value. When the degree of change of the lengths of the grids is greater than the predetermined value, the first weight and the second weight are adjusted at least once according to feedback, and the second QR image400may be stored in the memory120only when the degree of change of the lengths of the grids becomes equal to or less than the predetermined value. Specifically, it may be determined whether a first ratio of the fourth length of the second QR image400to the first length of the first QR image300is equal to or less than a first predetermined value, the second QR image400may be stored in the memory120when the first ratio is equal to or less than the first predetermined value, and the first weight may be reduced, a fourth length of a new second QR image400generated through image conversion may be calculated, and it may be determined whether a ratio of the fourth length of the new second QR image400to the first length of the first QR image300is equal to or less than the first predetermined value when the first ratio is greater than the first predetermined value.
Specifically, it may be determined whether a second ratio of the fifth length of the second QR image400to the second length of the first QR image300is equal to or less than the first predetermined value, the second QR image400may be stored in the memory120when the second ratio is equal to or less than the first predetermined value, and the second weight may be reduced, a fifth length of a new second QR image400generated through image conversion may be calculated, and it may be determined whether a ratio of the fifth length of the new second QR image400to the second length of the first QR image300is equal to or less than the first predetermined value when the second ratio is greater than the first predetermined value. The first predetermined value may be selected from a range of 1 to 1.4. Preferably, 1.3 may be selected as the first predetermined value. Hereinafter, image conversion performed on a QR code according to an embodiment of the present disclosure will be described in detail with reference toFIG.4. In image conversion according to an embodiment of the present disclosure, the second QR image400may be stored in the memory120only when a ratio between the area of a second QR image400generated by changing the lengths of the grids and the area of a first QR image300is equal to or less than a predetermined value. When the ratio between the area of the second QR image400and the area of the first QR image300is greater than the predetermined value, the first weight and the second weight may be adjusted at least once according to feedback, and the second QR image400may be stored in the memory120only when a ratio between new areas is equal to or less than the predetermined value.
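The feedback adjustment of a weight against the first predetermined value can be sketched as follows. The multiplicative reduction step is an illustrative assumption (the text only says the weight is reduced and the conversion repeated), and the division-model conversion is the reconstruction assumed for Equations (1) and (2):

```python
def fit_length_weight(u, weight, limit=1.3, step=0.9, max_iters=100):
    """Reduce the first (or second) weight until the ratio of the
    converted length to the original length u is at most `limit`
    (the first predetermined value, selected from the range 1 to 1.4)."""
    for _ in range(max_iters):
        d = u / (1.0 - weight * abs(u) ** 2)  # Equation (1) / (2)
        if d / u <= limit:
            return d, weight   # ratio check passed: store this image
        weight *= step         # adjust the weight according to feedback
    raise RuntimeError("weight adjustment did not converge")

length, final_weight = fit_length_weight(1.0, 0.3)
```

Starting from a weight of 0.3 (which over-stretches the length to about 1.43 times the original), the loop settles on a smaller weight whose stretch ratio is within the 1.3 bound.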
Specifically, the area of the first QR image300and the area of the second QR image400may be calculated, it may be determined whether a ratio of the area of the second QR image400to the area of the first QR image300is equal to or less than a second predetermined value, the second QR image400may be stored in the memory120when the ratio is equal to or less than the second predetermined value, and the first weight and the second weight may be reduced, an area of a new second QR image400generated through the image conversion may be calculated, and it may be determined whether a ratio of the area of the new second QR image400to the area of the first QR image300is equal to or less than the second predetermined value when the ratio is greater than the second predetermined value. The second predetermined value may be selected from a range of 1 to 1.4. Referring toFIG.3, a ratio between grids into which the first QR image300is divided may be reduced when the radius of curvature at the center of a curved figure on which the second QR image400is to be printed is large, and may be increased when the radius of curvature at the center of the curved figure on which the second QR image400is to be printed is small. Therefore, a QR recognition rate may be increased by selecting an appropriate division ratio according to a radius of curvature. Hereinafter, image conversion performed on a QR code according to an embodiment of the present disclosure will be described in detail with reference toFIG.5. A first QR image300may be divided into grids, a second QR image400may be generated by performing image conversion on the grids through changing of the lengths of the grids, and an anti-aliasing technique may be applied to the second QR image400, thereby smoothing a boundary between the grids.
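The area-based check can be sketched with a shoelace area computed over each image's outline. The square and stretched outlines below are illustrative stand-ins for the two QR images, and the second predetermined value of 1.4 is taken from the stated range:

```python
def polygon_area(points):
    """Shoelace area of a polygon given as (x, y) vertices in order."""
    s = 0.0
    for i, (x1, y1) in enumerate(points):
        x2, y2 = points[(i + 1) % len(points)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def area_ratio_ok(first_outline, second_outline, limit=1.4):
    """True when the area of the second QR image is at most `limit`
    times the area of the first (the second predetermined value);
    otherwise the weights would be reduced and the check repeated."""
    return polygon_area(second_outline) <= limit * polygon_area(first_outline)

first = [(0, 0), (1, 0), (1, 1), (0, 1)]                       # 1 x 1 first image
second = [(-0.1, -0.1), (1.1, -0.1), (1.1, 1.1), (-0.1, 1.1)]  # stretched image
```

Here the stretched outline has area 1.44, so the check fails against a limit of 1.4 and the weights would be reduced according to feedback.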
In this case, a boundary between grids divided in a QR image420to which the anti-aliasing technique is applied may be processed more smoothly than a boundary between grids divided in a QR image410to which the anti-aliasing technique has yet to be applied. Therefore, by applying the anti-aliasing technique, a QR recognition rate for a curved figure such as an egg may be increased. Hereinafter, a system for printing a QR code according to an embodiment of the present disclosure will be described in detail with reference toFIG.6. The system for printing a QR code according to the embodiment of the present disclosure may include the device100for printing a QR code, a clock500with a certain period, a conveyor belt600located below the device100and configured to transfer a curved figure according to the period of the clock500, and a sensor700located on the conveyor belt600and configured to detect whether there is a curved figure on the conveyor belt600. Here, because the clock500is synchronized with the device100and the sensor700, when a curved figure on the conveyor belt600is detected by the sensor700, the device100may print the second QR image400on the curved figure located below the device100. Meanwhile, the sensor700may be, for example, an optical sensor or an infrared (IR) sensor. The above-described embodiments of the present disclosure are intended for purposes of illustration and the present disclosure is not limited thereto. In addition, various modifications and changes may be made in the present disclosure by those of ordinary skill in the art without departing from the spirit and scope of the present disclosure and should be considered to be within the scope of the present disclosure.
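The anti-aliasing step described above can be sketched as supersampling with a box filter, a common way to smooth hard grid boundaries; whether the patent's implementation uses this particular filter is an assumption:

```python
def box_downsample(img, factor):
    """Render the converted QR image at `factor` times the target
    resolution (as nested lists of 0/255 pixel values), then average
    each factor x factor block. Blocks straddling a grid boundary come
    out gray, smoothing the boundary as in QR image420."""
    out = []
    for y in range(0, len(img), factor):
        row = []
        for x in range(0, len(img[0]), factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# A tiny high-resolution patch with a jagged black/white boundary:
hi_res = [[0, 0, 255, 255],
          [0, 0, 255, 255],
          [0, 255, 255, 255],
          [0, 255, 255, 255]]
smoothed = box_downsample(hi_res, 2)
```

The block that straddles the jagged boundary averages to a mid gray, while fully black or fully white blocks are unchanged.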
REFERENCE NUMERALS

100: device for printing QR code
110: processor
120: memory
130: input device
140: output device
200: server
300: first QR image
400: second QR image
410: QR image to which anti-aliasing technique has yet to be applied
420: QR image to which anti-aliasing technique is applied
500: clock
600: conveyor belt
700: sensor
11861432 | DETAILED DESCRIPTION In this document, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. The term “comprising” means “including, but not limited to.” Similarly, the term “comprises” means “includes, and is not limited to.” Unless defined otherwise, all technical and scientific terms used in this document have the same meanings as commonly understood by one of ordinary skill in the art. In this document, terms that are descriptive of position such as “upper” and “lower”, “horizontal”, “vertical” and the like are intended to indicate relative positions with respect to the components for which those terms are descriptive, and are not intended to be absolute and require that the component remain in that absolute position in all configurations. Except where specifically stated otherwise, numeric descriptors such as “first”, “second”, etc. are not intended to designate a particular order, sequence or position in an overall process or schema, but instead are simply intended to distinguish various items from each other by describing them as a first item, a second item, etc. The terms “media transport system” and “media transport device” refer to a set of hardware components that are configured to receive printed media (i.e., a substrate onto which text and/or graphics have been printed) and move the printed media through one or more modules that perform various processing steps on the printed media, such as position adjustment, sensing, printing and/or delivery to a final destination. A “currency transport device” or “currency transport system” is a type of media transport device that is configured to process and dispense, accept or otherwise convey printed financial instruments such as currency notes, checks, money orders, bank notes and the like. Other types of media transport devices include financial transaction card readers, ticket-taking machines, and the like. 
FIG.1Ais a block diagram that illustrates example components of a media transport system, such as may exist in a prior art automated teller machine or ticket taking machine.FIG.1Billustrates an example of an automated teller machine that includes the components shown inFIG.1A. The machine includes a housing120that contains the media transport system. As shown inFIG.1B, optionally the housing may include a door121and one or more extendible rails122, shuttles, or other movable mechanisms that may be used to remove at least part of the media transport system from the housing120. The media transport system includes components such as a receiver111that includes belts, rollers or other conveying mechanisms that will receive media into the system via a media portal—i.e., an opening in the housing. The media transport system also may include a separator module112that includes belts, rollers or other media conveyor devices that separate stacked printed media, such as stacked checks or currency notes. In currency transport systems, such rollers are sometimes referred to as “pick rollers” or “pick wheels.” The media transport system also may include an alignment module113that includes belts, rollers or other conveying mechanisms that will adjust a position of the media with respect to an internal reference point. The media transport system also may include an imaging system114with a camera and/or other sensors and associated programming that can detect what the media is and/or what is printed on the media. The media transport system also may include a transport module115with one or more belts, rollers or other media conveyors that will move the printed media to an appropriate destination116such as a shuttle, bin or exit port. The media transport system also may include other modules, such as a printer that can apply additional printed markings to the printed media. 
The areas within each module through which the printed media passes form a media travel pathway from the receiver's intake to the final destination. FIG.2illustrates an example of a cleaning system that includes a cleaning substrate201, which sometimes may be referred to as a cleaning card, that may be used to clean various components of a media transport system such as that shown inFIGS.1A and1B. The cleaning substrate includes a cleaning face202(which as shown is an upper face) and an opposing face (not shown, since it is under cleaning face202) that may or may not also be a cleaning face. In this embodiment and in each other embodiment described in this document, at least one of the cleaning faces may be coated with a cleaning solution such as a solvent so that internal components of the media transport system may be cleaned when the components move along or across the cleaning face of the substrate. Example cleaning solutions include isopropyl alcohol, deionized water, alkaline surfactants, and other materials or combinations of these. Alternatively, a cleaning face may be textured or made of fiber that will promote friction when a movable object is moved against the cleaning face. The cleaning substrate may be made of a flexible, tear-resistant material such as a fibrous aramid or meta-aramid fabric material such as that marketed under the NOMEX® brand; a cellulosic material; a flexible polymeric substrate provided with thin, non-woven layers made of absorbent material such as that marketed under the SONTARA® brand; a sponge made of polyurethane or other type of foam; or a combination of any of these, such as a sponge coated with a fabric, non-woven absorbent or cellulosic material. At least a portion of the cleaning substrate has a length and width that is sized and shaped to fit within the media travel pathway of the transport device with which the cleaning substrate is intended to be used. 
As used in this document, the term “fit within” does not necessarily mean that the substrate is entirely held within the media travel pathway, as a handle or other extension of the substrate may extend from the media travel pathway outside of the transport device so that a person can hold and move it into and out of the device. An example is handle327as shown inFIG.3, as well as the handles such as are generally known on “T-cards” that have a handle portion that is wider than an insertable portion. However, in some optional embodiments the cleaning substrate will be retained completely within the media travel pathway. In some optional embodiments, no portion of the substrate will extend from a media acceptor port (such as a currency acceptor slot) of the transport device. The cleaning substrate will include multiple cleaning structures, each of which is positioned to align with and clean a belt, drive roller, idle roller, drive roller/idle roller pair, and/or one or more sensors of the media transport device. Other cleaning structures may include discrete raised areas made of peaks and peripheral walls that slope or otherwise extend from the peaks, such as those disclosed in U.S. Pat. No. 8,323,779, the disclosure of which is incorporated into this document by reference. For example, referring toFIG.3, the cleaning substrate will include multiple scarifying holes312,313that are positioned to align over, under, or in between rollers of the media transport device. A scarifying hole is an opening created and positioned to provide a scraping edge against which another item may be scraped so that debris or other contaminants on the item's surface are scraped and thus removed via movement of the scarifying hole, the item or both. Referring toFIG.4, the substrate401includes a scarifying hole411sized and positioned to fit between an idle roller423and drive roller425of the media transport system. 
In normal operation when the media transport system receives printed media, the drive roller425presses against the media, and a motor turns the drive roller425to move the media through the media travel pathway. The idle roller423is positioned proximate to the drive roller425to serve as a backstop for the force of the drive roller425. The printed media passes between the idle roller423and drive roller425. In a cleaning operation, the drive roller425is activated but the cleaning substrate remains stationary, anchored in the transport path by the locking structure(s). The scarifying hole411receives the idle roller423and/or the drive roller425so that one or both rollers scrape along the edges of the scarifying hole411and are thus cleaned. In configurations that only use a drive roller without an idle roller, the drive roller may be received into the scarifying hole. The drive rollers and/or idle rollers of other drive roller435/idle roller433pairs that are not aligned with the scarifying hole411will be wiped by the cleaning substrate. Then, when the cleaning substrate is moved to a different position in the media transport path (and a different locking member receptacle or media transport system post is used to lock the cleaning substrate in the different position) the other drive roller435/idle roller433pair may be aligned with the scarifying hole411or a different scarifying hole in the substrate. Scarifying holes also may be sized and positioned to align with and accept one or more belts or other moving parts of the media travel pathway. For example, referring toFIG.5, one or more axles522, when activated, turn a belt525while the cleaning substrate is positioned within the media travel pathway. If the scarifying hole has a width that is smaller than, or at least no greater than, the width of the belt525, the belt525will be scraped by the scarifying hole511and wiped by the cleaning substrate501.
In addition, if the scarifying hole511is positioned to align with the axle522, the axle522may help push a segment of the belt525into the scarifying hole511to promote scraping of the belt525along the edge of the scarifying hole511. FIG.6illustrates an alternative toFIG.5in which, as an alternative to (or in addition to) a scarifying hole, a belt scraper is formed by one or more inwardly-facing flaps618a,618bthat may be lifted upward, pressed downward, or otherwise moved so that the belt may be positioned under or over each flap. Each flap618a,618bwill have at least one side that is attached to the cleaning substrate and at least one edge that is cut away from the substrate so that it can be lifted or pressed and receive the belt. The edge may be a single curved edge as shown, or multiple angled edges may be used. As the belt is operated, the cut-away edge of the flap will scrape the belt. The face of the cleaning substrate also may wipe the belt. In addition, both sides of the belt may brush across the cleaning surface of the substrate and the flap as the belt passes over the substrate and under the flap (or over the flap and under the substrate). This may result in the deposit of dirt, oil or other contaminants634on the cleaning substrate, as shown inFIG.6. Optionally, the flaps618a,618bmay be adjacent to a scarifying hole611. In some embodiments, the flaps618a,618bmay hold the cleaning substrate in a position by contact with the belts during operation. The embodiment ofFIG.6also illustrates an embodiment with a handle617for grasping while inserting a body613of the substrate into the media transport device. The handle617is positioned so that it will remain outside of the media transport device while the body613, which includes a cleaning surface, is within the media transport device. In some embodiments the handle617may extend from the currency acceptor port (or other media acceptor port) of the device when in use.
In other embodiments, the handle617may extend from a portal that is not a media acceptor port (such as the portals discussed below in the context ofFIG.8). The handle617can also be used to move the body613inward and outward, and optionally to wiggle the card from side to side, to move the body over a range of positions. The addition of a handle is not limited to the embodiment ofFIG.6; other embodiments (such as embodiments with scarifying holes with or without flaps, embodiments with belt scrapers, embodiments with holes that align with sensors as inFIG.7) may include a handle. FIG.11illustrates an example of a cleaning system that includes a cleaning substrate1101that includes a cleaning face1102(which as shown is an upper face) and an opposing face (not shown, since it is under cleaning face1102) that may or may not also be a cleaning face. In the example card1101shown inFIG.11, the cleaning surface1102is made up of some or all of the top or bottom of the card1101. Either or both surfaces of the cleaning card1101may hold a cleaning solution, which is transferred to components of the media transport device during use. In addition, although not shown inFIG.11, the cleaning surface1102and optionally both surfaces may include cleaning elements as described in the other embodiments. An area adjacent to one end of the card1101is a gripping area1104that will be visible and accessible to a human operator when placed in operation in a media transport device. In some embodiments, the gripping area1104may include a pull tab1103to facilitate gripping, so that the gripping area and pull tab together serve as a handle. The pull tab1103may be in the form of a hole of any shape within the gripping area (as with the circular hole ofFIG.11). However, other pull tab structures may be attached to or integral with the gripping area1104, including loops, fins, or other items that attach to or extend from the gripping area while remaining secured to the gripping area1104. 
Referring toFIG.7, in some embodiments scarifying holes or other holes711of the cleaning substrate701also may be sized and positioned to align with one or more sensors722within the media travel pathway. Such a sensor722may include a pressure sensor, an optical sensor, a temperature sensor, and/or any other sensor that is available in the media transport system and may, in some embodiments, include a transmitter723aand receiver723bas is shown inFIG.7. The media transport device may use the sensor722to detect the position of the cleaning substrate701within the media travel pathway (e.g., based on pressure or optical data, determining whether the sensor is over a scarifying hole). In addition, in some media transport systems one or more sensors722may be used to detect and issue an alert indicating whether the device is jammed and media is not moving through the media travel pathway; if so, the placement of a scarifying hole711or other opening under such a sensor can help avoid the media transport system stopping. Instead of a hole, the opening may be a transparent material so that the media sensor does not detect the substrate when the opening is positioned over the media sensor. The system knows that the cleaning substrate is in proper position when it detects the hole because, if the hole711or opening had not been positioned under the sensor722, the sensor722would have detected the presence of the non-moving substrate and thus detected that the cleaning substrate is not moving through the device. Optionally, the sensor722also may be used to alert an operator that he or she has placed the cleaning substrate in a proper position so that the device may be cleaned.
For example, when the sensor722detects a hole, it may cause a user interface associated with the media output device to output an audible or visual alert indicating that the cleaning substrate is in a proper cleaning position, but if the sensor722does not detect a hole it may cause the user interface to output an alert indicating that the cleaning substrate is not in a proper cleaning position. Referring back toFIG.2, in some embodiments one or more of the scarifying holes211may include a scraper217that is attached to one or more edges of the scarifying hole. The scraper217extends inwardly toward the center of the scarifying hole211. Referring toFIG.4, depending on whether the scraper417is positioned closer to an upper face, closer to a lower face, or centrally within the scarifying hole, the scraper may provide additional scraping force against the idle roller423and/or the drive roller425. FIG.10illustrates an example cleaning face1001with discrete raised areas made of peaks1003and peripheral walls1005that extend downward from the peaks to the cleaning face. In the example shown inFIG.10, the machine direction of the card (i.e., the direction in which the card will be inserted into the media transport device) is left-to-right or right-to-left. The peaks1003may be in the form of an apex as shown inFIG.10, or they may be ledges with walls extending down from a flat ledge surface. The discrete raised areas may extend from only one side of the cleaning face, or they may extend from opposing sides of the cleaning face as shown inFIG.10. In any of the embodiments described above, the cleaning face(s) of the cleaning substrate may be textured to provide additional cleaning function (e.g., by applying friction to belts that pass over the cleaning substrate). The cleaning face(s) also may include a material such as a meshed loop structure that entangles dirt to trap it.
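The sensor-based position check described earlier in this section reduces to a simple rule: a media sensor covered by a stationary substrate would indicate a jam, while a sensor that sees through a scarifying hole (or transparent opening) indicates a proper cleaning position. A minimal sketch in Python, with hypothetical function and message names (the patent does not specify an implementation):

```python
def check_cleaning_position(sensor_blocked: bool) -> str:
    """Sketch of the hole-alignment check (names are hypothetical).

    A media sensor in the travel pathway normally reports 'blocked' when
    stationary media covers it. If a scarifying hole or transparent
    opening of the cleaning substrate is aligned with the sensor, the
    sensor is NOT blocked, so the substrate is in a proper cleaning
    position and no jam alert is needed.
    """
    if not sensor_blocked:
        # Hole/opening aligned with the sensor: proper cleaning position.
        return "cleaning position OK"
    # Sensor detects a non-moving substrate: would normally trigger a jam alert.
    return "not in cleaning position"
```

In a real device this result would drive the audible or visual alert described above rather than return a string.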
FIG.8further illustrates a method of using a cleaning system such as that described above. The method includes accessing the media transport device via a portal into which a first cleaning substrate may be placed (step801). Optionally, the portal may be a portal that is not accessible during normal operation of the device. For example, the portal may not be a currency acceptor slot that is used during normal operation to insert currency into the media transport device. Instead, the portal will be formed by opening one or more of the modules that form the media travel pathway. Alternatively, the substrate may be partially or fully inserted through the currency acceptor slot or other portal. The cleaning substrate will be inserted into a media travel pathway of the media transport device through the portal. The cleaning substrate may be placed into position with or without operating the motor that actuates the media conveyors (i.e., belts and/or rollers) of the media transport device (step802). In some embodiments, the media transport device may detect that a cleaning substrate has been placed into the media travel pathway, and if so it may automatically change its mode of operation to a cleaning mode rather than a normal operating mode. The cleaning mode may differ from the normal mode in that, for example, it may hold the cleaning substrate in a particular location for a defined period of time before moving the substrate to a next section, or it may adjust the pressure applied to the substrate, or it may override a “device jam” alert and permit the media conveyors to turn even though the cleaning substrate is not moving through the machine in a normal mode of operation. Detection that the substrate is a cleaning substrate may occur by any suitable means, such as by manual input, by detecting a shape of the substrate, or by using image processing to detect a code or other identifying indicia that is printed on the substrate. 
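The automatic switch into a cleaning mode described above can be sketched as a small decision function. The identifier string and function name are hypothetical; the patent only says detection may occur by manual input, by substrate shape, or by a printed code or other indicia:

```python
def select_operating_mode(detected_indicia: str) -> str:
    """Choose the device mode from indicia read off the inserted
    substrate (illustrative sketch; names are not from the patent).

    A substrate recognized as a cleaning substrate switches the device
    into a cleaning mode (e.g., hold the substrate in place for a
    defined time, adjust pressure, override the 'device jam' alert);
    anything else is treated as normal media.
    """
    if detected_indicia == "CLEANING_SUBSTRATE":
        return "cleaning"
    return "normal"
```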
If the cleaning substrate includes scarifying holes, the scarifying holes may be aligned to rollers, belts, sensors and/or other selected components of the media transport device that are in the media travel pathway. If the cleaning substrate includes one or more flaps for cleaning a belt, one or more belts may be positioned over or under the flap(s). The cleaning substrate may remain fully within the media travel pathway. Alternatively, a portion of the cleaning substrate may extend out from the media travel pathway, such as through a currency acceptor slot, so long as enough of the cleaning substrate remains within the pathway to provide a cleaning function. If opened, the portal to the first section will then be closed (step803) so that at least some of the belts or rollers contact the first cleaning substrate. A first section of the media transport device will thus receive the cleaning substrate through the portal, and the substrate will then be moved within the pathway (step804) to clean various components of the pathway. For example, an operator may grasp a handle of the cleaning substrate, insert a body of the cleaning substrate into a portal of the media transport device, and jostle the handle to move the body through a range of positions within the pathway to cause the scarifying holes and/or flaps of the body to clean the rollers, belts and/or other components of the pathway over the range of positions. In addition, or alternatively, a motor of the media transport device may be operated so that the belts or rollers move, contact the first cleaning substrate and are cleaned while the first cleaning substrate is positioned within the first section of the media transport device (step805). For example, while the cleaning substrate is in the first section, scarifying holes that align with the media conveyors (e.g., belts or rollers) may contact and clean the media conveyors while the media transport device is operated.
The motor may be used to help move the substrate over a range of positions, or the motor may be operated to turn the belts or rollers while an operator grasps the handle and holds the substrate in a position or range of positions. The device may then be turned off (i.e., powered down or moved to an idle mode in which the belts and rollers of the media travel pathway are not operated), and portal(s) will be opened to withdraw the cleaning substrate(s) from the media travel pathway (step807) so that it may optionally be reinserted in a different position. Alternatively, if a portion of the substrate extends from the portal, the substrate may be withdrawn (step807) via the handle. Optionally, before opening the first section and placing the cleaning substrate through the portal, the method may include operating the media transport device and, while operating the motor, placing a second cleaning substrate (step810) that includes a cleaning solution into a second portal that is accessible during operation of the motor so that the second cleaning card is received into, and moves through, the media transport device, and the cleaning solution contacts the belts or rollers while the second cleaning substrate moves through the media transport device. In this way, cleaning solution may be applied to the media travel pathway before the stationary card is inserted, and the stationary card may then require little or no cleaning solution. This also may help pre-clean the components of the media travel pathway before the stationary cleaning substrates are inserted.
Optionally, after moving the substrate in the pathway (step804) and/or operating the media transport device so that the belts or rollers turn, contact the first cleaning substrate and are cleaned (step805), if a portal was opened to place the substrate in the pathway the method may include re-opening the portal to expose access to a section of media travel pathway and repositioning the cleaning substrate to a second position in the section according to a second alignment position (step806). That portal may then be closed, and the substrate will again be moved, and/or the motor of the media transport device will be again operated, so that at least some of the belts or rollers contact the cleaning substrate and are further cleaned while the cleaning substrate is in the second position. Alternatively, rather than powering down the device and opening the sections, operation of the media transport device may cause the cleaning substrate to move to a second section. If so, the media conveyors may move the cleaning substrate between the sections as they do with currency. The scarifying holes, flaps and/or cleaning face of the cleaning substrate will then clean the media conveyors and other features of the second section. In this embodiment, if the media transport device is operating in a cleaning mode, it may move the substrate to the second section after a threshold period of time, or in response to a manual input, or when it detects that the cleaning substrate has achieved a threshold level of cleaning in the first section. Detection of the threshold level of cleaning may occur by using a camera to capture images of the cleaning substrate and processing the images to determine when patterns associated with dirt appear at locations on the substrate that are expected to become dirty after cleaning.
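One way to read the image-based cleanliness check just described: sample grayscale values at substrate locations that are expected to darken as dirt is picked up, and declare the threshold level of cleaning reached once those areas are dark enough. A sketch under assumed conventions (0 = black, 255 = white; the threshold value and names are illustrative, not from the patent):

```python
def cleaning_threshold_reached(gray_values, threshold=96):
    """Sketch of the grayscale-based completion check (hypothetical
    names and threshold). `gray_values` are readings sampled at the
    substrate locations expected to become dirty after cleaning.
    Cleaning is judged complete once the mean reading falls to or
    below the threshold, i.e. the monitored areas are dark enough.
    """
    mean_gray = sum(gray_values) / len(gray_values)
    return mean_gray <= threshold
```

A real implementation would first locate the monitored regions in the captured camera image; the sketch assumes that segmentation has already been done.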
Optionally, the opacity or grayscale value of the markings in these areas may be monitored, and cleaning may be determined to be complete when a threshold opacity or grayscale level is achieved. FIG.12further illustrates a method of using a cleaning system such as that described above. The method includes accessing a media travel pathway of the media transport device via a hatch into which a first cleaning substrate may be placed (step1201). Optionally, the hatch may be a portal that is not accessible during normal operation of the device. For example, the hatch may not be a media acceptor slot that is used during normal operation to insert currency into the media transport device. Instead, the hatch will be formed by opening one or more of the modules that form the media travel pathway. Alternatively, the substrate may be partially or fully inserted through the media acceptor slot or other opening. A first cleaning substrate will be inserted into a media travel pathway of the media transport device through the hatch. This cleaning substrate may be a card containing a pull tab, such as that shown inFIG.11, or it may be a card of other embodiments. This substrate, which may serve as a pretreatment substrate, may be placed into position with or without operating the motor that actuates the media conveyors (i.e., belts and/or rollers) of the media transport device (step1202). In some embodiments, the media transport device may detect that a cleaning substrate has been placed into the media travel pathway, and if so it may automatically change its mode of operation to a cleaning mode rather than a normal operating mode, and it may then operate in the cleaning mode.
The cleaning mode may differ from the normal mode in that, for example, it may hold the cleaning substrate in a particular location for a defined period of time before moving the substrate to a next section, or it may adjust the pressure applied to the substrate, or it may override a "device jam" alert and permit the media conveyors to turn even though the cleaning substrate is not moving through the machine in a normal mode of operation. Detection that the substrate is a cleaning substrate may occur by any suitable means, such as by manual input, by detecting a shape of the substrate, or by using image processing to detect a code or other identifying indicia that is printed on the substrate. The hatch to the first section will then be closed (step1203) so that at least some of the belts or rollers contact the first cleaning substrate. A first section of the media transport device will thus receive the cleaning substrate through the hatch. Optionally, the media transport device may then be operated for a period of time (such as 10-30 seconds) that is sufficient to transfer cleaning solution from the first cleaning substrate to various components within the pathway (step1204) while belts, rollers or other components of the pathway move over or under the substrate. Operation of the device may include operating the device's motor, and/or manually turning one or more belts or rollers in the transport path. However, operation of the device in this step is optional. The operator may then grasp the handle (by hand or with a gripping tool) of the first cleaning substrate and remove it from the media pathway (step1205) through a second portal of the device, either after operation of the device has stopped, while the device is still operating, or simply after the substrate has been positioned in the pathway without a motor of the device having been operated.
Optionally, the technician may repeat steps1201-1205one or more times, each time repositioning the substrate in the pathway at the same location or in one or more different locations of the pathway. After steps1201-1205are completed, if no further cards are to be placed in the pathway the process may end (step1215). However, after pretreatment is completed, the technician may then place a second cleaning substrate into the media transport pathway (step1211). The technician may do this by opening the pathway and placing the second cleaning card into it, or by feeding the second substrate into a media acceptor of the device so that the second substrate moves along the pathway. As noted above, the second substrate is typically dry (but optionally contains a cleaning solution). If the second cleaning substrate includes scarifying holes, the scarifying holes may be aligned to rollers, belts, sensors and/or other selected components of the media transport device that are in the media travel pathway. If the second cleaning substrate includes one or more flaps for cleaning a belt, one or more belts may be positioned over or under the flap(s). The cleaning substrate may remain fully within the media travel pathway. Alternatively, a portion of the second cleaning substrate may extend out from the media travel pathway, such as through a currency acceptor slot, so long as enough of the second cleaning substrate remains within the pathway to provide a cleaning function. In addition, or alternatively, a motor of the media transport device may be operated so that the belts or rollers move, contact the second cleaning substrate and are cleaned while the second cleaning substrate is positioned within a section of the media transport device (step1212). For example, while the cleaning substrate is in a first section, scarifying holes that align with the media conveyors (e.g., belts or rollers) may contact and clean the media conveyors while the media transport device is operated.
The motors may be used to help move the substrate over a range of positions, or the motor may be operated to turn the belts or rollers while an operator grasps the handle and holds the substrate in a position or range of positions. Optionally, in the process described above, the first cleaning substrate may in fact include multiple stacked pretreatment and/or cleaning cards. If so, then in step1202the technician may place the stack of substrates together in the pathway. The technician may then run the device in step1204and selectively pull individual cards from the stack in step1205. For example, the technician may pull the top card from the stack and/or the bottom card from the stack, while the middle card(s) remain in place. In this way, the middle card(s) may add friction and apply moisture to the device's belts and/or rollers as the top and/or bottom card is pulled from the transport path. If the cards will be repositioned (step1217), the repositioned cards may include all cards from the original stack, or the repositioned cards may include a subset of the original cards. These techniques with multiple stacked cards may allow a longer pretreatment time, as well as the removal of cards that may have become dirty, and more movement of cards within the transport pathway to provide a better cleaning result. When this happens, the individual cards within the stack may include scarifying holes, embossments and/or other structures such as those discussed earlier in this document to further clean the rollers, belts and other components as the cards are pulled from the path. Each card may include the same pattern, or different cards in the stack may include different patterns. 
The device may then be turned off (i.e., powered down or moved to an idle mode in which the belts and rollers of the media travel pathway are not operated), and the hatch through which the second substrate was placed may be opened to withdraw the second cleaning substrate from the media travel pathway (step1215) so that it may optionally be reinserted in a different position. Alternatively, in step1215, if a portion of the substrate extends from the portal, the substrate may be withdrawn via the portal. Alternatively, rather than powering down the device and opening the sections, operation of the media transport device may cause the second cleaning substrate to move to a second section. If so, the media conveyors may move the second cleaning substrate between the sections as they do with normal (i.e., non-cleaning) media. The scarifying holes, flaps, discrete raised areas and/or cleaning face of the second cleaning substrate will then clean the media conveyors and other features of the second section. In this embodiment, if the media transport device is operating in a cleaning mode, it may move the second cleaning substrate to the second section after a threshold period of time, or in response to a manual input, or when it detects that the cleaning substrate has achieved a threshold level of cleaning in the first section. Detection of the threshold level of cleaning may occur by using a camera to capture images of the second cleaning substrate and processing the images to determine when patterns associated with dirt appear at locations on the second cleaning substrate that are expected to become dirty after cleaning. Optionally, the opacity or grayscale value of the markings in these areas may be monitored, and cleaning may be determined to be complete when a threshold opacity or grayscale level is achieved. FIG.9illustrates that in any of the embodiments described above, the cleaning substrate901may be attached to a scroll902.
The cleaning substrate may include scarifying holes911-913that move through the device and clean various elements as the cleaning substrate is withdrawn from the scroll902and moved into the media transport device. The cleaning substrate901(which may include a cleaning face and/or a non-cleaning face of the substrate) may be wrapped around a roller921of the scroll and withdrawn from a housing922of the scroll through an opening in the housing922. A retraction mechanism923such as a spring may create a force that causes the roller921to wind in a direction that will tend to withdraw the cleaning substrate901back into the housing922around the roller921. A clutch924may hold the roller in place and prevent the roller921from withdrawing the cleaning substrate into the scroll until the clutch924is released. When cleaning is completed, the clutch924may be released, which will permit the cleaning substrate901to be retracted by the retraction mechanism923and wrapped around the roller921inside of the scroll housing922. (FIG.9shows shapes representing the retraction mechanism and clutch as being outside of the housing, but this is for purposes of illustration only; these elements also may be positioned inside of the housing.) The methods and systems described above may result in significant time savings as compared to manual cleaning. In addition, they can help ensure that cleaning occurs in small and/or hard-to-reach segments within the media transport device. The features and functions described above, as well as alternatives, may be combined into many other different systems or applications. Various alternatives, modifications, variations or improvements may be made by those skilled in the art, each of which is also intended to be encompassed by the disclosed embodiments.
11861433

Hereinafter, one embodiment of the present disclosure will be described while referring to the accompanying drawings.

First Embodiment

Structure of an Image Forming Device1

FIG.1is a side cross-sectional view showing the structure of an image forming device1according to a first embodiment of the present disclosure. InFIG.1, the direction in which process members42are aligned will be called an X-axis direction, the direction from a sheet-feeding member3toward an image forming member4will be called the positive direction on a Z-axis, and a direction orthogonal to the X-axis direction and the Z-axis direction will be called a Y-axis direction. In this example, the image forming device1is a horizontal tandem color laser printer having a plurality of process members42juxtaposed in the X-axis direction. The image forming device1is also configured to scan a plurality of beams for neighboring scan lines on a photosensitive member51and to perform interlaced scanning in which the beams are scanned at positions separated from each other by one or more scan lines. The scan lines are juxtaposed in a sub scanning direction, which is the same direction in which sheets S1are conveyed. As shown inFIG.1, the image forming device1is provided with a main casing2, a sheet-feeding member3, an image forming member4, and a discharging member5. The main casing2accommodates the sheet-feeding member3, the image forming member4, and the discharging member5. The sheet-feeding member3feeds sheets S1and is provided with a sheet cassette31, a feed roller32, conveying rollers33, a feeding path34, and registration rollers35. The sheet cassette31is disposed in the bottom section of the main casing2. Sheets S1are stacked in the sheet cassette31. The feed roller32is disposed above the sheet cassette31for conveying sheets S1stacked in the sheet cassette31to the conveying rollers33. The conveying rollers33convey sheets S1fed by the feed roller32onto the feeding path34.
The upstream end of the feeding path34is adjacent to the conveying rollers33, while the downstream end of the feeding path34is adjacent to the registration rollers35. The feeding path34configures a conveying path for conveying sheets S1from the conveying rollers33to the registration rollers35. The image forming member4forms images on sheets S1that have been supplied by the sheet-feeding member3. The image forming member4is provided with a scanning optical device41, the process members42, a transfer member43, and a fixing member44. The image forming member4will be described later in detail.

Structure of the Scanning Optical Device41

The scanning optical device41is disposed in the upper section of the main casing2over the process members42. FIG.2is a cross-sectional view in the positive direction of the Y-axis showing the scanning optical device41provided in the image forming device1ofFIG.1. FIG.3is a view in the negative direction of the Z-axis showing the scanning optical device41depicted inFIG.2. FIG.4shows the structure of a semiconductor laser412M provided in the scanning optical device41depicted inFIG.3. As shown inFIGS.2and3, the scanning optical device41is provided with a frame411; semiconductor lasers412M,412K,412Y, and412C; an incident optical system413; a polygon mirror414as an example of the deflector; scanning optical systems415; and a beam detect (BD) sensor8as an example of the optical sensor. Each of the semiconductor lasers412M,412K,412Y, and412C is provided with N laser light-emitting members (where N is a natural number of 2 or greater). As shown inFIG.4, the semiconductor laser412M has two laser light-emitting members E1and E2(i.e., N=2), for example. The frame411is box-shaped and open on the positive side of the Z-axis. The frame411is formed of a resin. The frame411retains the polygon mirror414; the semiconductor lasers412M,412K,412Y, and412C; the incident optical system413; and the scanning optical systems415.
As shown inFIG.2, the frame411is provided with a bottom wall WL1, and side walls WL3extending in the positive direction of the Z-axis from the peripheral edges on all four sides of the bottom wall WL1. An exit window WL2is formed in the bottom wall WL1for each color. The exit windows WL2are spaced apart from each other in the X-axis direction. The polygon mirror414is disposed on a motor board414B in the approximate center of the frame411. The polygon mirror414deflects the beam emitted from the incident optical system413. The incident optical system413will be described later. The polygon mirror414has a polyhedral shape with a plurality of beam deflecting surfaces414S. The polygon mirror414is driven to rotate at a high speed by the drive force of a scanner motor disposed on the motor board414B. The polygon mirror414rotates about a rotational shaft414A provided in the center of the polygon mirror414. As shown inFIG.3, the semiconductor lasers412M and412K are aligned in the X-axis direction. The semiconductor lasers412C and412Y are spaced apart but face each other in the X-axis direction. The semiconductor lasers412M and412K emit light toward the positive direction of the Y-axis. Light emitted from the semiconductor laser412M travels in a direction substantially orthogonal to light emitted from the semiconductor laser412Y, and light emitted from the semiconductor laser412K travels in a direction substantially orthogonal to light emitted from the semiconductor laser412C. The incident optical system413is provided with four coupling lenses413L, four slitted plates413S, two reflective mirrors413M, and two cylindrical lenses413Z. The coupling lenses413L convert light emitted from the laser light-emitting members of the corresponding semiconductor lasers412M,412K,412Y, and412C into beams. The slitted plates413S are arranged opposite the respective coupling lenses413L.
Widths of beams exiting the coupling lenses413L are regulated by slits formed in the corresponding slitted plates413S. One of the two reflective mirrors413M is arranged at a slope of approximately 45° relative to the flat plates configuring the substantially L-shaped slitted plate413S that is disposed in opposition to the semiconductor laser412Y. This reflective mirror413M is shaped to reflect light emitted from the semiconductor laser412Y at an angle of approximately 90°. The reflective mirror413M is offset in the Z-axis direction from the path of light emitted from the semiconductor laser412M so that light emitted from the semiconductor laser412M is not reflected by this reflective mirror413M. The other reflective mirror413M is arranged so as to slope approximately 45° relative to the flat plates configuring the substantially L-shaped slitted plate413S that is disposed in opposition to the semiconductor laser412C. This other reflective mirror413M is shaped to reflect light emitted from the semiconductor laser412C at an angle of approximately 90°. This reflective mirror413M is also offset in the Z-axis direction from the path of light emitted from the semiconductor laser412K so that light emitted from the semiconductor laser412K is not reflected by this reflective mirror413M. The cylindrical lenses413Z are formed of a resin material through injection molding. The cylindrical lenses413Z are arranged to confront slitted plates413S with a gap of a prescribed distance. The surfaces of the cylindrical lenses413Z that face the slitted plates413S are cylindrical incidence surfaces on which beams passing through the slitted plates413S are incident. The surfaces of the cylindrical lenses413Z facing the polygon mirror414are flat exit surfaces from which light incident on the incidence surfaces exits. By rotating at a high speed, the polygon mirror414deflects beams passing through the cylindrical lenses413Z. 
The scanning optical systems415form, by the N beams deflected by the polygon mirror414, N beam spots (N spot images) on the photosensitive member51at positions separated by M scan lines (where M is a natural number larger than or equal to 2). In this example, N is 2, and light emitted from the N (=2) laser light-emitting members E1and E2(FIG.4) of the semiconductor laser412M is converted to N beams by the coupling lenses413L, and these N beams are deflected by the polygon mirror414. That is, as shown inFIG.5, scan lines S11-S16are arranged at regular intervals Pi, a center of each beam spot scans along a scan line on the photosensitive member51, and a distance between centers of the N (=2) beam spots is M×Pi. In other words, the regular interval Pi is the distance between two successive scan lines. In the present description, an expression such as "separated by M scan lines" may indicate a shift of M scan lines. More specifically, such an expression may indicate a center-to-center distance equal to M×Pi when the items in question have a size in the sub scanning direction (such as beam spots), or simply a distance equal to M×Pi when the items have no size in the sub scanning direction (such as scan lines). In a case where N is 3 or more, any closest two beam spots among the N beam spots on the photosensitive member51may be separated by M scan lines (a distance M×Pi). That is, a center-to-center distance of any two closest beam spots among the N beam spots may be the distance M×Pi. FIG.5illustrates interlaced scanning with the scanning optical device41shown inFIG.3and how images IE1and IE2formed by the respective laser light-emitting members E1and E2move over the scanned surface of the photosensitive member51.
As shown inFIG.5, N scan lines at positions being separated from each other in the sub scanning direction by M scan lines (where N=2 and M=3) are exposed by light emitted from the laser light-emitting members E1and E2. In the example ofFIG.5, scan lines S11and S14are separated from each other in the sub scanning direction by M scan lines. Here, M denotes the number of intervals Pi between two scan lines S11and S14. For example, when starting from scan line S11and counting the number of lines to scan line S14, M denotes the number of scan lines S12-S14while excluding scan line S11which is the starting point. Further, the image IE1of the laser light-emitting member E1and the image IE2of the laser light-emitting member E2are scanned at a prescribed time in the main scanning direction along the respective scan lines S11and S14. The distance in the sub scanning direction between the centers of the image IE1and image IE2is equivalent to a pitch P of beams irradiated from the semiconductor laser412M. The pitch P is 3 (=M) times a distance Pi between neighboring scan lines among the scan lines S11, S12, S13, . . . . In the meantime, the photosensitive member51is driven to rotate by a motor (not shown). Each time after the images IE1and IE2are scanned once in the main scanning direction, the scanned surface of the photosensitive member51scanned by the set of beams is moved a distance Pm in the sub scanning direction. This distance Pm is twice the distance Pi. Hence, after the images IE1and IE2are scanned along the scan lines S11and S14, for example, the images IE1and IE2are subsequently scanned along the scan lines S13and S16, as depicted by one-dot chain lines, shifted two scan lines from the scan lines S11and S14. At the next timing, the images IE1and IE2are then scanned along scan lines S15and S18shifted another two lines. The same scanning pattern is repeated thereafter. 
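The scan pattern just described (N=2 beam spots at a pitch of M=3 scan lines, with the scanned surface advancing Pm=2×Pi between passes) can be sketched numerically. Line indices here are 0-based, so lines 0 and 3 correspond to scan lines S11and S14; the function name and parameters are illustrative, not from the patent:

```python
def interlaced_exposures(n_beams=2, m_pitch=3, advance=2, passes=4):
    """Return the scan-line indices exposed on each pass of an
    interlaced scan (sketch; 0-based indices).

    On pass p the N beam spots sit on lines p*advance, p*advance + M,
    ..., and the scanned surface then advances by `advance` lines
    before the next pass.
    """
    return [[p * advance + b * m_pitch for b in range(n_beams)]
            for p in range(passes)]

# Reproduces the sequence in the text: S11/S14, then S13/S16, then S15/S18:
# interlaced_exposures() -> [[0, 3], [2, 5], [4, 7], [6, 9]]
```

Because the per-pass advance (2 lines) and the beam pitch (3 lines) are coprime in this example, successive passes fill in the lines skipped by earlier passes.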
As described above, for each scan of the beam group in the present embodiment, beams are scanned along a k-th scan line Skand a (k+3)-th scan line Sk+3in the sub scanning direction (where k is a natural number). In this way, scan lines covering the entire scanned surface of the photosensitive member51are sequentially exposed. The N beams of light emitted from each of the other semiconductor lasers412K,412Y, and412C are similar to the N beams of light emitted from the N laser light-emitting members E1and E2in the semiconductor laser412M. As shown inFIG.2, the scanning optical system415includes scanning optical systems415M,415K,415Y, and415C. As shown inFIG.2, in the scanning optical system415Y for yellow, light emitted from the semiconductor laser412Y is converted into a beam by the coupling lens413L, and the beam is deflected by the polygon mirror414to form a beam spot (a spot image) on the photosensitive member51. The scanning optical system415Y is provided with a scanning lens11, a mirror12that reflects a beam passing through the upper portion of the scanning lens11, and a mirror13that reflects the beam reflected by the mirror12toward the photosensitive member51. In the scanning optical system415Y, the beam passes through the upper portion of the scanning lens11, is reflected diagonally upward by the mirror12, is reflected in the negative direction of the Z-axis by the mirror13, and exits the scanning optical device41through the exit window WL2. In the scanning optical system415M for magenta, light emitted from the semiconductor laser412M is converted to a beam by the coupling lens413L, and the beam is deflected by the polygon mirror414to form a beam spot (a spot image) on the corresponding photosensitive member51. The scanning optical system415M is disposed between the polygon mirror414and the scanning optical system415Y. 
The scanning optical system 415M is provided with two mirrors 14 and 15 for reflecting a beam that passes through the lower portion of the scanning lens 11, and a mirror 16 that reflects the beam reflected off the mirror 15 toward the photosensitive member 51. In the scanning optical system 415M, a beam passing through the lower portion of the scanning lens 11 is reflected upward by the mirror 14, is reflected in the positive direction of the X-axis by the mirror 15, is reflected in the negative direction of the Z-axis by the mirror 16, and exits the scanning optical device 41 through an exit window WL2. As shown in FIG. 2, the scanning optical systems 415 have left-right symmetry about the polygon mirror 414. Consequently, the structure of the scanning optical system 415C is similar to the structure of the scanning optical system 415M, and the structure of the scanning optical system 415K is similar to the structure of the scanning optical system 415Y.

Structure of the Process Members 42

As shown in FIG. 1, a plurality of the process members 42 is provided to correspond with the plurality of toner colors. In other words, there are four process members 42 that include a yellow process member 42Y, a magenta process member 42M, a cyan process member 42C, and a black process member 42K. The process members 42 are arranged in parallel and are spaced apart from each other in the X-axis direction. Each process member 42 is provided with a photosensitive member 51, a charger 52, and a developing cartridge 53. The photosensitive member 51 has a cylindrical shape, and its top layer is a positively charged photosensitive layer formed of polycarbonate or the like. The charger 52 may be a positive-charging scorotron charger provided with a wire and grid that produce a corona discharge when a charging bias is applied. The charger 52 is disposed on the positive side of the corresponding photosensitive member 51 along the X-axis and confronts the photosensitive member 51 from a distance without contacting the photosensitive member 51.
The developing cartridge 53 is provided with a developing roller 56, a supply roller 57, and a thickness-regulating blade 58. The upper portion of the housing constituting the developing cartridge 53 forms a toner chamber 55 for accommodating toner in the corresponding color. During image formation, toner of the color accommodated in the toner chamber 55 of each process member 42 is supplied onto the corresponding supply roller 57, which rotates to supply the toner to the corresponding developing roller 56. At this time, the toner is positively tribocharged between the supply roller 57 and the developing roller 56, to which a developing bias is applied. The toner supplied onto the developing roller 56 passes between the thickness-regulating blade 58 and the developing roller 56 as the developing roller 56 rotates, and a thin toner layer of uniform thickness is carried on the developing roller 56. In the meantime, the charger 52 generates a corona discharge when a charging bias is applied and uniformly charges the surface of the photosensitive member 51 with positive polarity. After the charger 52 has positively and uniformly charged the surface of the photosensitive member 51 as the photosensitive member 51 rotates, the surface of the photosensitive member 51 is exposed to beams exiting the corresponding exit window WL2 formed in the scanning optical device 41. The beams are scanned according to line data described later, forming an electrostatic latent image for each color in accordance with the image to be formed on the sheet S1. As the photosensitive member 51 rotates further, positively charged toner carried on the surface of the developing roller 56 is brought into contact with the photosensitive member 51 by the rotation of the developing roller 56. At this time, toner is supplied to areas on the surface of the positively charged photosensitive member 51 whose potential was lowered when exposed to the laser beams.
The toner develops the latent image on the photosensitive member 51 into a visible image through reverse development, producing a toner image on the surface of the photosensitive member 51 for each color. The transfer member 43 is disposed in the main casing 2 above the sheet cassette 31 and extends along the X-axis beneath the process members 42. The transfer member 43 is provided with a drive roller 59, a follow roller 60, a conveying belt 61, and transfer rollers 62. The conveying belt 61 is an endless belt member that is wrapped around the drive roller 59 and the follow roller 60. The follow roller 60 rotates along with the rotation of the drive roller 59 as the conveying belt 61 circulates in the direction indicated by arrows A in FIG. 1. The transfer rollers 62 transfer toner from the corresponding photosensitive members 51 onto a sheet S1 being conveyed on the conveying belt 61. After toner has been transferred onto the sheet S1 by the transfer member 43, the fixing member 44 fixes the toner to the sheet S1. The discharging member 5 is provided with conveying rollers 70, a discharge path 71, discharge rollers 72, and a discharge tray 73. After toner has been fixed to the sheet S1 by the fixing member 44, the conveying rollers 70 convey the sheet S1 onto the discharge path 71. The upstream end of the discharge path 71 is adjacent to the conveying rollers 70, while the downstream end is adjacent to the discharge rollers 72. The discharge path 71 forms a conveying path along which the sheet S1 is conveyed from the conveying rollers 70 to the discharge rollers 72. The discharge rollers 72 discharge the sheet S1 into the discharge tray 73. The discharge tray 73 is formed on the top surface of the main casing 2 as a sloped surface that slopes downward in the positive direction of the X-axis.

Structure of an ASIC 6

FIG. 6 is a block diagram showing the structure of an ASIC 6 provided in the image forming device 1 depicted in FIG. 1.
As shown in FIG. 6, the image forming device 1 is provided with the ASIC 6, as an example of the integrated circuit, and a page memory 7. The ASIC 6 is provided with a memory control circuit 81, a memory circuit 82, a central processing unit (CPU) 83, N output buffers 84, and an internal bus B1. In other words, the memory control circuit 81, the memory circuit 82, the CPU 83, the N output buffers 84, and the internal bus B1 are provided in the ASIC 6. The following description will focus on the semiconductor laser 412M among the semiconductor lasers 412M, 412K, 412Y, and 412C. However, the description for the semiconductor laser 412M may also be applied to the other semiconductor lasers 412K, 412Y, and 412C. The page memory 7 stores raster image data. The image forming device 1 generates this raster image data based on print data that the image forming device 1 received from an external device. The page memory 7 is provided externally to the ASIC 6. The memory control circuit 81 reads raster image data from the page memory 7 and outputs this data via the internal bus B1 to a direct memory access (DMA) controller 86. The internal bus B1 is connected to the memory control circuit 81, the CPU 83, and the DMA controller 86. The memory circuit 82 is provided with a memory controller 85, a line memory 89, and a data processing circuit 90. In other words, the memory controller 85, the line memory 89, and the data processing circuit 90 are provided in the memory circuit 82. Integrating the line memory 89 and the memory controller 85 in the ASIC 6 can reduce the time required to transfer information between the line memory 89 and the memory controller 85, thereby speeding up processing with the memory controller 85. The memory controller 85 is provided with the DMA controller 86, a write circuit 87, and a read circuit 88. The DMA controller 86 transfers raster image data outputted from the memory control circuit 81 to the write circuit 87.
The write circuit 87 generates a set of line data from the raster image data received from the DMA controller 86 and writes this set of line data to the line memory 89. The set of line data is data for pixels corresponding to a scan line on the photosensitive member 51. The line memory 89 stores this set of line data. The read circuit 88 reads a set of line data from the line memory 89 and outputs this set of line data to the data processing circuit 90. The data processing circuit 90 stores the set of line data received from the read circuit 88 in the N output buffers 84. The read circuit 88 also outputs sets of line data to the N laser light-emitting members E1 and E2 of the semiconductor laser 412M via the data processing circuit 90 and the N output buffers 84. The N output buffers 84 correspond to the N laser light-emitting members E1 and E2 of the semiconductor laser 412M. The output buffers 84 are first-in, first-out (FIFO) memories, for example. In the embodiment, the page memory 7 is dynamic random-access memory (DRAM), while the line memory 89 is static random-access memory (SRAM). By configuring the line memory 89 of SRAM, which performs storage processes more quickly than DRAM, the process for storing sets of line data in the line memory 89 can be completed within a single scan. As shown in FIG. 3, the BD sensor 8 is positioned so that a beam reflected off the beam deflecting surfaces 414S is incident on the BD sensor 8 in a state in which the beam deflecting surface 414S forms a prescribed angle relative to the irradiation direction of the beam, before exposure is performed according to the set of line data. The BD sensor 8 detects the beam deflected by the polygon mirror 414. The BD sensor 8 outputs a detection signal to the DMA controller 86 that is at a low level at timings when a beam is not incident on the BD sensor 8 and at a high level at timings when a beam is incident on the BD sensor 8. The DMA controller 86 transfers detection signals outputted from the BD sensor 8 to the write circuit 87.
Process of the Memory Controller 85

FIG. 7 shows the process performed by the memory controller 85 provided in the ASIC 6 of FIG. 6. A1-A12 on the left side of the table in FIG. 7 denote addresses in the line memory 89. Each pair of columns to the right of the addresses indicates the process for one scan. In each pair, the left column describes a writing process and the right column describes a reading process. The line memory 89 is assumed to include addresses A1-A12 and storage areas identified by the addresses A1-A12 in this example. In the process of the memory controller 85, the addresses A1-A12 are used cyclically, and thus the address A1 is treated as the address following the address A12 in the line memory 89. A memory area identified by each of the addresses A1-A12 stores a set of line data. Further, the process of the memory controller 85 described in FIG. 7 assumes that N=2 and M=3, where N is the number of laser light-emitting members E1 and E2 in the semiconductor laser 412M, and M is the number of lines separating the scan lines along which the closest beam spots are scanned on the photosensitive member 51 at a time. Sets of line data stored in the addresses A1-A10 are assumed to be sets of line data for ten neighboring scan lines on the photosensitive member 51. In this description, neighboring scan lines indicate scan lines corresponding to lines arranged successively in the raster image data (print data), and thus are scan lines successively arranged at the regular interval Pi on the photosensitive member 51. Each set of line data is generated on the basis of the raster image data. Sets of line data for the ten neighboring scan lines next to the current ten scan lines are overwritten in the storage areas identified by the addresses A1-A10 when performing subsequent operations (subsequent scans). The storage areas of the addresses A11 and A12 are supplementary areas for consistently achieving the six cycles shown in FIG. 7.
Scanning on the photosensitive member 51 is not performed by using the sets of line data stored in the storage areas of the addresses A11 and A12. The successive addresses of the line memory 89 correspond to respective ones of successive scan lines on the photosensitive member 51. Since scanning is not performed based on the sets of data at the addresses A11 and A12, the addresses A11 and A12 essentially correspond to no scan lines on the photosensitive member 51. In this example, for one scan, the write circuit 87 writes N (=2) sets of line data for N neighboring scan lines to the line memory 89 at a time from the page memory 7. This writing of sets of line data is performed in the sequential order in which lines are arranged in the raster image data. On the other hand, the read circuit 88 repeatedly reads N (=2) sets of line data for N scan lines separated by M (=3) scan lines from the line memory 89 to the N output buffers 84. For example, two sets of line data for two scan lines separated by 3 (=M) scan lines are stored in two storage areas identified by two addresses separated by 3 (=M) addresses in the line memory 89. In this case, the order of reading is such that the read circuit 88 reads the next N sets of line data for scan lines shifted by Pm from the present pair of N scan lines. Prior to the image forming device 1 performing a printing process on the sheet S1, the write circuit 87 initializes the line memory 89, i.e., sets all line data in the line memory 89 to blank data. For each cycle of the detection signal outputted by the BD sensor 8, the memory controller 85 executes the following process for one scan. That is, the memory controller 85 begins the process for one scan described below after the write circuit 87 recognizes that the detection signal from the BD sensor 8 is at the high level.
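The cyclic address arithmetic described above can be illustrated with a small helper. This is an assumed model for illustration only: addresses A1-A12 are represented as the integers 1-12, with A1 treated as the address following A12.

```python
# Sketch (assumption, not the patent's circuitry): cyclic address
# arithmetic for a line memory with addresses A1-A12, where A1 is
# treated as the address following A12.
NUM_ADDR = 12
M = 3

def next_addr(a, step=1):
    """Address reached from A<a> after <step> positions, wrapping A12 -> A1."""
    return (a - 1 + step) % NUM_ADDR + 1

def read_pair(start):
    """Two read addresses separated by M positions, starting from A<start>."""
    return (start, next_addr(start, M))

assert next_addr(12) == 1          # A1 follows A12
assert read_pair(11) == (11, 2)    # e.g. a read of A11 pairs with A2
assert read_pair(1) == (1, 4)      # e.g. a read of A1 pairs with A4
```

The wrap-around is why the difference between the addresses A2 and A11 is "essentially 3 (=M)" even though 2 − 11 is negative.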
As shown in FIG. 7, in the first scan the write circuit 87 executes a process W1 to write two sets of line data from the page memory 7 for N neighboring scan lines on the photosensitive member 51 to the storage areas identified by N neighboring (successive) addresses A3 and A4 in the line memory 89. Further, the read circuit 88 selects the addresses A11 and A2 from the line memory 89 storing two sets of line data for N scan lines on the photosensitive member 51 that are separated from each other by M scan lines. Further, the read circuit 88 executes a process R1 to output two sets of line data read from the storage areas identified by the selected addresses A11 and A2 to the N laser light-emitting members E1 and E2 of the semiconductor laser 412M. The addresses A11 and A2 are separated in the line memory 89 by M lines' worth of addresses, with the address A11 serving as the starting point. That is, since the addresses are used cyclically and thus the address A1 follows the address A12, the difference between the addresses A2 and A11 is essentially 3 (=M). The storage area identified by the address A2 stores a set of line data written by the write circuit 87 in the scan prior to the first scan, while the address A11 stores blank data. Consequently, the scanning optical device 41 scans the line on the photosensitive member 51 corresponding to the address A2 but does not scan the line on the photosensitive member 51 corresponding to the address A11. In the process R1, the read circuit 88 writes the two sets of line data corresponding to N scan lines read from the storage areas identified by the selected addresses A11 and A2 to the N output buffers 84 through the data processing circuit 90. The scanning optical device 41 reads the sets of line data from the N output buffers 84 and outputs these sets of line data to the N laser light-emitting members E1 and E2 of the semiconductor laser 412M.
Configuring the read circuit 88 to write sets of line data to the output buffers 84 enables the scanning optical device 41 to expose the photosensitive member 51 in synchronization with the detection signal from the BD sensor 8. The detection signal from the BD sensor 8 switches to the low level after the first scan has been started. When the detection signal returns again to the high level, the memory controller 85 begins the second scan. In the second scan, the write circuit 87 executes a process W2 to write two sets of line data from the page memory 7 for N neighboring scan lines on the photosensitive member 51 to the storage areas identified by N neighboring addresses A5 and A6 in the line memory 89. The write circuit 87 selects the addresses A5 and A6 as the N neighboring addresses chronologically (successively) following the addresses A3 and A4 for which the process W1 was executed in the first scan. The N neighboring scan lines for which the sets of line data are read from the page memory 7 during the second scan are successive scan lines from the N neighboring scan lines for which the sets of line data are read from the page memory 7 during the first scan. In addition, the read circuit 88 selects the addresses A1 and A4 from the line memory 89 storing sets of line data for N scan lines on the photosensitive member 51 that are separated from each other by M scan lines. That is, the difference between the number "4" of the address "A4" and the number "1" of the address "A1" is "3 (=M)". The read circuit 88 selects the address A1 as the address that comes later chronologically (successively) from among the addresses A12 and A1, which are between the addresses A11 and A2 selected for the first scan. Further, using the selected address A1 as the starting point, the read circuit 88 selects the address A4, separated by M lines' worth of addresses from the address A1.
Next, the read circuit 88 executes a process R2 for outputting sets of line data read from the storage areas identified by the selected addresses A1 and A4 to the N laser light-emitting members E1 and E2 of the semiconductor laser 412M via the output buffers 84. The storage area identified by the address A1 stores a set of line data written by the write circuit 87 in the scan prior to the first scan, and the storage area identified by the address A4 stores a set of line data written by the write circuit 87 in the process W1 of the first scan. Hence, the scanning optical device 41 scans a line on the photosensitive member 51 that corresponds to the address A1 and scans a line on the photosensitive member 51 that corresponds to the address A4. In the third and subsequent scans, the write circuit 87 executes processes W3-W6 and the read circuit 88 executes processes R3-R6 in the same manner as the first and second scans. Through the processes R1-R6, the scanning optical device 41 scans the ten neighboring lines on the photosensitive member 51 corresponding to the addresses A1-A10. Note that in the process W5, the write circuit 87 writes blank data in the storage areas identified by the addresses A11 and A12. In this case, in the process W6 the write circuit 87 writes two sets of line data for two neighboring scan lines successive from the two neighboring scan lines related to the two sets of line data stored in the storage areas of the addresses A9 and A10 in the process W4. Alternatively, both the processes W5 and W6 may store the same two sets of line data for two neighboring scan lines successive from the two neighboring scan lines related to the two sets of line data stored in the storage areas of the addresses A9 and A10 in the process W4.
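The write and read schedule of FIG. 7 can be reproduced with a short simulation. This is a hedged sketch under assumed indexing: per scan, the write cursor fills two successive addresses starting at A3, the read cursor reads two addresses separated by M=3 starting at A11, and both cursors advance Pm=2 addresses per scan, wrapping from A12 back to A1.

```python
# Sketch of the FIG. 7 schedule (assumed indexing, illustrative only):
# write two successive addresses and read two addresses separated by
# M = 3, with both cursors advancing Pm = 2 addresses per scan and
# wrapping from A12 back to A1.
NUM_ADDR, M, PM = 12, 3, 2

def wrap(a):
    return (a - 1) % NUM_ADDR + 1

writes, reads = [], []
w, r = 3, 11                           # W1 writes A3, A4; R1 reads A11, A2
for scan in range(6):                  # processes W1-W6 / R1-R6
    writes.append((wrap(w), wrap(w + 1)))
    reads.append((wrap(r), wrap(r + M)))
    w += PM
    r += PM

assert writes[0] == (3, 4) and reads[0] == (11, 2)   # W1 / R1
assert reads[1] == (1, 4)                            # R2
# Across R1-R6, each of A1-A12 is read exactly once (A11 and A12
# hold blank data and therefore produce no exposure):
read_addrs = [a for pair in reads for a in pair]
assert sorted(read_addrs) == list(range(1, 13))
```

This mirrors the text's conclusion: the ten lines corresponding to A1-A10 are each scanned exactly once over the six cycles, while A11 and A12 are read but cause no scanning.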
In this case, the data processing circuit 90 sets the sets of line data read by the read circuit 88 from the addresses A11 and A12 to zero in the processes R1 and R6 so that the scanning optical device 41 does not scan lines on the photosensitive member 51 corresponding to the addresses A11 and A12. Thus, the addresses A11 and A12 are not subject to scanning by the scanning optical device 41. When performing interlaced scanning as described above, the image forming device 1 selects addresses in the line memory 89 storing sets of line data for N scan lines separated from each other by M scan lines. Accordingly, the image forming device 1 can efficiently expose the photosensitive member 51 in one scan based on sets of line data for N scan lines by not reading from the line memory 89 sets of line data that are not to be outputted to the laser light-emitting members E1 and E2. In other words, the image forming device 1 can reduce access time to the page memory 7 compared to a device that reads sets of line data from the line memory 89 while discarding unnecessary data from the sets of line data until necessary data is obtained.

Second Embodiment

Next, a second embodiment of the present disclosure will be described. FIG. 8 illustrates the process of the memory controller 85 provided in the image forming device 1 according to the second embodiment. In the second embodiment, the content of the processes executed by the memory controller 85 and the data processing circuit 90 differs from that in the first embodiment. A1-A16 on the left side of the table in FIG. 8 denote addresses in the line memory 89. Each pair of columns to the right of the addresses indicates the process for one scan. The line memory 89 is assumed to include addresses A1-A16 and their storage areas. In the process of the memory controller 85, the address A1 is treated as the address following the address A16 in the line memory 89. Further, the process of the memory controller 85 described in FIG. 8 assumes that N=2 and M=3.
Sets of line data stored in the storage areas identified by the addresses A1-A12 are assumed to be sets of line data for six neighboring scan lines on the photosensitive member 51. In other words, one scan line on the photosensitive member 51 corresponds to two addresses in the line memory 89. A set of line data for one full line is divided into two sets of line data (partial data) generated from the raster image data, and the divided two sets of line data are stored in the storage areas identified by two successive addresses. Scanning on the photosensitive member 51 is not performed by using sets of line data stored in the storage areas of the addresses A13-A16. As shown in FIG. 8, in the first scan the write circuit 87 executes a process W1 to write sets of line data from the page memory 7 for N neighboring scan lines on the photosensitive member 51 to storage areas identified by 2N neighboring addresses A5-A8 in the line memory 89. Here, the addresses A5 and A6 correspond to one of the N neighboring scan lines on the photosensitive member 51, and the addresses A7 and A8 correspond to the other of the N neighboring scan lines on the photosensitive member 51. Further, the read circuit 88 selects the addresses A13, A14, A3, and A4 from the line memory 89 storing sets of line data for N scan lines on the photosensitive member 51 that are separated from each other by M scan lines. Next, the read circuit 88 executes a process R1 to output sets of line data read from the storage areas identified by the selected addresses A13, A14, A3, and A4 to the laser light-emitting members E1 and E2 of the semiconductor laser 412M via the output buffers 84. In other words, the read circuit 88 outputs the sets of line data read from the storage areas identified by the addresses A13 and A14 to the laser light-emitting member E1 of the semiconductor laser 412M via the output buffers 84.
Similarly, the read circuit 88 outputs the sets of line data read from the storage areas identified by the addresses A3 and A4 to the laser light-emitting member E2 of the semiconductor laser 412M via the output buffers 84. More specifically, the read circuit 88 outputs the sets of line data read from the storage areas identified by the addresses A13 and A14 to the data processing circuit 90. The data processing circuit 90 converts the sets of line data from the storage areas identified by the addresses A13 and A14 outputted from the read circuit 88 into a set of line data for one scan line. That is, the data processing circuit 90 converts the sets of line data stored in storage areas identified by a plurality of neighboring addresses in the line memory 89 selected by the read circuit 88 into a set of (full) line data for a single scan line. The data processing circuit 90 outputs the converted set of line data to the laser light-emitting member E1 of the semiconductor laser 412M via the output buffers 84. Additionally, the read circuit 88 outputs sets of line data read from the storage areas identified by the addresses A3 and A4 to the data processing circuit 90. The data processing circuit 90 converts the sets of line data from the storage areas identified by the addresses A3 and A4 outputted from the read circuit 88 into a set of (full) line data for a single scan line. The data processing circuit 90 outputs the converted set of line data to the laser light-emitting member E2 of the semiconductor laser 412M via the output buffers 84. The addresses A13 and A3 are separated from each other in the line memory 89 by 2M addresses, which correspond to M neighboring scan lines on the photosensitive member 51, with the address A13 serving as the starting point. The addresses A3 and A4 identify the storage areas storing sets of line data written by the write circuit 87 in a single scan prior to the first scan, while the storage areas identified by the addresses A13 and A14 store blank data.
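The conversion of two partial sets of line data into one full set could take several forms, and the text above does not fix the exact pixel mapping. As a hedged illustration only, one plausible conversion is to interleave the pixels of the two partial rows, which doubles the main-scanning pixel count of the output line, consistent with the resolution conversion discussed later for this embodiment.

```python
# Hedged sketch (illustrative assumption, not the patent's specified
# conversion): combine the two partial sets of line data stored at two
# successive addresses into one full set of line data by interleaving
# their pixels, doubling the main-scanning pixel count.
def merge_partial_lines(part_a, part_b):
    """Interleave two equal-length partial rows into one full-line row."""
    assert len(part_a) == len(part_b)
    merged = []
    for a, b in zip(part_a, part_b):
        merged.extend((a, b))
    return merged

assert merge_partial_lines([1, 0, 1], [0, 1, 1]) == [1, 0, 0, 1, 1, 1]
```

The names `merge_partial_lines`, `part_a`, and `part_b` are hypothetical; the data processing circuit 90 may implement the conversion differently.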
Consequently, the scanning optical device 41 scans the lines on the photosensitive member 51 corresponding to the addresses A3 and A4 but does not scan the lines on the photosensitive member 51 corresponding to the addresses A13 and A14. The detection signal from the BD sensor 8 switches to the low level after the first scan has been started. When the detection signal returns again to the high level, the memory controller 85 begins the second scan. In the second scan, the write circuit 87 executes a process W2 to write sets of line data from the page memory 7 for N neighboring scan lines on the photosensitive member 51 to the storage areas identified by the 2N neighboring addresses A9-A12 in the line memory 89. The write circuit 87 selects the addresses A9-A12 as the 2N neighboring addresses chronologically (successively) following the addresses A5-A8 for which the process W1 was executed in the first scan. The N neighboring scan lines for which the sets of line data are read from the page memory 7 during the second scan are successive scan lines from the N neighboring scan lines for which the sets of line data are read from the page memory 7 during the first scan. In addition, the read circuit 88 selects the addresses A1, A2, A7, and A8 from the line memory 89 storing sets of line data for N scan lines on the photosensitive member 51 that are separated from each other by M scan lines. Next, the read circuit 88 executes a process R2 for outputting sets of line data read from the storage areas identified by the selected addresses A1, A2, A7, and A8 to the laser light-emitting members E1 and E2 of the semiconductor laser 412M. More specifically, the read circuit 88 outputs the sets of line data read from the storage areas identified by the addresses A1 and A2 to the data processing circuit 90. The data processing circuit 90 then converts the sets of line data from the storage areas identified by the addresses A1 and A2 outputted from the read circuit 88 into a set of (full) line data for a single scan line.
The read circuit 88 similarly outputs the sets of line data read from the storage areas identified by the addresses A7 and A8 to the data processing circuit 90. The data processing circuit 90 converts the sets of line data from the storage areas identified by the addresses A7 and A8 outputted from the read circuit 88 into a set of (full) line data for a single scan line. The storage areas identified by the addresses A1 and A2 store the sets of line data written by the write circuit 87 in the scan prior to the first scan, and the storage areas identified by the addresses A7 and A8 store the sets of line data written by the write circuit 87 in the process W1 of the first scan. Hence, the scanning optical device 41 scans lines on the photosensitive member 51 corresponding to the addresses A1, A2, A7, and A8. In the third and subsequent scans, the write circuit 87 executes processes W3 and W4 and the read circuit 88 executes processes R3 and R4 in the same manner as the first and second scans. Through the processes R1-R4, the scanning optical device 41 scans the six neighboring scan lines on the photosensitive member 51 corresponding to the addresses A1-A12. Note that in the process W3, the write circuit 87 writes blank data in the storage areas identified by the addresses A13-A16. In this case, in the process W4 the write circuit 87 writes two sets of line data for two neighboring scan lines successive from the two neighboring scan lines related to the two sets of line data stored in the storage areas of the addresses A9-A12 in the process W2. Alternatively, both the processes W3 and W4 may store the same two sets of line data for two neighboring scan lines successive from the two neighboring scan lines related to the two sets of line data stored in the storage areas of the addresses A9-A12 in the process W2.
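The FIG. 8 read schedule can also be checked with a short simulation. This is a hedged sketch under assumed indexing: each scan line occupies two successive addresses, so the read circuit selects a quadruple of addresses per scan, with the two pairs separated by 2M=6 addresses and the read cursor advancing 2·Pm=4 addresses per scan, wrapping from A16 back to A1.

```python
# Sketch of the FIG. 8 read schedule (assumed indexing, illustrative
# only): one scan line occupies two successive addresses, pairs are
# separated by 2M = 6 addresses, and the cursor advances 2 * Pm = 4
# addresses per scan, wrapping from A16 back to A1.
NUM_ADDR, M = 16, 3

def wrap(a):
    return (a - 1) % NUM_ADDR + 1

def read_quad(start):
    """Addresses read in one scan: (pair for E1, pair for E2)."""
    e1 = (wrap(start), wrap(start + 1))
    e2 = (wrap(start + 2 * M), wrap(start + 2 * M + 1))
    return e1, e2

assert read_quad(13) == ((13, 14), (3, 4))   # process R1
assert read_quad(1) == ((1, 2), (7, 8))      # process R2
assert read_quad(5) == ((5, 6), (11, 12))    # process R3
assert read_quad(9) == ((9, 10), (15, 16))   # process R4
```

Over R1-R4, every address A1-A16 is read exactly once; the pairs at A13-A16 hold blank data, matching the description of processes R1-R4 above.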
In this case, the data processing circuit 90 sets the sets of line data read by the read circuit 88 from the storage areas identified by the addresses A13 and A14 in the process R1 to zero so that the scanning optical device 41 does not scan a line on the photosensitive member 51 corresponding to the addresses A13 and A14. Similarly, the data processing circuit 90 sets the sets of line data read by the read circuit 88 from the storage areas identified by the addresses A15 and A16 in the process R4 to zero. Thus, the addresses A13-A16 are not subject to scanning by the scanning optical device 41. Further, by converting sets of line data for a plurality of lines into a set of line data for a single scan line as described above, the data processing circuit 90 performs the following process. As a specific example, the page memory 7 may store 1200×1200 dpi raster image data (resolution in the main scanning direction × resolution in the sub scanning direction; the same applies hereafter). In this case, the data processing circuit 90 converts the 1200×1200 dpi data to 2400×600 dpi or 4800×600 dpi data. This enables the scanning optical device 41 to expose the photosensitive member 51 based on data corresponding to 1200×1200 dpi, even when the exposure resolution of the scanning optical device 41 is 600 dpi in the sub scanning direction. Through the above process, the image forming device 1 according to the second embodiment converts the sets of line data for a plurality of lines stored in storage areas identified by a plurality of neighboring addresses in the line memory 89 into sets of line data for single scan lines.

Third Embodiment

Next, a third embodiment of the present disclosure will be described. FIG. 9 illustrates the process of the memory controller 85 provided in the image forming device 1 according to the third embodiment. In the third embodiment, the content of the processes executed by the memory controller 85 and the data processing circuit 90 differs from that in the first and second embodiments.
A1-A11 on the left side of the table in FIG. 9 denote addresses in the line memory 89. Each pair of columns to the right of the addresses indicates the process for one scan. The line memory 89 is assumed to include the addresses A1-A11 and their storage areas. In the process of the memory controller 85, the address A1 is treated as the address following the address A11 in the line memory 89. Further, the process of the memory controller 85 described with reference to FIG. 9 assumes that N=2 and M=3. Sets of line data stored in the storage areas identified by the addresses A1-A10 are assumed to be sets of line data for ten neighboring scan lines on the photosensitive member 51. As shown in FIG. 9, in the first scan the write circuit 87 executes a process W1 to write sets of line data from the page memory 7 for N neighboring scan lines on the photosensitive member 51 to the N neighboring addresses A1 and A2 in the line memory 89. Further, the read circuit 88 selects the addresses A5-A10 from the line memory 89 storing sets of line data which are to be used for N scan lines on the photosensitive member 51. Here, the read circuit 88 selects the addresses A6 and A9, separated from each other by M addresses, selects the addresses A5 and A7 neighboring the firstly selected address A6, and also selects the addresses A8 and A10 neighboring the firstly selected address A9, thereby selecting the addresses A5-A10. In the following processes, the sets of line data stored in the storage areas identified by the addresses A6 and A9 are used for scanning lines on the photosensitive member 51 after being corrected by referencing the sets of line data stored in the storage areas identified by the addresses A5, A7, A8, and A10. Subsequently, the read circuit 88 executes a process R1 to output line data read from the selected addresses A5-A10 to the N laser light-emitting members E1 and E2 of the semiconductor laser 412M via the output buffers 84.
Specifically, the read circuit 88 outputs sets of line data read from the storage areas identified by the selected addresses A5-A10 to the data processing circuit 90. The data processing circuit 90 sets the set of line data in the storage area identified by the address A6 as the data to be corrected among the sets of line data in the addresses A5-A7 outputted from the read circuit 88. The data processing circuit 90 references the sets of line data stored in the storage areas identified by the addresses A5 and A7 neighboring the address A6, at which the set of line data to be corrected is stored in the line memory 89. Based on the referenced sets of line data in the storage areas identified by the addresses A5 and A7, the data processing circuit 90 corrects the set of line data in the storage area identified by the address A6 to generate a set of line data for one scan line. The data processing circuit 90 outputs this generated set of line data to the laser light-emitting member E1 of the semiconductor laser 412M, for scanning a scan line corresponding to the address A6, via the output buffer 84. The data processing circuit 90 similarly sets the set of line data in the storage area identified by the address A9 as the data to be corrected among the sets of line data in the storage areas identified by the addresses A8-A10 outputted from the read circuit 88. The data processing circuit 90 references the sets of line data stored in the storage areas identified by the addresses A8 and A10 neighboring the address A9, at which the set of line data to be corrected is stored in the line memory 89. Based on the referenced sets of line data in the storage areas identified by the addresses A8 and A10, the data processing circuit 90 corrects the set of line data in the storage area identified by the address A9 to generate a set of line data for one scan line.
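A minimal sketch of the neighbor-referencing correction follows. The unsharp-mask-style formula and the gain `k` are assumptions; the embodiment only states that the target line is corrected by referencing its two neighboring lines, yielding sharper pixel data.

```python
def correct_line(target, prev_line, next_line, k=0.5):
    """Correct one scan line using the lines stored at the two neighboring
    addresses (e.g., A6 corrected from A5 and A7). The sharpening formula
    t + k*(t - mean(neighbors)) is an illustrative assumption."""
    return [
        t + k * (t - (p + n) / 2)
        for p, t, n in zip(prev_line, target, next_line)
    ]
```

When the target pixel equals the neighbor average the value is unchanged; where it differs, the difference is amplified, which is one way the corrected line can come out sharper.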
The data processing circuit 90 outputs the generated set of line data to the laser light-emitting member E2 of the semiconductor laser 412M, for scanning a scan line corresponding to the address A9, via the output buffer 84. Here, the addresses A6 and A9, which have been targeted for correction, are separated from each other in the line memory 89 by M addresses. The detection signal from the BD sensor 8 switches to the low level after the first scan has been started. When the detection signal returns again to the high level, the memory controller 85 begins the process for the second scan. In the second scan, the write circuit 87 executes a process W2 to write sets of line data from the page memory 7 for N neighboring scan lines on the photosensitive member 51 to the storage areas identified by the N neighboring addresses A3 and A4 in the line memory 89. The write circuit 87 selects the addresses A3 and A4 as the N neighboring addresses chronologically (successively) following the addresses A1 and A2 for which the process W1 was executed in the first scan. The N neighboring scan lines for which sets of line data are read from the page memory 7 during the second scan directly follow the N neighboring scan lines for which sets of line data were read from the page memory 7 during the first scan. In addition, the read circuit 88 selects the addresses A7-A11 and A1 from the line memory 89 storing sets of line data for N scan lines on the photosensitive member 51. Here, the selected addresses A7-A11 and A1 include the addresses A8 and A11 separated from each other by M addresses. The read circuit 88 selects the address A8 as the address at which the set of line data to be corrected is stored. The read circuit 88 further selects the addresses A7 and A9 neighboring the address A8 as addresses at which sets of line data serving as reference data are stored.
Additionally, the read circuit 88 selects the address A11 at which the set of line data to be corrected is stored, and selects the addresses A10 and A1 at which sets of line data serving as reference data are stored. Next, the read circuit 88 executes a process R2 to output the sets of line data read from the selected addresses A7-A11 and A1 to the N laser light-emitting members E1 and E2 of the semiconductor laser 412M via the output buffers 84. Specifically, the read circuit 88 outputs the sets of line data read from the storage areas identified by the selected addresses A7-A11 and A1 to the data processing circuit 90. The data processing circuit 90 then sets the set of line data in the storage area identified by the address A8 as the data to be corrected among the sets of line data in the addresses A7-A9 outputted from the read circuit 88. Further, the data processing circuit 90 sets the set of line data in the storage area identified by the address A11 as the data to be corrected among the sets of line data in the addresses A10, A11, and A1 outputted from the read circuit 88. Similarly to the first scan, the data processing circuit 90 corrects the sets of line data set as the data to be corrected, and outputs the corrected sets of line data for scanning scan lines corresponding to the addresses A8 and A11. In the third and subsequent scans, the write circuit 87 executes processes W3-W8 and the read circuit 88 executes processes R3-R8 in the same manner as in the first and second scans. Through the processes R1-R8, the scanning optical device 41 scans ten neighboring scan lines on the photosensitive member 51 corresponding to the addresses A1-A10. Note that the data processing circuit 90 sets the sets of line data read by the read circuit 88 from the addresses A11, A1, and A2 to zero in the process R8 so that the scanning optical device 41 does not scan lines on the photosensitive member 51 corresponding to the addresses A11, A1, and A2.
As described above, the data processing circuit 90 sets at least one set of line data, stored in at least one storage area identified by at least one of a plurality of neighboring addresses in the line memory 89 that were selected by the memory controller 85, as data to be corrected. The data processing circuit 90 also references sets of line data stored in addresses in the line memory 89 that neighbor the address at which the set of line data to be corrected is stored. Further, the data processing circuit 90 corrects the at least one set of line data to be corrected based on the referenced sets of line data to generate at least one set of line data for one scan line. By correcting the set of line data to be corrected based on sets of line data stored in neighboring other addresses in this way, the data processing circuit 90 generates at least one set of line data for one scan line. This process can produce sharper pixel data in each set of line data for one scan line.
Example of Software Implementation
The functions of the image forming device 1 may be implemented by a program that controls a computer to function as the image forming device 1 and that controls the computer to function as each control block of the image forming device 1 (and particularly each unit in the ASIC 6). In this case, the image forming device 1 is provided with a computer possessing at least one control device (e.g., a processor) and at least one storage (e.g., a memory) as the hardware required for executing the program. Each function described in the above embodiments is implemented by executing the program using this control device and storage. The program may be permanently recorded on one or a plurality of computer-readable storage media. The storage media may be provided in the image forming device 1 but need not be. In the latter case, the program may be provided to the image forming device 1 through any wired or wireless transmission medium.
In addition to this, the functions of each control block may be implemented by a quantum computer, for example. Each process described in the above embodiments may be executed through artificial intelligence (AI). In this case, the AI may be a process running on the control device or a process running on another device, such as an edge computer or a cloud server. While the invention has been described in conjunction with various example structures outlined above and illustrated in the figures, various alternatives, modifications, variations, improvements, and/or substantial equivalents, whether known or presently unforeseen, may become apparent to those having at least ordinary skill in the art. Accordingly, the example embodiments of the disclosure, as set forth above, are intended to be illustrative of the invention, and not limiting of the invention. Various changes may be made without departing from the spirit and scope of the disclosure. Therefore, the disclosure is intended to embrace all known or later-developed alternatives, modifications, variations, improvements, and/or substantial equivalents.
11861434 | DETAILED DESCRIPTION
Embodiment
[Control Configuration of Image Forming Apparatus 1]
Firstly, with reference to FIG. 1, a control configuration of the image forming apparatus 1 according to the present embodiment is described. The image forming apparatus 1 is an MFP, a printer, a digital printing apparatus for production printing, or the like. In this embodiment, an example in which the image forming apparatus 1 is an MFP for business use is mainly described. The image forming apparatus 1 includes a control unit 10, an image processing unit 11, a document reading unit 12, a document feeding unit 13, a paper feeding unit 14, a network transmitting and receiving unit 15, an operation panel unit 16, an image forming unit 17, a fax transmitting and receiving unit 18, a storage unit 19, and the like. Each unit is connected to the control unit 10, and its operation is controlled by the control unit 10. The control unit 10 is an information processing unit such as a GPP (General Purpose Processor), a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit, a processor for specific applications), or the like. The control unit 10 reads a control program stored in the ROM or HDD of the storage unit 19, develops it on the RAM, and executes it, thereby operating as each of the functional blocks to be described later. Further, the control unit 10 controls the entire apparatus according to instruction information input from an external terminal or the operation panel unit 16. The image processing unit 11 is a control calculation part such as a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), an ASIC, or the like. The image processing unit 11 performs image processing on image data. This image processing may be, for example, processing such as enlargement/reduction, density adjustment, gradation adjustment, image improvement, or the like.
Further, the image processing unit 11 stores the image read by the document reading unit 12 in the storage unit 19 as print data 200 (FIG. 2). At this time, the image processing unit 11 can also convert the print data 200 into an electronic document such as a PDF or an image data file such as a TIFF. Further, the image processing unit 11 may be able to execute at least a part of OCR (Optical Character Recognition) processing. The document reading unit 12 reads the set document. Further, the document reading unit 12 is arranged above the main body of the image forming apparatus 1. The document reading unit 12 includes a scanner, platen glass, and a document reading slit. When reading a document placed on the platen glass, the document reading unit 12 moves the scanner to a position facing the platen glass and scans the document placed on the platen glass to obtain image data. Then, the acquired image data is stored in the storage unit 19 as the print data 200 (FIG. 2). Further, the document reading unit 12 moves the scanner to a position facing the document reading slit when reading a document supplied from the document feeding unit 13. Then, the document reading unit 12 reads the document through the document reading slit in synchronization with the document transport operation by the document feeding unit 13 to acquire image data. The document reading unit 12 stores the acquired image data in the storage unit 19 as the print data 200. The document feeding unit 13 conveys the document to be read by the document reading unit 12. The document feeding unit 13 is arranged above the document reading unit 12. The document feeding unit 13 includes a document placing unit and a document transporting mechanism. The document feeding unit 13 feeds the documents placed on the document placing unit to the document reading unit 12 one by one by the document transporting mechanism. The paper feeding unit 14 feeds the recording paper placed on any of the plurality of paper trays one by one toward the image forming unit 17.
The paper feeding unit 14 is provided in the main body unit. Further, in the present embodiment, the paper feeding unit 14 includes a cassette sensor 20 for each paper tray. The cassette sensor 20 is a measuring unit that measures the amount of recording paper placed on the paper tray. In the present embodiment, the cassette sensor 20 accurately measures the height of the recording paper stack in the thickness direction by, for example, ultrasonic waves, a laser, or the like. Alternatively, as the cassette sensor 20, a measuring unit may be used that calculates the thickness of the paper from a sensor measuring the weight of the recording paper, the set paper size, the standard weight (kg) for the cassette, or the like. The network transmitting and receiving unit 15 is a network connection unit including a LAN board, a wireless transceiver, and the like for connecting to an external network. The external network of the present embodiment is, for example, a LAN (Local Area Network), a wireless LAN (Wi-Fi), a WAN (Wide Area Network), a mobile phone network, a voice telephone network, or the like. The network transmitting and receiving unit 15 transmits/receives data on a data communication line, and transmits/receives a voice signal on a voice telephone line. In the present embodiment, the image forming apparatus 1 may be connected to an external PC (Personal Computer), a smartphone, a mobile phone, a dedicated terminal, or the like (hereinafter, simply referred to as the "external terminal") via the external network. The operation panel unit 16 includes an input unit such as buttons, a touch panel, or the like, and a display unit such as an LCD (Liquid Crystal Display), an organic EL (Electro Luminescence) display, or the like. Further, the operation panel unit 16 is arranged on the front side of the image forming apparatus 1.
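The height reading from the cassette sensor 20 can be turned into a per-sheet thickness estimate as sketched below. The simple division is an assumption; the embodiment only states that the sensor measures the height of the loaded paper and that this value is later used to adjust drawing in the thickness direction.

```python
def sheet_thickness_mm(stack_height_mm, sheet_count):
    """Estimate per-sheet thickness from the cassette sensor's stack-height
    reading (an assumed calculation; the sensor itself only reports height)."""
    if sheet_count <= 0:
        raise ValueError("sheet_count must be positive")
    return stack_height_mm / sheet_count

def printed_matter_thickness_mm(stack_height_mm, sheet_count, pages):
    """Predicted thickness of a printed bundle of `pages` sheets."""
    return sheet_thickness_mm(stack_height_mm, sheet_count) * pages
```

The predicted bundle thickness is what later steps would use to scale the cross-section drawing.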
The input unit of the operation panel unit 16 includes a numeric keypad, a start button, a cancel button, an operation mode switching button, a button for instructing job execution, and the like. In the present embodiment, the operation modes of the image forming apparatus 1 include types such as copying, fax transmission, scanner, network scanner, and the like. The jobs also include types such as printing, sending, saving, recording, and the like, for the selected document. The input unit of the operation panel unit 16 can acquire instructions for various jobs of the image forming apparatus 1 from the user. The information of each user can be input or changed according to the user's instruction acquired from the operation panel unit 16. The image forming unit 17 forms an image on the recording paper from data stored in the storage unit 19, read by the document reading unit 12, or acquired from the external terminal, according to an output instruction of the user. The image forming unit 17 includes a photoconductor drum, an exposure unit, a developing unit, a transfer unit, a fixing unit, and the like. The image forming unit 17 records a toner image on the recording paper by executing an image forming process including charging, exposure, development, transfer, and fixing. Alternatively, the image forming unit 17 may include an inkjet head for business use. In this case, the ink ejected from the inkjet head records an ink image on the recording paper. The FAX transmitting and receiving unit 18 transmits/receives facsimiles. The FAX transmitting and receiving unit 18 can receive a facsimile from another FAX apparatus via a voice line, store the received data in the storage unit 19 as the print data 200 (FIG. 2), and cause the image forming unit 17 to form the image.
Further, the FAX transmitting and receiving unit 18 can convert a document read by the document reading unit 12 or network FAX data transmitted from the external terminal into image data and facsimile-transmit it to another FAX apparatus via a voice line. The storage unit 19 is a non-transitory recording medium including a semiconductor memory such as a ROM (Read Only Memory), a RAM (Random Access Memory), or the like, an HDD (Hard Disk Drive), or the like. The RAM of the storage unit 19 may keep its stored contents by a function such as self-refreshing even in a power saving state. A control program for controlling the operation of the image forming apparatus 1 is stored in the ROM or HDD of the storage unit 19. In addition, the storage unit 19 also stores each user's account settings. Further, the storage unit 19 may include an area of a storage folder for each user. Besides this, the image forming apparatus 1 may include a post-processing device that performs post-processing (after-treatment) with a stapler that binds printed matter, a cutter that cuts printed matter, and the like. Alternatively, the image forming apparatus 1 may include a printed matter transport unit that transports the printed matter to a post-processing apparatus, which is a dedicated apparatus for performing post-processing. Further, in the image forming apparatus 1, the control unit 10 and the image processing unit 11 may be integrally formed, such as a CPU with a built-in GPU, a chip-on-module package, an SOC (System On a Chip), or the like. Further, the control unit 10 and the image processing unit 11 may have a built-in RAM, ROM, flash memory, or the like.
[Functional Configuration of Image Forming Apparatus 1]
Here, with reference to FIG. 2, a functional configuration of the image forming apparatus 1 is described. The control unit 10 of the image forming apparatus 1 includes a data position acquisition unit 100 and a cross-section drawing unit 110.
The storage unit 19 stores the print data 200 and the cross-section drawing data 210. The data position acquisition unit 100 acquires the print data 200 and the position of the cut line to be trimmed after printing with the print data 200. The cross-section drawing unit 110 draws a fragment of the characters and/or image that can be viewed on the cross-section after trimming. Here, the cross-section drawing unit 110 draws this fragment for each page of the print data 200 acquired by the data position acquisition unit 100. The cross-section drawing unit 110 draws, for example, a fragment of rectangular shape within a specific range from the position of the cut line acquired by the data position acquisition unit 100. At this time, the cross-section drawing unit 110 draws the fragment only when the print data 200 has a number of pages equal to or greater than a specific number of pages. In addition, in the present embodiment, the cross-section drawing unit 110 can adjust the drawing of the fragment according to the thickness of the recording paper to be printed. For this, in this embodiment, the measured value of the cassette sensor 20 is used. Further, the cross-section drawing unit 110 can improve the visibility of the character and/or image drawn on the cross-section by drawing the fragment in a darker color than a usual color. For example, in order to perform this dark-color drawing, the cross-section drawing unit 110 may adjust the dithering density to be darker, or it may increase the density adjusted in the image forming unit 17 regardless of any eco-setting, or the like. The print data 200 is various data for printing. The print data 200 may be, for example, electronic document data such as PDF (Portable Document Format) or the like, PS (PostScript) data, other vector data, bitmap data, files of various application software (hereinafter referred to as "applications"), or the like.
Alternatively, the print data 200 may be described in, for example, JDF (Job Definition Format) and/or JMF (Job Messaging Format), and may bundle the above-mentioned PDF, PS, or application data, after-printing adjustment data, account control data, or the like. In addition, the print data 200 includes a trimming setting 220. The trimming setting 220 is data indicating each attribute in printing. In the present embodiment, the trimming setting 220 includes, for example, page number data. Further, the trimming setting 220 may include data such as a cut line, register marks (trimming marks), a folding position, an imposition position, a milling process designation, and a trimming (cutting) width. Of these, the cut line indicates the position to be trimmed after printing with the print data 200. Further, in the present embodiment, the trimming setting 220 also includes post-processing setting data. In the present embodiment, the post-processing also includes processing for binding a bundle of printed sheets on which the print data 200 is printed (hereinafter, simply referred to as a "printed matter") by staples, stitches, spiral bindings, ring bindings, or other methods. In addition, the print data 200 itself may include vector or bitmap image data of the register marks, or the like. Even in this case, the trimming setting 220 may have another value set as the position of the cut line. The cross-section drawing data 210 is data of the text and/or image drawn on the cross-section for trimming when the print data 200 is printed. Further, in the case of text, the cross-section drawing data 210 may include setting data such as a font, a color, or the like. Further, in the case of an image, the cross-section drawing data 210 may include setting data of information such as the image format, including the type of bitmap or vector data, the number of colors, or the like. Further, the cross-section drawing data 210 may also include setting data of the position on the cross-section.
This setting data of the position may include the setting of the margin from the upper, lower, left, or right edge of the cross-section, left alignment, center alignment, right alignment, and the like. In addition, the cross-section drawing data 210 may be attached to or included in the print data 200, or it may be set and stored in the storage unit 19 by the printing user. Further, the cross-section drawing data 210 may be set in association with the account settings. Here, the control unit 10 of the image forming apparatus 1 is made to function as the data position acquisition unit 100 and the cross-section drawing unit 110 by executing the control program stored in the storage unit 19. Further, each part of the image forming apparatus 1 described above becomes a hardware resource for executing the image processing method according to the present disclosure. In addition, a part or any combination of the above-mentioned functional configurations may be configured in terms of hardware or circuitry by an IC, programmable logic, an FPGA (Field-Programmable Gate Array), or the like.
[Cross-Section Drawing Process by Image Forming Apparatus 1]
Next, with reference to FIGS. 3 to 5, a cross-section drawing process by the image forming apparatus 1 according to the embodiment of the present disclosure is described. In the cross-section drawing process of the present embodiment, the print data 200 is acquired. Then, for each page of the print data 200, fragments of the characters and/or image that can be viewed on the cross-section after trimming are drawn within a specific range from the position of the cut line. The print data 200 on which the fragments are drawn is formed as an image on the recording paper. In the cross-section drawing process, the control unit 10 mainly executes the program stored in the storage unit 19 in cooperation with each unit, using the hardware resources.
Hereinafter, with reference to the flowchart of FIG. 3, the details of the cross-section drawing process according to the present embodiment are described step by step.
(Step S100)
Firstly, the data position acquisition unit 100 performs the data position acquisition process. The data position acquisition unit 100 acquires the print data 200. Specifically, the data position acquisition unit 100 acquires the print data 200 from, for example, the external terminal via the network transmitting and receiving unit 15. In this embodiment, the print data 200 also includes the trimming setting 220. Further, the trimming setting 220 includes data on the position of the cut line to be trimmed after printing. Alternatively, the data position acquisition unit 100 may acquire the print data 200 from an external recording medium such as a USB memory, a flash memory card, or the like. Alternatively, the data position acquisition unit 100 may acquire the reception data, which is facsimile-received by the fax transmitting and receiving unit 18, as the print data 200. The data position acquisition unit 100 stores the acquired print data 200 in the storage unit 19.
(Step S101)
Then, the data position acquisition unit 100 performs the cross-section character image acquisition process. The data position acquisition unit 100 acquires the cross-section drawing data 210. In the present embodiment, the cross-section drawing data 210 can be acquired by a plurality of methods. For example, the data position acquisition unit 100 may allow the user to input text and/or an image by using the GUI (Graphical User Interface) of the operation panel unit 16. In this GUI, in the case of text, the font and color can be set. Further, in the case of an image, an image data file may be specified and input. Further, it may be possible to set the setting data of the cross-section drawing data 210 in the GUI. For example, on the cross-section, left alignment, center alignment, or right alignment, and a margin from the top, bottom, left, or right edge can be set.
Alternatively, the data position acquisition unit 100 can also acquire the cross-section drawing data 210 that has already been set in the storage unit 19. For example, when drawing of a watermark is specified, the data position acquisition unit 100 may acquire the data of the text and/or image of the watermark as the cross-section drawing data 210. Alternatively, the data position acquisition unit 100 may acquire data on the affiliation of the user or of the print data 200, or the like, as the cross-section drawing data 210 from the account settings stored in the storage unit 19. Alternatively, the cross-section drawing data 210 set by the external terminal together with the print data 200 may be acquired.
(Step S102)
Then, the cross-section drawing unit 110 determines whether or not the number of pages to be printed is equal to or greater than the specific number of pages. Here, if printing is performed in multiple copies, the cross-section drawing unit 110 may determine whether or not the number of printed sheets of one copy (hereinafter referred to as the "printed matter") is equal to or greater than the specific number of pages. The specific number of pages may be a number of pages having a thickness that can secure a size at which the text or image is visible. For example, as the specific number of pages, about 100 pages or more can be set as a practical number of pages. Further, the specific number of pages may be set from the information on the thickness of the recording paper acquired by the cassette sensor 20, the information on the set weight, or the like. The cross-section drawing unit 110 determines Yes if printing is performed with equal to or greater than the specific number of pages. On the contrary, the cross-section drawing unit 110 determines No if the printed matter has less than the specific number of pages. In the case of Yes, the cross-section drawing unit 110 advances the process to step S103.
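The step S102 decision can be sketched as follows. Deriving the specific number of pages from the sheet thickness is one of the options the embodiment mentions; the concrete formula and the legibility height are assumptions.

```python
import math

def min_pages_for_visibility(text_height_mm, sheet_thickness_mm):
    """Derive the specific number of pages from the stack height needed
    for the cross-section text to be legible (an assumed calculation
    based on the sensor-measured per-sheet thickness)."""
    return math.ceil(text_height_mm / sheet_thickness_mm)

def should_draw_fragments(pages_per_copy, min_pages):
    """Step S102 as a sketch: draw cross-section fragments only when one
    copy of the printed matter has at least the specific number of pages."""
    return pages_per_copy >= min_pages
```

With thinner paper more pages are required, which matches the embodiment's note that the threshold may be set from the measured paper thickness.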
In the case of No, the cross-section drawing unit 110 does not print for the cross-section and ends the cross-section drawing process according to the present embodiment. As a result, normal printing is performed.
(Step S103)
If the number of pages is equal to or greater than the specific number of pages, the cross-section drawing unit 110 performs the drawing adjustment process. The cross-section drawing unit 110 adjusts the drawing of the fragment according to the thickness of the recording paper to be printed. In the present embodiment, the height is measured by the cassette sensor 20 and the paper amount sensor of the recording paper tray, and the drawing in the thickness direction of the printed matter is adjusted. That is, in this example, the cross-section drawing unit 110 calculates where the text and/or bitmap corresponds to in the thickness direction of the printed matter. This makes it possible to draw precisely.
(Step S104)
Then, the cross-section drawing unit 110 performs the fragment drawing process. The cross-section drawing unit 110 draws a fragment of the cross-section drawing data 210 within a specific range from the position of the cut line, which is acquired by the data position acquisition unit 100, for each page of the print data 200. FIG. 4 shows an example of the text as shown on the cross-section where the printed matter 500 is trimmed. In this example of the printed matter 500, visible text is shown on the cross-section trimmed at the cut line. Here, when trimmed, the text "CONFIDENTIAL" emerges and is viewed. FIG. 5 shows an example of one page of the above-mentioned printed matter 500 including a fragment of the image printed along the cut line on one side. The cross-section drawing unit 110 draws the text and/or image applied to the trimmed position on each page of the print data 200. The cross-section drawing unit 110 draws, for example, a fragment of the text or image at the corresponding position within the specific range from the cut line of each page.
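The per-page fragment drawing of step S104 can be sketched as follows. The cross-section text is treated as a small bitmap whose rows run along the paper edge and whose vertical axis is the thickness direction; mapping each page of the stack onto the nearest bitmap row is an assumption, since the embodiment only states that the fragment is a series of rectangles corresponding to positions on the cross-section.

```python
def fragment_spans_for_page(bitmap_rows, page_index, total_pages):
    """Return the horizontal spans (start, end) of dark rectangles that this
    page must carry near the cut line so the stacked edges reproduce the
    cross-section bitmap. `bitmap_rows` is a list of strings, '#' = ink;
    the nearest-row scaling is an illustrative assumption."""
    n_rows = len(bitmap_rows)
    # Nearest bitmap row for this page's position in the thickness direction.
    row = min(n_rows - 1, page_index * n_rows // total_pages)
    spans, start = [], None
    for x, pixel in enumerate(bitmap_rows[row]):
        if pixel == '#' and start is None:
            start = x
        elif pixel != '#' and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(bitmap_rows[row])))
    return spans
```

Each returned span would then be rendered as a dark rectangle within the specific range from the cut line on that page.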
In this example, the fragment is configured as a series of rectangles that correspond to positions on the cross-section of the text and/or image. That is, the cross-section drawing unit 110 draws the fragment as a rectangle on each page after adjusting the margin and the position according to the trimming setting 220 of the printed matter. The width of the rectangle and its distance from the cut line within the specific range define the area in which identifiable text or an image is viewed after trimming. Further, the distance of this specific range can be set to, for example, about one tenth of an inch to a fraction of an inch, in consideration of the accuracy of trimming. The cross-section drawing unit 110 can draw these fragments by adding them to each page of the print data 200. At this time, the cross-section drawing unit 110 can be designated to draw the fragment in a darker color than a normal color.
(Step S105)
Next, the cross-section drawing unit 110 performs an image forming process. The cross-section drawing unit 110 causes the image forming unit 17 to form an image of the print data 200 on which the cross-section fragments are drawn. Further, the cross-section drawing unit 110 instructs the post-processing device or the post-processing apparatus to perform post-processing according to the post-processing setting of the trimming setting 220. As a result, the printed matter is post-processed. Alternatively, the cross-section drawing unit 110 may instruct the display unit of the operation panel unit 16 to indicate the trimming position of the printed matter, or the like, so that it may be cut by the user. This allows the text and/or image drawn with toner, pigment, or ink to be viewed on the trimmed cross-section. As described above, the cross-section drawing process according to the embodiment of the present disclosure is completed. As configured in this way, the following effects can be obtained.
In a typical technology, a personal printer capable of borderless printing prints a heading-like mark on the edge of a bundle of paper. However, many office image forming apparatuses and production printing printers have not been able to print to the edge of the recording paper. Specifically, in such office or production printing, the printed matter has been trimmed after printing. On the other hand, the image forming apparatus 1 according to the embodiment of the present disclosure includes the data position acquisition unit 100 that acquires the print data 200 and the position of the cut line to be trimmed after printing with the print data 200; and the cross-section drawing unit 110 that draws, for each page of the print data 200, a fragment of text and/or an image visible on the trimmed cross-section within a specific range from the position of the cut line acquired by the data position acquisition unit 100. With this configuration, the text and/or image can be visibly printed when the cross-section of the printed matter bound by staples, stitches, ring bindings, or other methods is trimmed. This allows a business or production printing apparatus to print text and/or an image identifiable on the cross-section. This can be used to identify printed matter placed on shelves for tracking or other purposes. In addition, there is no need to stamp, or to print with a special printer, for printing on the sides. Further, in the image forming apparatus 1 according to the present embodiment, the cross-section drawing unit 110 draws the fragment when printing equal to or greater than the specific number of pages. With this configuration, fragments on the cross-section are not printed for printed matter having less than the specific number of pages. Therefore, it is not necessary to switch whether or not to print the fragment one by one depending on the number of pages at the time of printing.
Further, in the image forming apparatus 1 according to the present embodiment, the cross-section drawing unit 110 adjusts drawing of the fragment according to the thickness of the recording paper to be printed. With this configuration, the text and/or the image can be drawn precisely. Therefore, the text can be printed so that it is easier to read. Further, it becomes possible to print a two-dimensional bar code, or the like, which requires a strict aspect ratio. Further, in the image forming apparatus 1 according to the present embodiment, the cross-section drawing unit 110 draws the fragment in a darker color than usual. With this configuration, the text and/or the image can be printed on the cross-section so that it is easy to see. That is, even if the fragment is image-formed with matter that does not soak into the recording paper well, such as toner, the text and/or the image drawn on the cross-section remains easy to see. [Other Embodiments] In addition, in the printed matter 500 of FIGS. 3 and 4 of the above-described embodiment, an example in which text is drawn on the cross-section is described. However, as described above, the cross-section drawing data 210 may be data including an image such as a bitmap, other than text, or the like. Element "A" in FIG. 6 shows an example of cross-section drawing data 210 of such an image. Element "B" in FIG. 6 shows an example in which this cross-section drawing data 210 is printed on the printed matter 501. The fragments corresponding to the pixel lines of the bitmap image are drawn on each page. The cross-section drawing data 210 is adjusted, scaled, and drawn as necessary. This makes it possible to print the bitmap image so that it can be viewed. In the above-described embodiment, an example of adjusting the drawing according to the height of the recording paper in drawing the text and/or the image has been described. However, it is not necessary to adjust the drawing by using the height value of the recording paper.
Further, the size and resolution of the text and/or the image drawn on the cross-section may be changed depending on the number of pages. That is, when the number of pages is large, a larger text and/or image can be drawn. Alternatively, when the number of pages is small, the text and/or image may be reduced. Even in this case, if the number of pages is so small that the text and/or image cannot be recognized, it need not be printed. Furthermore, when the number of pages is large, the processing load of drawing the fragment may be reduced by lowering the resolution in the height (thickness) direction of the printed matter. Further, in the above-described embodiment, although the text and/or the image is drawn on one cross-section, the text and/or the image may be drawn on a plurality of cross-sections so as to be viewable. In this case, the text and/or the image may be different for each cross-section. With this configuration, for example, an index on the side surface of the pages and text and/or an image on the upper surface may both be drawn. Further, in another embodiment of the present disclosure, the cross-section drawing unit 110 may draw, at positions outside the specific range, a display that is different from that viewed on the cross-section after trimming at the cut line. That is, the fragment may be drawn so that a different text and/or image is visible if the trimming is made at a position outside of the specific range. FIG. 7 shows an example of printed matter 502 in which, in this way, the text "ILLEGAL COPY" is visibly drawn if a position other than the cut line is cut. FIG. 8 shows an example of a page of this printed matter 502. In the case of this example, if trimming is performed within the specific range of the cut line, the same "CONFIDENTIAL" as in the above example is drawn. However, if trimming is performed on either side outside the specific range from the cut line, "ILLEGAL COPY", indicating that the copy is illegal, is viewed.
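The adjustment of the cross-section drawing data 210 to the number of pages can be sketched as a nearest-neighbour resampling of the bitmap's rows, so that each printed page carries exactly one row of the image. This is a hedged sketch; the function and the nearest-neighbour strategy are assumptions for illustration, not taken from the disclosure.

```python
def scale_rows_to_pages(bitmap, page_count):
    """Resample the bitmap's rows to exactly `page_count` rows (nearest neighbour).

    With a large page count, rows are repeated (a taller, coarser image on the
    cross-section); with a small page count, rows are dropped (reduced image).
    """
    src_rows = len(bitmap)
    if page_count <= 0 or src_rows == 0:
        return []
    # Map each destination page i back to a source row index.
    return [bitmap[min(src_rows - 1, (i * src_rows) // page_count)]
            for i in range(page_count)]
```

Page `i` of the print job would then receive the fragments generated from row `i` of the resampled bitmap.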
With this configuration, a printed matter that conveys an intention at the trimmed position can be provided. For example, when the image is printed by an image forming apparatus other than the one in which the trimming position is set, it can be identified and recognized as an illegal copy, or the like. As a result, security can be enhanced. Further, by setting a position different from the register mark of the print data 200 as the actual cut line, security can be further enhanced. Further, in the above-described embodiment, an example of drawing the fragment based on the cross-section drawing data 210 on each page of the print data 200 has been described. However, this fragment may be PDL data or bitmap image data (hereinafter referred to as "fragment data") different from the print data 200. In this case, the fragment data may be configured to be superimposed on each page of the print data 200 at the time of image formation. This superimposition can be performed in the same manner as the superimposition of form printing of variable printing or watermark printing. With this configuration, the fragment can be drawn without changing the print data 200. Further, the drawing of the fragment can be easily changed according to the setting, and various configurations can be supported. Further, in the above-described embodiment, an example in which the cross-section drawing process is performed by the image forming apparatus 1 has been described. With reference to FIG. 9, however, it is also possible to use the external terminal 2 as an image processing apparatus to perform the cross-section drawing process and transmit the print data 200 on which the cross-section is drawn to the image forming apparatus 1b for printing. For example, the cross-section drawing according to the above-described embodiment may be executed by the printer driver of the external terminal 2.
In this case, the printer driver is used to set the printing of the cross-section drawing data 210 on the external terminal 2. In this example, a text and/or an image may be input via the GUI of the printer driver of the image forming apparatus 1b during printing. Furthermore, in the GUI of the printer driver, the font, the bitmap file, the position, and the like may be set as in the case of being set by the image forming apparatus 1 according to the above-described embodiment. This bitmap file can be loaded, converted, or the like, by the printer driver. Alternatively, at this time as well, a setting may be made to draw a cross-section similarly to a watermark. In addition, instead of the external terminal 2, a management server that manages printing for pull-printing, or the like, can be used as an image processing apparatus to perform the same cross-section drawing process. In such an example, the user selects a print job from the list of jobs on the management server via the external terminal or the operation panel unit of the image forming apparatus. Then, various settings including text and/or image settings similar to those described above can be performed. Here, the management server may also load the bitmap file from the external terminal, or the like. Further, in the above-described embodiment, an example in which the cross-section drawing process is performed by the image forming apparatus 1 for business use is described. However, a server for print management of industrial printing (production printing) may be used as an image processing apparatus according to another embodiment of the present disclosure to perform the cross-section drawing process. With reference to FIG. 7, an example of such a production printing system X is described.
The production printing system X of this example includes a server 3, an offset printing apparatus 4a, a digital printing apparatus 4b, a post-processing apparatus 4c, and an administrator terminal 6, and each apparatus is connected via the network 5. The server 3 is a server for managing the workflow of production printing. The server 3 is a PC server, a dedicated machine, a general-purpose machine, or the like, installed on a so-called cloud or at a user's site. The server 3 gives instructions to each apparatus and transmits/receives other information. As a result, the server 3 manages the status of each apparatus, sets jobs, and the like. The offset printing apparatus 4a is an automated printing apparatus that performs offset printing for printing large amounts (many lots). The digital printing apparatus 4b is an industrial printer, or the like, that prints smaller lots than the offset printing apparatus 4a. The post-processing apparatus 4c is any of various apparatuses for performing post-processing (after-treatment) such as folding, collating, bookbinding, cutting, or the like, on the recording paper printed by the offset printing apparatus 4a or the digital printing apparatus 4b. The network 5 is a LAN, a wireless LAN, a WAN, a mobile telephone network, an industrial network, a voice telephone network, other dedicated lines, or the like. The network 5 can send and receive various commands and data to and from each apparatus. The administrator terminal 6 is the printing administrator's terminal. The administrator terminal 6 accesses the server 3 to allow the administrator to design the print, upload data, create a job, manage a prepress process, check progress, request a process, or the like. A plurality of these apparatuses may exist depending on the application, the scale of printing, and the like. Here, an example in which the server 3 performs a similar cross-section drawing process as in the above-described embodiment is described.
In this example, the user, who may be the administrator, executes design or prepress application software (hereinafter simply referred to as "application") from the administrator terminal 6 to access the server 3. At this time, a GUI for the cross-section drawing process is displayed by the application, a text and/or an image can be input, and each setting can be specified. As a result, the fragment is drawn on the print data 200 and transmitted to each apparatus at the time of printing. Further, the server 3 transmits the trimming setting 220 to the post-processing apparatus 4c to perform trimming at the designated position. Here, in production printing, the trimming position may be changed depending on the result of prepress. In this case, the drawing of the fragment may be changed according to this change. Therefore, as described above, image data for fragments may be prepared separately, included in the print data 200, and superimposed at the time of printing. Further, it goes without saying that the configuration and operation of the above-described embodiment are examples, and they can be appropriately modified and executed without departing from the gist of the present disclosure.
DETAILED DESCRIPTION Embodiments of the present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the disclosure are shown. Indeed, embodiments of the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout. In some embodiments, some of the operations above may be modified or further amplified. Furthermore, in some embodiments, additional optional operations may be included. Modifications, amplifications, or additions to the operations above may be performed in any order and in any combination. Many modifications and other embodiments of the disclosure set forth herein will come to mind to one skilled in the art to which this disclosure pertains having the benefit of the teachings presented in the foregoing description and the associated drawings. Therefore, it is to be understood that the embodiments are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims.
Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. It will be appreciated that the data types, data objects, and other data representations described herein may be embodied in any of a myriad of manners. For example, such data representations may be embodied by any of a myriad of primitive data types, custom object implementations, and/or the like, without deviating from the scope and spirit of the present disclosure. Overview Ensuring a printer continues to print at an expected location on a print media is one of many factors important to ensuring the printer is functioning as intended. One aspect of ensuring the printer continues to print at an expected location on a print media is maintaining a print precision, which defines the position at which printing of data is to begin and/or end. In circumstances where a printer does begin to print at an incorrect location (e.g., too high or too low), the incorrectly printed portion of the printed media may become completely unusable. For example, in the context of label printing, printing at an incorrect location may cause a label on a print media to be printed incomplete, with one or more portion(s) of data missing, cut off, and/or the like. If the print media is incorrectly printed, the printer may have completely wasted the processing resources and/or the like that were utilized to perform the print, as well as the actual print media upon which the data is printed (e.g., in circumstances where the print media is not reusable). One cause of a printer printing incorrectly positioned labels is slippage of the print media. The slippage may cause printing to begin at different locations for different labels on a print media based on an inconsistent force applied to the print media. For example, in label printers and/or other devices that utilize a roll of print media, slippage may occur as the size of a print job increases.
As the roll of print media is expended, the diameter of such a roll of print media decreases. A force is applied to the print media to pull the print media in the direction required for printing labels thereon and/or outputting the print media including the printed data. As the roll of print media is manipulated by the force (e.g., a spring force pulling the print media roll for printing and output), the changing dynamics of the print media may cause a shift in the print position. For example, on a new roll of print media, a pulling force may be applied that is sufficient to pull the print media when it is at its largest (e.g., highest diameter), heaviest, and/or the like. As the print job continues, the pulling force may similarly be applied to the continuously used roll of print media. The decreased and/or otherwise altered aspects of the used roll of print media (e.g., a decreased diameter) may cause the print position to become incorrect, causing labels to print with errors varying in severity. Often, printers do not have any mechanism that compensates for or otherwise manages this change in force. Referring to FIG. 2, FIG. 2 illustrates incorrectly printed labels due to incorrect print precision. Specifically, FIG. 2 depicts an example print media 200 including printable portions 202A, 202B, 202C, 202D, and 202E. In one example context, each printable portion of the print media 200 corresponds to a label on a particular roll of labels. Each of the printable portions includes data printed on the particular printable portion. For example, printable portion 202A includes text data 204A, printable portion 202B includes text data 204B, printable portion 202C includes text data 204C, printable portion 202D includes text data 204D, and printable portion 202E includes text data 204E. The text data 204A-204E may be printed by a particular printer over the course of a particular print job, which may correspond to printing of any number of labels.
For example, a printer may execute a print job of tens, hundreds, thousands, and/or more labels. In one example context, the printable portion 202A embodies a first label of a print job, and the printable portion 202B embodies a second label of the print job, whereas the printable portions 202C, 202D, and 202E may be tens, hundreds, or thousands of labels later in the print job. As the print job continues, the likelihood of errors in print position affecting said printers may increase, for example as the diameter of a roll of print media within the printer decreases due to output during printing. Additionally, the likelihood of errors in print position increases in circumstances where the printable portions of the print media are smaller in area. Each of the printable portions includes text data printed thereon that is intended to be printed at a particular position in the printable portion. For example, the text data may be intended for printing centered on a corresponding printable portion, such that a margin is maintained on each side of the text data. As illustrated, the print position may drift over time as the print job continues. The print position begins to drift downward at the printing of the printable portion 202C. The print position drifts further downward at the printing of the printable portion 202D, and even further downward at the printing of the printable portion 202E, such that at least a portion of the text is cut off. The drift in print position causes wasteful expenditure of computing resources used to print one or more printable portions that ultimately are unusable, such as the printable portions 202C, 202D, and/or 202E. Additionally, the materials of the printable portions 202C, 202D, and 202E are wasted and may need to be disposed of. At the end of a particularly long print job (e.g., printing tens, hundreds, thousands, or more, of labels), some or all of the resulting prints may be useless.
Embodiments of the present disclosure generate a print position compensation that is utilized to offset a change in print position (e.g., due to slippage resulting in drift) that occurs over time. In this regard, the print position compensation may represent an offset to be utilized during one or more print jobs to initiate printing at a corrected print position. The corrected print position may account for any drift that has occurred. By reducing and/or eliminating drift, embodiments of the present disclosure more accurately perform print jobs regardless of print job length, label size, and/or any other factors impacting drift of a print position. By performing print jobs more accurately, embodiments additionally reduce material waste that would otherwise result from failed and/or inaccurate prints due to such a print position drift. Some embodiments of the present disclosure generate a print position compensation based at least in part on one or more distances and/or timestamps usable to generate a distance, where such determinations are performed during different media movement phases—such as a media output phase and a media retraction phase. For example, some embodiments determine edge position distances between an edge and a component of the printer where printing is to occur (e.g., a print head), and utilize such edge position distances to generate a print position compensation. Alternatively or additionally, some embodiments determine media movement phase timestamp differentials for a media output phase and a media retraction phase, and utilize such timestamp differentials to determine a print position compensation. Such distances and/or timestamps may be determinable using sensor(s) present in various printers. In this regard, legacy printers may be specially configured to perform such operations without requiring alternative and/or additional hardware. 
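The arithmetic behind an edge-distance-based compensation can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, the millimeter and dot-line units, and the subtraction order are choices made for the example; the disclosure only establishes that a differential between the two phases' edge position distances yields an offset, and that a positive compensation starts printing later.

```python
def print_position_compensation(output_edge_dist_mm, retract_edge_dist_mm, dots_per_mm):
    """Convert a differential edge position distance into a dot-line offset.

    output_edge_dist_mm:  edge-to-print-head distance measured in the media
                          output phase.
    retract_edge_dist_mm: the same distance measured in the media retraction
                          phase.
    A positive result means the media effectively slipped by that many dot
    lines, so printing should begin correspondingly later to land at the
    intended position on the printable portion.
    """
    differential_mm = output_edge_dist_mm - retract_edge_dist_mm
    return round(differential_mm * dots_per_mm)
```

For example, a 0.25 mm differential at 8 dots/mm yields a compensation of 2 dot lines.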
Similarly, new printers may be specially configured to perform such determinations without reconfiguration. Definitions The term “sensor” refers to hardware, software, firmware, and/or a combination thereof, that detects a presence of a print media, a gap between portions of a print media, a black mark, and/or other determinable aspect of a portion of a print media. Non-limiting examples of a sensor include a label stop sensor, a black mark sensor, a gap sensor, a slot sensor, and/or the like. The term “print head” refers to a printer component embodied in hardware, software, and/or firmware that engages and/or otherwise interacts with a print media to print on the print media. The term “print media” refers to a physical object including any number of area(s) upon which data is printed. Non-limiting examples of a print media include a label roll, a continuous paper feed, and any other feed of printable material. The term “printable portion” refers to defined area(s) of a print media upon which data is to be printed. In some embodiments, a print media includes printable portion(s) embodying labels or other areas upon which data is to be printed, and non-printable portion(s) separating the printable portion(s), for example gaps between such printable portion(s). The term “edge position distance” refers to a determined distance between a particular edge of a portion of a print media and a print head. The term “media movement phase” refers to a state of operation of a printer during which a print media is manipulated via one or more applied force(s). The term “media output phase” refers to a particular media movement phase during which a print media is manipulated in a first direction for outputting via the printer. 
Non-limiting examples of a media output phase include a phase during which a printer is printing on a print media to output the print media including such printed data, a phase during which a print media is fed through the printer to output the print media, and/or another phase in which the print media is output with or without printing. The term "media retraction phase" refers to a particular media movement phase during which a print media is manipulated in a direction opposite the direction of output. Non-limiting examples of a media retraction phase include a phase during which a printer is retracting unprinted labels that have already passed a particular sensor, but have not been printed on during a print job. The term "print position compensation" refers to electronically managed data representing an offset distance or time value at which printing is to begin. In one example context, a positive print position compensation indicates printing is to begin a particular number of dot lines later than a determined or default position at which printing usually is to begin. The term "print operation" refers to electronically driven instructions that cause a printer to initiate a print job phase for printing particular data onto a print media. The term "print job phase" refers to a state of a printer during which data is to be printed on a print media. The term "calibration print phase" refers to a particular print job phase during which particular data is printed on a print media for use in calibrating one or more configuration(s), setting value(s), and/or other aspect(s) of the printer. For example, in some example contexts, during a calibration print phase calibration data is printed on a print media to determine a default print position at which data is to begin printing on a print media.
The term “determinable step size” refers to electronically managed data representing a unit of measurement associated with adjusting a position of a print media. In some embodiments, a determinable step size represents a particular number of dot lines, where the number is determined directly or interpreted from other detected data from a sensor (e.g., timestamp data). The term “differential edge position distance” refers to electronically managed data representing a distance difference between two edge position distances. In one example context, a differential edge position distance represents a difference between a first edge position distance associated with a first media movement phase (e.g., a media output phase) and a second edge position distance associated with a second media movement phase (e.g., a media retraction phase). The term “edge” refers to a boundary location and/or area of a printable portion of a print media. In some embodiments, an edge is associated with a plurality of edges, each having a different “edge type.” The term “edge type” refers to a determined classification and/or categorization of a particular edge based on the location of the edge with respect to the corresponding printable portion of the print media and/or a particular direction. The term “leading edge” with respect to a printable portion of a print media refers to an area and/or location of the printable portion that first passes a sensor in a media output phase. The leading edge may similarly be referred to as a “front edge” of a printable portion of a print media, such as a label. In some embodiments, a leading edge is a non-limiting example of an edge type. The term “trailing edge” with respect to a printable portion of a print media refers to an area and/or location of the printable portion that last passes a sensor in a media output phase. The trailing edge may similarly be referred to as a “back edge” of a printable portion of a print media, such as a label. 
In some embodiments, a trailing edge is a non-limiting example of an edge type. The term "objective distance" with respect to two locations refers to electronically managed data representing a known distance between the two locations. When used with respect to particular components, an objective distance refers to electronically managed data representing a known distance between the locations associated with each of the particular components. The term "boundary check" refers to any number of algorithm(s), determination(s), and/or data-driven process(es) that indicate whether a print position identified for use in performing a print job falls within a printable portion of a print media. In some embodiments, a boundary check embodies a comparison between a print position compensation and a maximum allowable compensation. The term "idle state" refers to a determined state of a printer indicating that the printer has not performed operations associated with a print job for a particular period of time. The term "edge detection event" refers to electronically managed data captured by a sensor that represents the presence of an edge within the field of view captured by the sensor. An edge detection event is detectable by the sensor and/or processing circuitry associated with the sensor. The term "event timestamp" refers to electronically managed data representing a time at which a particular event was detected. The term "media movement phase timestamp differential" refers to electronically managed data representing a determined length of time between a first event and a second event each detected during a media movement phase. The term "output phase timestamp differential" refers to a media movement phase timestamp differential determined based on a first event and a second event detected during a media output phase.
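The boundary check defined above is, at its core, a comparison of the print position compensation against a maximum allowable compensation. A small sketch follows; the clamping fallback when the check fails is an added assumption for illustration, not stated in the disclosure.

```python
def apply_boundary_check(compensation, max_allowable):
    """Return a compensation that keeps the print position inside the printable portion.

    If the proposed compensation's magnitude is within the maximum allowable
    value, it passes the boundary check and is returned unchanged; otherwise
    it is clamped to the maximum allowable magnitude (an assumed fallback).
    """
    if abs(compensation) <= max_allowable:
        return compensation
    return max_allowable if compensation > 0 else -max_allowable
```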
The term "retraction phase timestamp differential" refers to a media movement phase timestamp differential determined based on a first event and a second event detected during a media retraction phase. The term "timestamp-based distance value" refers to electronically managed data representing the distance one or more edge(s) were determined to have moved between a media output phase and a media retraction phase, based at least in part on a determined difference between an output phase timestamp differential and a retraction phase timestamp differential. The term "print speed" refers to electronically managed data representing a known and/or determined speed at which a print media of a printer is moved. Example Apparatuses of the Disclosure FIG. 1 illustrates a block diagram of a printer apparatus that may be specially configured within which embodiments of the present disclosure may operate. Specifically, FIG. 1 illustrates an example printer apparatus 100 that generates and/or utilizes a print position compensation in accordance with the present disclosure. For example, the printer apparatus 100 in some embodiments is configured to perform printing operations based at least in part on a determined print position compensation as described herein to minimize or eliminate the effects of print position drift. As illustrated, the printer apparatus 100 includes a sensor 102, sensor ADC 104, light source 106, processor 108, memory 112, print compensation circuitry 114, and print mechanisms 116. The printer apparatus 100 further includes a platen roller 118, which manipulates at least print media 120. In this regard, it will be appreciated that the various components depicted and described with respect to the printer apparatus 100 manipulate the print media 120, and/or an associated roll of print media including at least print media 120, for printing data on portion(s) of such print media via the print mechanisms 116, and outputting the print media including such printed data.
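Given the definitions above, a timestamp-based distance value follows from the two phase differentials and the print speed. A minimal sketch, assuming timestamps in seconds and print speed in mm/s (both unit choices, and the function itself, are assumptions made for the example):

```python
def timestamp_based_distance_mm(output_t0, output_t1, retract_t0, retract_t1, print_speed_mm_s):
    """Estimate media slip distance from media-movement-phase timestamps.

    output_t0/output_t1:   event timestamps for a pair of edge detection
                           events captured during the media output phase.
    retract_t0/retract_t1: timestamps for the corresponding edge detection
                           events captured during the media retraction phase.
    The difference between the output phase timestamp differential and the
    retraction phase timestamp differential, multiplied by the known print
    speed, gives the distance the edges effectively moved between phases.
    """
    output_differential = output_t1 - output_t0        # output phase timestamp differential
    retraction_differential = retract_t1 - retract_t0  # retraction phase timestamp differential
    return (output_differential - retraction_differential) * print_speed_mm_s
```

Such a distance could then be converted to dot lines and used as a print position compensation in the same way as a differential edge position distance.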
The sensor 102 includes hardware, software, firmware, and/or a combination thereof, that aids in controlling movement of print media in the printer apparatus 100. In some embodiments, the sensor 102 embodies a label stop sensor, black mark sensor, or other photoelectric sensor that aids in controlling the print media, such as by providing data indicating detected edges, movement of edges, and/or the like. The sensor 102 may detect gaps between printable portions of a print media (e.g., gaps between labels), black marks in a continuous stock, slots in a continuous stock, and/or the like. Alternatively or additionally, the sensor 102 may generate and/or capture data that is sent to the processor 108 specially configured to perform such detecting based at least in part on the received data from the sensor 102. In some embodiments, the sensor includes a sensor ADC 104 that embodies an analog-to-digital converter. The sensor ADC 104 may generate and/or output digital signals representing the data captured by the sensor 102. For example, the sensor 102 may detect and/or capture light rays projected from the light source 106 as it shines through the print media 120, such as during printing and/or retraction of the print media during execution of a print job. The light source 106 may embody one or more LED(s), laser(s), and/or device(s) that generate high-powered light in at least one direction. The sensor ADC 104 may output a digital representation of the light rays captured via the sensor 102. The print media 120 may include a plurality of printable portions on which data is to be printed. In some embodiments, each printable portion embodies a label on which data is printed via the printer apparatus 100. Additionally, the print media 120 includes a gap between a trailing edge of a printable portion and a leading edge of the next printable portion. Such gaps and/or edges may be detectable via the sensor 102 as described herein.
The print mechanisms 116 include components embodied in hardware, software, and/or firmware that facilitate printing of data onto the print media 120, feeding of print media out of the printer apparatus 100, and/or tearing or removal of one or more printable portions of the print media 120. In some embodiments, the print mechanisms 116 include a tear bar. The tear bar may be specially designed to enable tearing of printable portions from the print media 120 that have passed the tear bar, and/or peeling of printable portions from the print media 120. Additionally or alternatively, in some embodiments, the print mechanisms 116 include a print head. The print head may be specially configured to enable printing of data onto the print media 120. In some embodiments, the print head is controlled based at least in part on instructions from the processor 108 and/or the like that cause the print head to print particular data at a particular location (e.g., a dot line), and/or at multiple locations along the print media 120. In some embodiments the print head is used to print particular data at a particular position on each printable portion of the print media 120. In this regard, the print head may be activated, for example based at least in part on instructions from the processor 108, to print data at particular locations based at least in part on a print position compensation. In some embodiments, the processor 108 (and/or co-processor or any other processing circuitry assisting or otherwise associated with the processor) may be in communication with the memory 112 via a bus for passing information among components of the printer apparatus 100. In some embodiments, for example, the memory 112 is non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory 112 in some embodiments includes or embodies an electronic storage device (e.g., a computer readable storage medium).
In some embodiments, the memory112is configured to store information, data, content, applications, instructions, or the like, for enabling the printer apparatus100to carry out various functions in accordance with example embodiments of the present disclosure. The processor108may be embodied in a number of different ways. For example, in some example embodiments, the processor108includes one or more processing devices configured to perform independently. Additionally or alternatively, in some embodiments, the processor108includes one or more processor(s) configured in tandem via a bus to enable independent execution of instructions, pipelining, and/or multithreading. The use of the terms “processor” and “processing circuitry” may be understood to include a single core processor, a multi-core processor, multiple processors internal to the printer apparatus100, and/or one or more remote or “cloud” processor(s) external to the printer apparatus100. In an example embodiment, the processor108may be configured to execute instructions stored in the memory112or otherwise accessible to the processor. Alternatively or additionally, the processor108in some embodiments is configured to execute hard-coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor108may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. Alternatively or additionally, as another example in some example embodiments, when the processor108is embodied as an executor of software instructions, the instructions may specifically configure the processor108to perform the algorithms embodied in the specific operations described herein when such instructions are executed. 
As one particular example, the processor108may be configured to perform various operations associated with controlling the printing process performed by the printer apparatus100. In some embodiments, the processor108includes hardware, software, firmware, and/or a combination thereof, that controls and/or receives data from operation of the sensor102. Additionally or alternatively, in some embodiments, the processor108includes hardware, software, firmware, and/or a combination thereof, that controls the motor110, such as to cause movement of the print media120in accordance with a media movement phase (e.g., during printing, calibration, and/or the like). For example, in some embodiments the motor110is activatable to advance (e.g., feed) the platen roller118such that more of the print media120is output. Additionally or alternatively, in some embodiments the motor110is activatable to reverse the platen roller118, so as to retract the print media120. Additionally or alternatively, in some embodiments, the processor108includes hardware, software, firmware, and/or a combination thereof, that controls activation of the light source106during one or more phase(s) to produce light rays that shine through a print media, such as the print media120, during printing. Additionally or alternatively, in some embodiments, the processor108includes hardware, software, firmware, and/or a combination thereof, that controls the print mechanisms116to cause the print mechanisms116to print on, output, and/or otherwise engage or interact with the print media120. Additionally or alternatively, in some embodiments, the processor108includes hardware, software, firmware, and/or a combination thereof, that interacts with the sensor102, for example to receive as input the data captured by the sensor102, to generate a print position compensation that compensates for drift in print position.
In some embodiments, the printer apparatus100is configurable (e.g., via the processor108) to utilize any of a myriad of user-provided print media, such that the print media is not predefined by the printer apparatus100(e.g., a “mixed mode”). In some embodiments, the processor108operates using a command that is specific to a particular type of print media and/or configuration(s) of the printer apparatus100. The print compensation circuitry114includes hardware, software, firmware, and/or a combination thereof, that supports various functionality associated with generating and/or utilizing a print position compensation. The print position compensation offsets a particular drift in a print position. In some embodiments, the print compensation circuitry114includes hardware, software, firmware, and/or a combination thereof, that determines a first edge position distance during a media output phase and a second edge position distance during a media retraction phase. Additionally or alternatively, in some embodiments, the print compensation circuitry114includes hardware, software, firmware, and/or a combination thereof, that generates a print position compensation based at least in part on the first edge position distance and the second edge position distance. Additionally or alternatively, in some embodiments, the print compensation circuitry114includes hardware, software, firmware, and/or a combination thereof, that determines an output phase timestamp differential associated with a media output phase. Additionally or alternatively, in some embodiments, the print compensation circuitry114includes hardware, software, firmware, and/or a combination thereof, that determines a retraction phase timestamp differential associated with a media retraction phase. 
Additionally or alternatively, in some embodiments, the print compensation circuitry114includes hardware, software, firmware, and/or a combination thereof, that generates a print position compensation based at least in part on the output phase timestamp differential and the retraction phase timestamp differential. Additionally or alternatively, in some embodiments, the print compensation circuitry114includes hardware, software, firmware, and/or a combination thereof, that initiates a print operation based at least in part on a print position compensation. Additionally or alternatively, in some embodiments, the print compensation circuitry114includes hardware, software, firmware, and/or a combination thereof, that executes a boundary check based at least in part on a print position compensation. It will be appreciated that, in some embodiments, print compensation circuitry114may include a separate processor, specially configured field programmable gate array (FPGA), or a specially programmed application specific integrated circuit (ASIC). Additionally or alternatively, in some embodiments, the print compensation circuitry114is combined with one or more other sets of circuitry. For example, in some embodiments, the print compensation circuitry114is combined with the processor108, such that the two sets of circuitry are embodied in a single component. Similarly, in some embodiments, the print compensation circuitry114is combined such that the processor108performs one or more operations described above with respect to the print compensation circuitry114. FIG.3illustrates example sensor output in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.3illustrates an example graph300of output from a sensor, such as the sensor102. 
In some embodiments, the depicted values may represent analog values that are converted and/or output as digital values by an analog-to-digital converter associated with the corresponding sensor, for example the sensor ADC104associated with the sensor102. The graph300represents the voltage output of a sensor, such as the sensor102, taken throughout a print job. When the job begins, the sensor is activated, for example to detect an edge of a printable portion of a print media (e.g., indicated by a black mark), a gap between printable portions of a print media, and/or the like. In this regard, as the print media in front of the sensor is moved, the sensor output begins to change at different times as the print media is moved. At timestamp302, for example, the sensor is activated and outputs a baseline value associated with output from the sensor (e.g., during which a printable portion is in front of the sensor). At timestamp304, for example, the sensor output begins to rise, for example due to light that is reflecting from a trailing edge of a printable portion of the print media. The sensor output reaches a peak and subsequently subsides back to the baseline value up until the timestamp306, for example based at least in part on light reflecting from a starting edge of a next printable portion. In this regard, during the time between timestamp302and timestamp304, the sensor output indicates presence of a particular printable portion of a print media in front of the sensor (e.g., where a single label is traversed across the sensor). Further, at timestamp304, the sensor output indicates presence of a trailing edge associated with a particular printable portion of a print media (e.g., where a single label has ended and subsequent data indicates a change in the print media in front of the sensor, indicating beginning of a gap).
Further still, at timestamp306, the sensor output indicates presence of a leading edge associated with a next printable portion of a print media (e.g., where a detected gap has ended and a baseline value is again output). In this regard, it should be appreciated that the sensor output may be processed to determine one or more event(s) and/or timestamps at which such events occur. For example, based at least in part on a change in the sensor output from a baseline value to another value, an edge detection event may be detected associated with a trailing edge of a current printable portion. Additionally or alternatively, based at least in part on a change in the sensor output from a changing value back to a baseline value, an edge detection event may be detected associated with a leading edge of a new printable portion. Additionally or alternatively, upon detecting an edge, based at least in part on the sensor output at any given time, an edge detection event and/or an edge movement event (e.g., indicating movement of the edge) may be detected. It will be appreciated that the timestamp at which a particular event is detected may be identified, stored, and/or processed by the sensor itself and/or associated processing circuitry (e.g., a processor such as the processor108). It should be appreciated that this sensor output pattern, and/or the like, may repeat for any number of printable portions on a print media. In this regard, the sensor output may be repeated any number of times as the print media is moved (e.g., output or retracted) within the printer apparatus. Thus, the continuous sensor output may be utilized to detect how many printable portion(s) have passed the sensor, how much time has passed since a particular edge of a printable portion passed the sensor, and/or the like.
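The event classification described above (a rise away from the baseline indicating a trailing edge, a return to the baseline indicating a leading edge) can be sketched as follows. This is an illustrative sketch only, not an implementation from the disclosure; the function name, the `(timestamp, value)` sample format, and the noise `tolerance` parameter are assumptions made for illustration.

```python
def detect_edge_events(samples, baseline, tolerance=0.05):
    """Scan (timestamp, value) sensor samples and report edge events.

    A deviation from the baseline beyond the tolerance is treated as the
    trailing edge of the current printable portion (start of a gap); a
    return to the baseline is treated as the leading edge of the next
    printable portion (end of the gap).
    """
    events = []
    in_gap = False
    for timestamp, value in samples:
        deviates = abs(value - baseline) > tolerance
        if deviates and not in_gap:
            events.append(("trailing_edge", timestamp))
            in_gap = True
        elif not deviates and in_gap:
            events.append(("leading_edge", timestamp))
            in_gap = False
    return events
```

Applied to a sample stream that rises away from a baseline of 1.0 between timestamps 2 and 4, the sketch would report a trailing-edge event at the first deviating sample and a leading-edge event at the first sample back at baseline.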
Additionally, it should be appreciated that the timestamps associated with one or more detected event(s), alone and/or in addition to predetermined and/or known data values such as the size of a label and/or a speed at which the printer moves a print media therein, may be used to determine one or more distances travelled by an edge, multiple edges, and/or the like.

Example Visualizations for Edge Position Distance Determinations

Having described example systems and apparatuses in accordance with the present disclosure, example visualizations of process(es) for edge position distance determination in accordance with the present disclosure will now be discussed. The edge position distance determination process(es) may be utilized for any of a myriad of purposes, for example in generating a print position compensation. In some embodiments, the edge position distance determination is performed by a specially configured printer, for example the printer apparatus100. It will be appreciated that the depicted distances are for illustration purposes and not to limit the scope and spirit of this disclosure. FIG.4illustrates an example visualization of edge position distance determination during a media output phase, in accordance with at least some example embodiments of the present disclosure. Specifically, the example visualization depicts a print media400including a plurality of printable portions410A-410G, each separated by a plurality of gaps408. It will be appreciated that, in some embodiments, each of the plurality of gaps408is of the same size. The visualization further includes a location402at which a sensor is located, a location404at which a print head is located, and a location406at which a tear bar is located. The print media400may be maintained within a printer, for example embodied by the printer apparatus100, that includes print mechanisms at the locations defined by the locations402,404, and406to facilitate printing on the print media400.
Additionally or alternatively, in other embodiments, any number of printable portions may fall between the location of the sensor402and the location of the tear bar406that have not been used in a previous print job. FIG.4may depict the location of each of the plurality of printable portions410A-410G at the end of a previous print job (e.g., a calibration print job or another previous print job). As illustrated, the printable portion410G may be the last printable portion that was printed on during the previous print job. In this regard, the printable portion410G extends past the location406of the tear bar, and may be torn off and/or otherwise removed from the print media400upon completion of the print job. The remaining plurality of printable portions410A-410F may be utilized for performing a subsequent print job involving one or more printable portion(s), for example as described with respect toFIGS.4and5. In this regard, printer apparatus100may utilize the print position compensation at least for printing on each of the printable portions410A-410F during the subsequent print job. In some such embodiments, the subsequent print job begins with a media retraction phase as depicted and described with respect toFIG.5. During the media output phase, the printer apparatus100manipulates the print media400to move the print media400in the output direction416. The print media400may be moved in the output direction416during performance of a print job, for example a print of desired label data, a calibration print, and/or the like. In this regard, the print media400is moved towards the location406of the tear bar. The sensor at location402may be used to track a location of an edge of a particular printable portion of the print media400. For example, the sensor at location402may be used to detect each edge of the print positions410A-410G as each of the edges passes by the sensor at location402.
In this regard, the sensor at location402may be used to track the location of each of the printable portions410A-410G. For example, for any one of the printable portions410A-410G, the sensor at location402may be used to detect the leading edge of the printable portion, and the location of this leading edge may be tracked based on a timestamp interval for which printing continues, and a predetermined or determinable speed at which the print media400is being output. The sensor at location402may similarly be used to detect and track the trailing edge of a printable portion, thereby defining a distance and/or area covered by the printable portion. It will be appreciated that the printer apparatus100may simultaneously track any number of printable portions of the print media400, and/or particular edges thereof. In some embodiments, the sensor may be used to track a position of a leading edge for a particular printable portion of the print media400closest to the sensor at location402at the completion of a print job. As illustrated, the sensor may be used to determine and/or track the location412of the last edge that passed the sensor at location402, specifically the leading edge associated with the printable portion410A that is closest to and has passed the location402of the sensor. In some embodiments, the printer apparatus100utilizes the sensor at location402to determine the location412by detecting a timestamp at which the leading edge passed the sensor at location402and a timestamp where the print media400ceased moving (e.g., the print job was completed). The difference between the timestamp when the leading edge at location412passed the sensor and the timestamp when the print media400ceased movement may then be multiplied by a predetermined (e.g., static) or determinable speed to determine how far the leading edge has moved during that time (e.g., the distance between the location412and the location402of the sensor).
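The tracking described above, where the printer may simultaneously track any number of edges from the timestamp each one passed the sensor plus a known media speed, can be sketched as follows. This is a hypothetical sketch; the function name, the edge-identifier dictionary, and the unit conventions are assumptions for illustration and are not taken from the disclosure.

```python
def edge_positions_past_sensor(pass_timestamps, now, media_speed):
    """Map each tracked edge to its current distance past the sensor.

    pass_timestamps: {edge_id: timestamp at which that edge passed the sensor}
    now: the current timestamp (same time units)
    media_speed: media movement speed (distance units per time unit)
    """
    return {
        edge_id: (now - ts) * media_speed
        for edge_id, ts in pass_timestamps.items()
    }
```

For example, with a media speed of 10 distance units per time unit, an edge that passed the sensor 3 time units ago would be tracked 30 units past the sensor.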
Alternatively or additionally, in some embodiments, the leading edge of the printable portion410A of the print media400may be determined at the location412based at least in part on a known width of each printable portion and/or output from the sensor at location402. In some embodiments, the leading edge of the printable portion410A, illustrated at the location412, is used to determine a first edge position distance414associated with a media output phase. For example, the leading edge of the printable portion410A may be tracked to determine a first edge position distance414representing the distance between the location412and the location404of the print head. In this regard, the distance between the location412and the location402of the sensor is determined, and subtracted from a known, objective distance between the sensor at location402and the location of the print head404. The known, objective distance between the sensor at location402and the location of the print head404may be statically maintained by the printer apparatus100, for example in a memory, maintained by a processor, and/or the like, as a static value based at least in part on the configuration of the printer apparatus100. In some embodiments, the timestamp of detection of the last edge that passed the sensor at location402, or the last edge of a particular edge type, is utilized together with the timestamp at which the print media400ceased moving to determine the location412, the distance between the location412and the sensor at location402, and/or the distance between the location412and the location of the print head404. In some embodiments, the sensor at location402may be utilized to track a number of dot lines as the print media400is moved (e.g., by a motor attached to a platen roller that controls movement of the print media400).
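The subtraction described above, where the edge's travel past the sensor is subtracted from the known, statically maintained sensor-to-print-head distance, can be sketched as follows. This is an illustrative sketch only; the constant value, function name, and units are hypothetical and not taken from the disclosure.

```python
SENSOR_TO_PRINT_HEAD = 50.0  # hypothetical static sensor-to-print-head distance

def first_edge_position_distance(edge_pass_ts, media_stop_ts, media_speed):
    """First edge position distance (media output phase).

    The edge's travel past the sensor, (stop - pass) * speed, gives the
    distance between location 412 and the sensor; subtracting it from the
    known sensor-to-print-head distance yields the distance between
    location 412 and the print head (distance 414).
    """
    edge_to_sensor = (media_stop_ts - edge_pass_ts) * media_speed
    return SENSOR_TO_PRINT_HEAD - edge_to_sensor
```

For example, an edge that passed the sensor 2 time units before the media stopped, at a speed of 10 units per time unit, would sit 20 units past the sensor and therefore 30 units from the print head under the assumed 50-unit spacing.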
Alternatively or additionally, in some embodiments, the sensor is used to determine timestamp(s) for particular events, and to generate the first edge position distance414based at least in part on such timestamp(s) and known data associated with speed at which the print media400is moved, a predetermined force applied, and/or the like. It should be appreciated that, in other embodiments, a trailing edge of a particular printable portion (e.g., the printable portion410A) is tracked for use in generating the first edge position distance414. FIG.5illustrates an example visualization of edge position distance determination during a media retraction phase, in accordance with at least some example embodiments of the present disclosure. It will be appreciated that the media retraction phase may occur after and/or before the media output phase as described herein with respect toFIG.4. For example, in some embodiments, the media retraction phase begins at the initiation of a new print job subsequent to completion of a previous print job. The previous print job may be a calibration print job or an actual print job with user-inputted data for printing. As described herein, the printable portion410G as depicted and described may be printed for removal from the print media400during the previous print job. Accordingly,FIG.5is depicted with the printable portion410G removed. In some embodiments, the printer apparatus100maintains the location of each of the printable portions remaining (e.g., not printed on during a previous print job). For example, in some embodiments, the printer apparatus100continues to track the location of each of the printable portions410A-410F that were not printed on during the previous print job described with respect toFIG.4.
In some such embodiments, the printer apparatus100tracks each leading edge and/or trailing edge for each of the printable portions410A-410F, and maintains such locations in a permanent or temporary storage for use in the subsequent print job. It will be appreciated that the printer apparatus100may maintain the locations of the printable portions410A-410F (and/or edge(s) thereof) throughout an idle period during which the printer apparatus100enters an idle state (e.g., in the memory112). Accordingly, the printer apparatus100may retrieve such locations and utilize them for performing one or more determinations during the subsequent retraction phase, for example as depicted and described with respect toFIG.5. For example, in some embodiments, the printer apparatus100utilizes such stored data representing stored locations for retracting such that the printable portion410F is approximately at a particular print position in line with the location of the print head404for printing. Additionally or alternatively, the printer apparatus100may utilize such stored data representing stored locations for determining the location502for use in generating the print position compensation. During the media retraction phase, the printer apparatus100manipulates the print media400to move the print media400in the retraction direction506. The print media400may be moved in the retraction direction506while the printer apparatus100is operating in a media retraction phase. For example, the printer apparatus100may remain in the media retraction phase to retract the print media400in preparation for beginning a subsequent print job from a first printable portion of the print media400, such as the printable portion410F of the print media400. It will be appreciated that the retraction direction506may be opposite the output direction416as depicted and described with respect toFIG.4.
The sensor at location402may be used to track a location of an edge of a particular printable portion of the print media400. In some embodiments, the sensor may be used to track the position of the same edge tracked during a corresponding media output phase. As illustrated, for example, the printer apparatus100tracks the position of the leading edge for the printable portion410A of the print media400as the print media400is retracted. Alternatively or additionally, in some embodiments, the printer apparatus100tracks the location of an edge that is closest to, but has previously passed, the sensor at location402for determining the second edge position distance504. In some embodiments, the printer apparatus100tracks the location of an edge of a particular edge type that is closest to, but has previously passed, the sensor at location402(e.g., the closest leading edge, or the closest trailing edge). The location502may be affected by slippage that occurs during the retraction of the print media400, and thus is to be compensated for. In some embodiments, the printer apparatus100utilizes the sensor at location402to detect a timestamp at which a first edge reaches the location402of the sensor during retraction. In this regard, the difference between this timestamp and a timestamp at which retraction was initiated may be utilized to determine how long the edge was travelling to reach the sensor at location402from its original location at the beginning of retraction (e.g., the location502). Utilizing a predetermined (e.g., statically stored) or determinable print speed, the printer apparatus100may determine the distance between the location502and the location402of the sensor. In some embodiments, the leading edge of the printable portion410A may be determined at a particular location502based at least in part on any other data from the sensor at the location402, known distance(s), and/or a combination thereof.
In the depicted visualization, as illustrated, the leading edge of the printable portion410A is retracted to a particular location502. The sensor at the location402may track the leading edge as it is retracted to the location502during the media retraction phase. In some embodiments, the location502of the leading edge of the printable portion410A is used to determine a second edge position distance504associated with a media retraction phase. For example, the leading edge of the printable portion410A may be tracked to determine a second edge position distance504representing the distance between the location404of the print head and the location502. In some embodiments, the sensor at location402may be utilized to track a number of dot lines as the print media400is moved (e.g., by a motor attached to a platen roller that controls movement of the print media400). Alternatively or additionally, in some embodiments, the sensor is used to determine timestamp(s) for particular events, and to generate the second edge position distance504based at least in part on such timestamp(s) and known data associated with speed at which the print media400is moved, a predetermined force applied during retraction, and/or the like. It should be appreciated that, in other embodiments, a trailing edge of a particular printable portion (e.g., the printable portion410A) is tracked for use in generating the second edge position distance504. In some embodiments, the printer apparatus100utilizes the edge position distances to generate a print position compensation. In some embodiments, for example, the first edge position distance associated with the media output phase and the second edge position distance associated with the media retraction phase are processed utilizing a determined algorithm for generating the print position compensation.
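The retraction-phase counterpart can be sketched in the same style: the time the edge travels between the start of retraction and reaching the sensor, multiplied by the media speed, gives the distance between location 502 and the sensor, which is then combined with the known sensor-to-print-head distance to yield distance 504. This is one plausible reading of the description above, not the disclosure's implementation; the constant, function name, units, and the assumption that location 502 lies between the sensor and the print head are all hypothetical.

```python
SENSOR_TO_PRINT_HEAD = 50.0  # hypothetical static sensor-to-print-head distance

def second_edge_position_distance(retract_start_ts, edge_reach_ts, media_speed):
    """Second edge position distance (media retraction phase).

    (reach - start) * speed gives the distance between the edge's
    location at the start of retraction (location 502) and the sensor;
    subtracting it from the sensor-to-print-head distance yields the
    distance between the print head and location 502 (distance 504),
    assuming location 502 lies between the sensor and the print head.
    """
    edge_to_sensor = (edge_reach_ts - retract_start_ts) * media_speed
    return SENSOR_TO_PRINT_HEAD - edge_to_sensor
```

For example, an edge that takes 1.5 time units to reach the sensor at a speed of 10 units per time unit would start 15 units from the sensor, and thus 35 units from the print head under the assumed 50-unit spacing.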
One non-limiting example algorithm includes subtracting the second edge position distance associated with the media retraction phase from the first edge position distance associated with the media output phase to generate a differential edge position, and dividing the differential edge position by a particular divisor factor (e.g., a factor of two). The determined print position compensation may subsequently be utilized to offset the print position for one or more printable portions of the print media400. In some embodiments, the print position compensation is utilized to begin printing on each printable portion that had previously passed the sensor but was not utilized in completing a previous print job. For example, in some embodiments, the printer apparatus100may utilize the print position compensation to initiate printing at particular positions on each of the printable portions410F,410E,410D,410C,410B, and410A as they are printed in a subsequent print job. For example, the printer apparatus100may retract the print media400sufficiently so that the printable portion410F reaches the location of the print head404based at least in part on the previously stored location(s) of the printable portion410F (or edges thereof). The printer apparatus100may then begin printing data on the printable portion410F at a default print position offset by the print position compensation. The default print position may be offset by the print position compensation for at least the remaining printable portions410E,410D,410C,410B, and410A, and in other embodiments may be utilized for each of the printable portions to be printed in a particular, subsequent print job.

Example Processes Using Edge Position Distances of the Disclosure

Having described example systems, apparatuses, and visualizations for edge position distance determination in accordance with the present disclosure, example processes using edge position distances will now be discussed.
For example, example processes for generating print position compensation utilizing edge position distances, and additional and/or alternative operations associated therewith, are further discussed. It will be appreciated that each of the flowcharts depicts an example computer-implemented process that may be performed by one or more of the apparatuses, systems, devices, and/or computer program products described herein, for example using one or more of the specially configured components thereof. The blocks depicted indicate operations of each process. Such operations may be performed in any of a number of ways, including, without limitation, in the order and manner as depicted and described herein. In some embodiments, one or more blocks of any of the processes described herein occur in-between one or more blocks of another process, before one or more blocks of another process, in parallel with one or more blocks of another process, and/or as a sub-process of a second process. Additionally or alternatively, any of the processes may include some or all operational steps described and/or depicted, including one or more optional blocks in some embodiments. With regard to the flowcharts illustrated herein, one or more of the depicted blocks may be optional in some, or all, embodiments of the disclosure. Optional blocks are depicted with broken (or “dashed”) lines. Similarly, it should be appreciated that one or more of the operations of each flowchart may be combinable, replaceable, and/or otherwise altered as described herein. FIG.6illustrates a flowchart depicting example operations of an example process for generating and/or utilizing a print position compensation based at least in part on one or more determined edge position distances, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.6illustrates operations of an example process600.
In some embodiments, the example process600is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. Alternatively or additionally, in some embodiments, the process600is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process600is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular label printer. The process600begins at operation602. At operation602, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine, via a sensor, a first edge position distance between a first edge and a print head. The first edge position distance may be determined during a media output phase, for example based at least in part on a determined location of a first edge tracked as the first edge is moved during the media output phase. 
In some embodiments, the location of the first edge is determined based at least in part on one or more timestamps at which the edge is detected by the sensor, a phase begins and/or ends, and/or the like. In some embodiments, the location of the print head is stored by and/or otherwise known by the printer apparatus100for use in determining the first edge position distance. As described herein, the printer apparatus100may utilize stored locations of one or more edge(s), printable position(s), and/or the like, from a previous print job for determining the first edge position distance. Alternatively or additionally, in some embodiments, the printer apparatus100retrieves a first edge position distance that was stored during and/or upon completion of a previous print job. One non-limiting example algorithm for determining the first edge position distance is described herein with respect toFIG.8, for example based on a location of the first edge during the media output phase. At operation604, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine, via the sensor, a second edge position distance between the first edge and the print head. The second edge position distance may be determined during a media retraction phase. For example, the second edge position distance may be determined based at least in part on a determined location of the first edge tracked as the first edge is moved during the media retraction phase. It will be appreciated, as described, that the location of the print head may be known to and/or determined via the sensor of the printer apparatus100.
It will be appreciated that, in some embodiments, the media retraction phase and the media output phase described with respect to operation602are a part of different print jobs, for example where the first edge position distance is determined for a previous print job corresponding to the media output phase and the media retraction phase begins a subsequent print job. One non-limiting example algorithm for determining the second edge position distance is described herein with respect toFIG.8, for example based on a location of the first edge during the media retraction phase. At operation606, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to generate a print position compensation based at least in part on the first edge position distance and the second edge position distance. The print position compensation represents an offset to be applied to a determined position at which printing is to begin for one or more printable portions of a print media. In some embodiments, the print position compensation represents a value based on the difference between the first edge position distance and the second edge position distance. In this regard, the print position compensation may represent a particular offset for print position drift occurring during output and/or retraction of a print media. One non-limiting example algorithm for generating a print position compensation is described herein with respect toFIG.7, for example based at least in part on the first edge position distance and the second edge position distance.
At optional operation608, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to execute a boundary check based at least in part on the print position compensation. In some embodiments, the boundary check embodies one or more algorithms that compare the print position compensation to an acceptable maximum threshold. In this regard, the printer apparatus100may initiate a boundary check by comparing the print position compensation to a maximum allowable compensation. In a circumstance where the printer apparatus100determines the print position compensation exceeds the maximum allowable compensation, the printer apparatus100may adjust the print position compensation to equal the maximum allowable compensation. Alternatively or additionally, in some embodiments, the printer apparatus100compares the print position compensation with a range of allowable compensation values to determine whether the print position compensation falls within the range. In a circumstance where the print position compensation does not fall within the range, the print position compensation may be adjusted to the nearer of the maximum or minimum compensation of the range, rejected and retried, or used to produce an error to an operator of the printer apparatus100. In some other embodiments, the printer apparatus100determines whether a new print position adjusted based at least in part on the print position compensation is located above a minimum threshold range from one or more edge(s) of a printable portion of a print media. Alternatively or additionally, in some embodiments, the boundary check determines whether a new print position adjusted based on the print position compensation to compensate for drift of a print position falls within an acceptable threshold range of compensations.
In some contexts where the printer apparatus100determines the boundary check is not satisfied, the printer apparatus100restarts the print job and/or indicates one or more action(s) to be performed to reduce drift of the print position (e.g., a notification to replace the print media with a new roll of print media, alter the print job, and/or the like). At optional operation610, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to initiate a print operation based at least in part on the print position compensation. In some embodiments, the printer apparatus100initiates a print operation based at least in part on the print position compensation to cause data to be printed starting at a particular position offset from a default or other print position. For example, the print position compensation may indicate a number of dot lines before or after a default print position (a default dot line) at which printing is to begin. In this regard, the printer apparatus100may initiate printing onto any number of printable portions of a print media based at least in part on the print position compensation to print data at a particular location that accounts for drift in the print position. In some embodiments, the printer apparatus100at least utilizes the print position compensation to adjust the print position utilized for printing on each printable portion that had already passed, in whole or in part, the sensor of the printer apparatus100prior to the beginning of the media retraction phase.
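As a rough illustration, the clamping variant of the boundary check of operation608and the dot-line offset applied at operation610might be modeled as in the following Python sketch. The function names, the clamping policy, and the dot-line units are illustrative assumptions, not the disclosed implementation; as noted above, an out-of-range compensation could instead be rejected, retried, or surfaced as an operator error.

```python
def apply_boundary_check(compensation, min_comp, max_comp):
    """Clamp a print position compensation to an allowable range.

    Models one variant of operation 608: a compensation outside the
    allowable range is adjusted to the nearer range bound.
    """
    if compensation > max_comp:
        return max_comp
    if compensation < min_comp:
        return min_comp
    return compensation


def adjusted_print_position(default_dot_line, compensation):
    """Offset the default print position (a default dot line) by the
    compensation, so printing begins a number of dot lines before or
    after the default print position, as in operation 610."""
    return default_dot_line + round(compensation)
```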
FIG.7illustrates a flowchart depicting example operations of an example process for generating a print position compensation based at least in part on a differential edge position distance and a divisor factor, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.7depicts operations of an example process700. In some embodiments, the process700is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. Alternatively or additionally, in some embodiments, the process700is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process700is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular printer. The process700begins at operation702. In some embodiments, the process700begins after one or more operations depicted and/or described with respect to any of the other processes described herein. For example, in some embodiments as depicted, the process700begins after execution of operation604as depicted and described with respect to the process600.
In this regard, some or all of the process700may replace or supplement one or more blocks depicted and/or described with respect to any of the other processes described herein, such as the operation606as depicted and described with respect to the process600. Upon completion of the process700, the flow of operations may terminate. Additionally or alternatively, as depicted, upon completion of the process700, flow may return to one or more operations of another process, for example to the operation608as depicted and described with respect to the process600. It should be appreciated that, in some embodiments, the process700embodies a subprocess of one or more other process(es), such as the process600. At operation702, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to generate a differential edge position distance. In some embodiments, the differential edge position distance represents a difference between the first edge position distance determined during a first media movement phase (e.g., a media output phase) and the second edge position distance determined during a second media movement phase (e.g., a media retraction phase). For example, in some embodiments, the differential edge position distance is generated by subtracting the second edge position distance from the first edge position distance. In this regard, the differential edge position distance represents the difference between the distances determined based on the location of a particular edge during each of a media output phase and a media retraction phase.
At operation704, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to generate the print position compensation by dividing the differential edge position distance by a divisor factor. In some embodiments, the divisor factor is predetermined. For example, in one example embodiment, the printer apparatus100is configured to utilize a divisor factor of two to divide the differential edge position distance. The divisor factor of two may be used to determine a compensation between the position of an edge affected by print position drift in each of the media output phase and the media retraction phase. Alternatively or additionally, in some embodiments, the divisor factor is determined based at least in part on the first edge position distance, the second edge position distance, and/or other data values determined from operation of the printer apparatus100. FIG.8illustrates a flowchart depicting example operations of an example process for determining an edge position distance based on a tracked distance travelled during a media movement phase, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.8depicts operations of an example process800. In some embodiments, the process800is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. Alternatively or additionally, in some embodiments, the process800is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. 
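The two-step computation of operations702and704can be summarized in a minimal Python sketch. The parameter names and units are illustrative assumptions; the default divisor factor of two follows the example embodiment described above.

```python
def generate_print_position_compensation(first_edge_distance,
                                         second_edge_distance,
                                         divisor_factor=2.0):
    """Generate a print position compensation per operations 702-704.

    The differential edge position distance subtracts the second
    (retraction-phase) distance from the first (output-phase) distance;
    dividing by the divisor factor (two, in the example embodiment)
    yields the compensation.
    """
    differential = first_edge_distance - second_edge_distance
    return differential / divisor_factor
```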
In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process800is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular printer. The process800begins at operation802. In some embodiments, the process800begins after one or more operations depicted and/or described with respect to any of the other processes described herein. For example, in some embodiments as depicted, the process800begins after execution of operation602as depicted and described with respect to the process600. In this regard, some or all of the process800may replace or supplement one or more blocks depicted and/or described with respect to any of the other processes described herein, such as the operation602and/or604as depicted and described with respect to the process600. Upon completion of the process800, the flow of operations may terminate. Additionally or alternatively, as depicted, upon completion of the process800, flow may return to one or more operations of another process, for example to the operation604and/or606as depicted and described with respect to the process600. It should be appreciated that, in some embodiments, the process800embodies a subprocess of one or more other process(es), such as the process600. 
At operation802, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to detect, via the sensor, a first edge associated with a first edge type of at least a printable portion of a print media. For example, sensor output may be processed to detect an edge detection event indicating existence of a particular edge and/or particular edge type. In some embodiments, the sensor data at a particular timestamp and/or previous sensor data outputted by the sensor may be processed to detect a particular edge and/or determine whether the particular edge is of a particular edge type (e.g., a leading edge or a trailing edge). In this regard, a leading edge may be indicated by changing sensor data followed by a timestamp or range of timestamps corresponding to a particular baseline value, and/or a trailing edge may be indicated by a particular baseline value followed by changing sensor data. In some embodiments, the printer apparatus100detects a particular first edge, for example a first edge associated with a location closest to a sensor during a media output phase. Alternatively or additionally, in some embodiments, the printer apparatus100repeats the detection for a particular first edge associated with each printable portion of a plurality of printable portions of a print media, for example for determining a print position compensation associated with each printable portion of the plurality of printable portions. At operation804, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to track a distance traveled by the first edge as a predetermined force is applied to the print media during a media movement phase.
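The leading-edge versus trailing-edge distinction described for operation802(changing sensor data followed by a baseline value, or a baseline value followed by changing data) can be sketched as follows. The window-of-samples representation and the tolerance value are illustrative assumptions rather than the disclosed implementation.

```python
def classify_edge(before, after, baseline, tol=0.05):
    """Classify an edge event from sensor samples around a transition.

    A leading edge: changing data before, baseline readings after.
    A trailing edge: baseline readings before, changing data after.
    Returns None when no edge event is present in the window.
    """
    def at_baseline(samples):
        # All samples within tolerance of the baseline sensor value.
        return all(abs(s - baseline) <= tol for s in samples)

    if not at_baseline(before) and at_baseline(after):
        return "leading"
    if at_baseline(before) and not at_baseline(after):
        return "trailing"
    return None
```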
In some embodiments, the predetermined force is applied to move the print media in a particular direction based on the media movement phase. For example, in some embodiments, the predetermined force advances the print media for outputting, printing, and/or feeding, such as during a media movement phase embodying a media output phase. In some embodiments, the predetermined force advances the print media for retraction, such as during a media movement phase embodying a media retraction phase. As described herein, the predetermined force may cause the print media to move at a different rate based on slippage of the print media, thus resulting in print position drift. In some embodiments, the printer apparatus100tracks the distance traveled by the first edge based on movement detected via sensor data from the sensor. At operation806, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine the first edge position distance based at least in part on the tracked distance traveled by the first edge during the media movement phase. In some embodiments, for example, the printer apparatus100determines the first edge position distance corresponding to the tracked distance travelled by the first edge until a particular target location is reached. In one example context, the printer apparatus100determines the first edge position distance based on the tracked movement of the first edge to a location associated with a print head of the printer apparatus100. FIG.9illustrates a flowchart depicting example operations of an example process for resetting a print position compensation, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.9depicts operations of an example process900.
In some embodiments, the process900is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. Alternatively or additionally, in some embodiments, the process900is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process900is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular printer. The process900begins at operation902. In some embodiments, the process900begins after one or more operations depicted and/or described with respect to any of the other processes described herein. For example, in some embodiments as depicted, the process900begins after execution of operation606as depicted and described with respect to the process600. In this regard, some or all of the process900may replace or supplement one or more blocks depicted and/or described with respect to any of the other processes described herein, such as one or more operations depicted and described with respect to the process600. Upon completion of the process900, the flow of operations may terminate.
Additionally or alternatively, as depicted, upon completion of the process900, flow may return to one or more operations of another process, for example to the operation608as depicted and described with respect to the process600. It should be appreciated that, in some embodiments, the process900embodies a subprocess of one or more other process(es), such as the process600. At operation902, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to detect occurrence of an idle state. In some embodiments, the printer apparatus100maintains a timestamp associated with each previously initiated and/or completed print job. The printer apparatus100may further maintain or otherwise be associated with a particular maximum timestamp threshold before the printer apparatus100initiates an idle state. In this regard, the printer apparatus100may determine data representing a time since a stored timestamp at which a previous print job was completed. Additionally, the printer apparatus100may compare the data representing the time since the stored timestamp with the maximum timestamp threshold to detect occurrence of the idle state in a circumstance where a new print job has not been initiated within the time represented by the maximum timestamp threshold. At operation904, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to reset the print position compensation in response to detecting occurrence of the idle state. In this regard, the print position compensation may be re-generated upon the next activation of the printer apparatus100and/or initiation of a new print job.
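The timestamp comparison of operation902and the reset of operation904might look like the following sketch. The class and attribute names are assumptions introduced for illustration, and timestamps are plain seconds.

```python
class CompensationState:
    """Holds a print position compensation and resets it after an idle period.

    Models operations 902-904: the timestamp of the last completed print
    job is compared against a maximum threshold to detect the idle state.
    """

    def __init__(self, max_idle_seconds):
        self.max_idle_seconds = max_idle_seconds
        self.compensation = None
        self.last_job_completed_at = None

    def complete_job(self, timestamp, compensation):
        # Store the compensation and the completion timestamp of the job.
        self.last_job_completed_at = timestamp
        self.compensation = compensation

    def check_idle(self, now):
        """Reset the compensation if no new job began within the threshold."""
        idle = (self.last_job_completed_at is not None
                and now - self.last_job_completed_at > self.max_idle_seconds)
        if idle:
            self.compensation = None  # re-generated on the next print job
        return idle
```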
Alternatively, in some embodiments where the idle state is initiated upon completion of each print job, the print position compensation is re-generated for each print job to maximize the likelihood the print position compensation remains correct for subsequent printing. Example Visualizations for Phase Timestamp Differential Determinations Having described example systems, apparatuses, visualizations for edge position distance determination, and flowcharts for print position compensation based at least in part on edge distance determinations in accordance with the present disclosure, example visualizations for phase timestamp differential determinations in accordance with the present disclosure will now be discussed. The phase timestamp differential determination process(es) may be utilized for any of a myriad of purposes, for example in generating a print position compensation. In some embodiments, the phase timestamp differential determinations are performed by a specially configured printer, for example the printer apparatus100. It will be appreciated that the depicted distances are for illustration purposes and not to limit the scope and spirit of this disclosure. FIG.10illustrates an example visualization of phase timestamp differential determination during a media output phase, in accordance with at least some example embodiments of the present disclosure. Specifically, the example visualization depicts a print media400including a plurality of printable portions410A-410G, each separated by a plurality of gaps408. The visualization further includes a location1006at which a sensor (such as a label stop sensor) is located, a location404at which a print head is located, and a location406at which a tear bar is located. The print media400may be maintained within a printer, for example embodied by the printer apparatus100, that includes print mechanisms at the locations defined by the locations1006,404, and406to facilitate printing on the print media400.
It will be appreciated that in this regard, the components depicted and described with respect toFIGS.10and11perform functionality as similarly described with respect toFIG.4. FIG.10depicts the location of each of the plurality of printable portions410A-410G at the end of a previous print job (e.g., a calibration print or another previous print job). As illustrated, the printable portion410G may be the last printable portion that was printed on during the previous print job. In this regard, the printable portion410G extends past the location406of the tear bar, and may be torn off and/or otherwise removed from the print media400upon completion of the previous print job. The remaining plurality of printable portions410A-410F may be utilized for performing a subsequent print job involving one or more printable portion(s), for example as described with respect toFIGS.10and11. In this regard, the printer apparatus100may utilize the print position compensation at least for printing on each of the printable portions410A-410F during the subsequent print job. In some such embodiments, the subsequent print job begins with a media retraction phase as depicted and described with respect toFIG.11. In some embodiments, the sensor located at the location1006embodies a label stop sensor. The label stop sensor may be configured to detect particular events (e.g., existence of an edge, beginning and end of a printable portion such as a label, and/or the like) and/or timestamps associated with such detections. In this regard, the timestamps may be utilized alone or in combination with one or more other portions of data (e.g., a known or determined speed at which a print media is output via the printer apparatus100) to determine a distance travelled by the print media. For example, the label stop sensor at location1006may be used to detect each or at least one edge of the printable portions410A-410G.
In some embodiments, the label stop sensor at location1006is used to detect each edge, or each edge of a particular edge type (e.g., a leading edge or a trailing edge) that passes the label stop sensor at location1006. For any one of the printable portions410A-410G, the label stop sensor at the location1006may be used to detect the leading edge of the printable portion and the location of this leading edge may be tracked as output continues. It will be appreciated that the printer apparatus100may simultaneously track any number of printable portions of the print media400, and/or particular edges thereof. As illustrated, the label stop sensor located at the location1006determines timestamps associated with a particular defined distance (e.g., one printable portion and one gap). In some embodiments, the label stop sensor detects a first edge associated with a first printable portion of the print media400, such as the printable portion410B as illustrated. The first edge may embody a leading edge associated with the printable portion410B, and may be detected first based on the movement direction of the print media400during a particular media movement phase, such as in the output direction416. Additionally, the label stop sensor detects a second edge associated with a second printable portion of the print media400. The second edge may embody a leading edge associated with the next, subsequent printable portion on the print media400, for example the printable portion410A as illustrated. The label stop sensor may detect the second edge after the first edge has been detected. In some embodiments, the label stop sensor at location1006is used to track each of the printable portions410A-410G, and/or edges thereof.
For example, the distance an edge travelled from the label stop sensor at location1006in the output direction416may be determined based at least in part on a timestamp at which the edge is detected and a known or otherwise determinable print speed associated with the printer apparatus100. In this regard, the label stop sensor at location1006may be used to detect the edges defining boundaries of each of the printable portions410A-410G, and/or track such edges as they move in the output direction416. It will be appreciated that, similarly to that as described with respect toFIGS.4and5, the printer apparatus100may store the location, or at least equivalent data usable to regenerate the location, of each of the detected edges (or at least edges of a particular type) in a memory, storage, or the like to enable retrieval of such locations during a subsequent print job and/or media movement phase, for example as described with respect toFIG.11. The label stop sensor may store a timestamp associated with detection of each relevant edge. For example, in some embodiments, the label stop sensor at the location1006detects the leading edge that began at1002B of the printable portion410B and stores a timestamp representing the time at which the leading edge that began at1002B of the printable portion410B was detected. Additionally, in some embodiments, the label stop sensor at the location1006detects the leading edge that began at1002A of the printable portion410A and stores a timestamp representing the time at which the leading edge that began at1002A of the printable portion410A was detected. It will be appreciated, as described herein, that the second edge (e.g., the leading edge that began at1002A of the printable portion410A) may be detected based on first detecting a gap between printable portions, for example one of the plurality of gaps408after detecting the leading edge that began at1002B, and/or the trailing edge, of the printable portion410B.
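The timestamp-and-speed bookkeeping described above can be sketched as a single expression. The parameter names and the units (seconds for timestamps, a distance per second for media speed) are illustrative assumptions.

```python
def edge_location(detect_timestamp, now, media_speed, sensor_location=0.0):
    """Estimate how far an edge has travelled from the label stop sensor.

    The distance an edge moved in the output direction follows from the
    timestamp at which the sensor detected it and a known or otherwise
    determinable print speed.
    """
    return sensor_location + media_speed * (now - detect_timestamp)
```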
It will be appreciated that, in other embodiments, another edge type may be detected and used. For example, in some embodiments, the label stop sensor is used to detect the trailing edge of a printable portion of a print media400and the trailing edge of a subsequent printable portion of the print media400. In this regard, the particular edges depicted inFIG.10should not limit the scope and/or spirit of this disclosure. The timestamps associated with the detection of the first leading edge that began at1002B and the second leading edge that began at1002A may be utilized to determine an output phase timestamp differential1004. The output phase timestamp differential1004may represent the difference in time between detection of the first leading edge that began at1002B and the second leading edge that began at1002A during the media output phase. In this regard, the printer apparatus100may detect and store the output phase timestamp differential1004for further processing, such as for determining a timestamp-based distance value and/or a print position compensation associated therewith as described herein. FIG.11illustrates an example visualization of phase timestamp differential determination during a media retraction phase, in accordance with at least some example embodiments of the present disclosure. It will be appreciated that the media retraction phase may occur after and/or before the media output phase as described with respect toFIG.10. For example, in some embodiments, the media retraction phase begins at the initiation of a new print job subsequent to completion of a previous print job, such as at completion of the operations described with respect toFIG.10. The previous print job may be a calibration print job or an actual print job with user-inputted data for printing. As described herein, the printable portion410G as depicted and described may be printed for removal from the print media400during the previous print job.
Accordingly,FIG.11is depicted with the printable portion410G removed. In some embodiments, the printer apparatus100maintains the location of each of the printable portions remaining (e.g., not printed on during a previous print job). For example, in some embodiments, the printer apparatus100continues to track the location of each of the printable portions410A-410F that were not printed on during the print job described with respect toFIG.10. In some such embodiments, the printer apparatus100tracks each of the leading edges and/or trailing edges for each of the printable portions410A-410F, and maintains such locations in a permanent or temporary storage for retrieval and use during the subsequent print job. It will be appreciated that the printer apparatus100may maintain the locations of the printable portions410A-410F (and/or edge(s) thereof) throughout an idle period during which the printer apparatus100enters an idle state (e.g., in the memory112). Accordingly, the printer apparatus100may retrieve such locations and utilize them for performing one or more determinations during the subsequent retraction phase, for example as depicted and described with respect toFIG.11. For example, in some embodiments, the printer apparatus100utilizes such stored data representing stored locations for retracting such that the printable portion410F is at or approximately at a particular print location in line with the location of the print head404for printing. Additionally or alternatively, the printer apparatus100may utilize such stored data representing stored locations for determining the location1002B and/or1002A for use in generating the print position compensation. During the media retraction phase, the printer apparatus100manipulates the print media400to move the print media400in the retraction direction506. The print media400may be moved in the retraction direction506while the printer apparatus100is operating in a media retraction phase.
For example, the printer apparatus100may remain in the media retraction phase to retract the print media400in preparation for beginning a subsequent print job from a first printable portion of the print media400, such as the printable portion410F of the print media400. It will be appreciated that the retraction direction506may be opposite the output direction416as depicted and described with respect toFIG.10. The label stop sensor at the location1016may be used to determine timestamps associated with another particular reference distance (e.g., one printable portion and one gap) while the print media400is moving in the retraction direction506during a media retraction phase. In some embodiments, the label stop sensor at the location1016detects a first edge associated with a first printable portion based on the retraction direction506. For example, the label stop sensor at the location1016may detect a first edge associated with a first printable portion of the print media400, such as the printable portion410A as illustrated. The first edge that began at location1102A may embody a trailing edge associated with the printable portion410A, and may be detected first based on the movement direction of the print media400during a particular media movement phase, such as the retraction direction506. Additionally, the label stop sensor detects a second edge associated with a second printable portion of the print media400. The second edge may similarly embody a trailing edge that began at location1102B associated with the next, subsequent printable portion on the print media400, for example the printable portion410B as illustrated. The label stop sensor may detect the second edge after the first edge has been detected. The label stop sensor may store a timestamp associated with detection of each relevant edge.
For example, in some embodiments, the label stop sensor at location1016detects the trailing edge that started at the location1102A of the printable portion410A and stores a timestamp representing the time at which the trailing edge that started at location1102A was detected. Additionally, in some embodiments, the label stop sensor at the location1016detects the trailing edge that started at location1102B of the printable portion410B and stores a timestamp representing the time at which the trailing edge that started at location1102B of the printable portion410B was detected. It will be appreciated, as described herein, that the second edge (e.g., the trailing edge of the printable portion410B) may be detected based on first detecting a gap between printable portions, for example one of the plurality of gaps408after detecting the trailing edge that started at location1102A, and/or the leading edge, of the printable portion410A. Additionally or alternatively, in some embodiments, the printer apparatus100determines the locations1102A and/or1102B based at least in part on a timestamp at which retraction begins and a timestamp at which the first edge of a particular edge type is detected (e.g., corresponding to location1102A) and a timestamp at which the second edge of a particular edge type is detected (e.g., corresponding to location1102B). The printer apparatus100may utilize such timestamps together with stored locations and/or distances from a previous print job, for example as described with respect toFIG.10. For example, in some embodiments, the label stop sensor at location1016records a timestamp at which the closest leading edge is detected (e.g., the leading edge of the printable portion410A).
The printer apparatus100may determine a difference between the timestamp at which retraction began and the timestamp at which the leading edge associated with the printable portion410A was detected, indicating how long the edge travelled to reach the label stop sensor at location1016. The printer apparatus100may then determine the location1102A by multiplying the difference between the two timestamps by a print speed known to (e.g., stored in memory112) or otherwise determinable by the printer apparatus100. The printer apparatus100may similarly detect a timestamp at which the leading edge of the printable portion410B is detected, determine the difference between this timestamp and the timestamp at which retraction began, and multiply by a speed to determine the location1102B at which the leading edge for the printable portion410B began. It will be appreciated that, due to slippage, the locations1102A and/or1102B may represent different distances from the location1016of the label stop sensor than those depicted and described with respect toFIG.10. It will be appreciated that, in other embodiments, another edge type may be detected and used. For example, in some embodiments, the label stop sensor is used to detect the leading edge of each printable portion of a print media400based on a particular movement direction and/or corresponding media movement phase. In this regard, the particular edges depicted inFIG.11should not limit the scope and/or spirit of this disclosure. The timestamps associated with the detection of the first trailing edge that began at location1102A and the second trailing edge that began at location1102B may be utilized to determine a second media movement phase timestamp differential, such as a retraction phase timestamp differential1104. 
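The location determination described above (elapsed time from the start of retraction multiplied by a known print speed) may be illustrated with the following non-limiting sketch, in which all names, units (seconds, millimeters per second), and values are assumptions for clarity rather than part of the disclosure:

```python
# Sketch: estimating how far from the label stop sensor an edge began,
# given the timestamp at which retraction started, the timestamp at which
# the edge reached the sensor, and a known media speed.
def edge_start_location(retraction_start_ts: float,
                        edge_detect_ts: float,
                        media_speed: float) -> float:
    """Distance (travel time x speed) the edge moved to reach the sensor,
    i.e., its starting distance from the sensor when retraction began."""
    travel_time = edge_detect_ts - retraction_start_ts
    return travel_time * media_speed

# Example: retraction begins at t = 1.00 s, the edge reaches the sensor at
# t = 1.25 s, media moving at 100 mm/s -> the edge began 25 mm away.
location_1102A = edge_start_location(1.00, 1.25, 100.0)
```

Because of slippage, the value so determined may differ from the nominal edge-to-sensor distances recorded during a previous media movement phase, which is precisely what the compensation described herein accounts for.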
The retraction phase timestamp differential1104may represent the difference in time between detection of the first trailing edge that began at location1102A and the second trailing edge that began at location1102B during the media retraction phase. In this regard, the printer apparatus100may detect and store the retraction phase timestamp differential1104for further processing, such as for determining a timestamp-based distance value and/or a print position compensation associated therewith, as described herein. In some embodiments, the printer apparatus100utilizes the media movement phase timestamp differentials to generate a print position compensation. In some embodiments, for example, the output phase timestamp differential associated with the media output phase and the retraction phase timestamp differential associated with the media retraction phase are processed utilizing a determined algorithm for generating the print position compensation. One non-limiting example algorithm includes subtracting the retraction phase timestamp differential associated with a media retraction phase from the output phase timestamp differential associated with a media output phase to generate a timestamp-based distance value, and multiplying the timestamp-based distance value by a print speed (e.g., a known or determined speed at which the print media400is moving). The determined print position compensation may subsequently be utilized to offset the print position for one or more printable portions of the print media400. In some embodiments, the printer apparatus100performs the operations described with respect toFIGS.10and/or11a plurality of times for one or more media movement phases. For example, in some embodiments the printer apparatus100calibrates a reference media movement phase timestamp differential for a particular movement media phase using a first, reference print media. 
In some non-limiting example contexts, the printer apparatus100generates the media movement phase timestamp differential by performing the operations described using a free-hanging media. The reference media movement phase timestamp differential may be stored as a calibration reference associated with the corresponding media movement phase. The printer apparatus100may subsequently store some or all media movement phase timestamp differentials during operation of a particular media movement phase (e.g., each duration to move 1 printable portion of a print media, such as 1 label, and 1 gap). It will be appreciated that other reference distances may be used in other embodiments. In some embodiments, the stored media movement phase timestamp differentials and the reference media movement phase timestamp differential may subsequently be utilized to generate a print position compensation. The print position compensation may represent a time difference that is used to offset the beginning of printing during a print job. In this regard, the print position compensation defining a time offset may serve as a proxy for a distance offset that accounts for slippage in the print media to be printed. For example, in some embodiments, the printer apparatus100compares a reference media movement phase timestamp differential corresponding to a particular media movement phase in which the printer apparatus100is operating with a media movement phase timestamp differential associated with operation without free-hanging the same print media during the same media movement phase. In one example context, the printer apparatus100calibrates a reference movement timing embodying an output phase timestamp differential to move a particular reference distance (e.g., 1 label embodying a printable portion of a print media and 1 gap) during a media output phase using a free-hanging media. The printer apparatus100then stores all durations when moving the same media while printing. 
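As one non-limiting illustration of the calibration-and-comparison approach just described, the sketch below stores a reference differential per media movement phase (measured with free-hanging media) and compares later measurements against it, using the example percentage-of-time algorithm (X−Y)/Y×100% given in this disclosure. All names and values are hypothetical:

```python
# Sketch: a reference differential per media movement phase is calibrated
# with free-hanging media; operational measurements are compared against it.
reference_differentials = {}

def calibrate(phase, reference_differential):
    """Store the free-hanging-media reference for a media movement phase."""
    reference_differentials[phase] = reference_differential

def print_position_compensation_pct(phase, measured_differential):
    """Compensation as a percentage of time: (X - Y) / Y * 100 when the
    measured differential X exceeds the reference Y, else 0 (no slippage)."""
    y = reference_differentials[phase]
    x = measured_differential
    if x > y:
        return (x - y) / y * 100.0
    return 0.0

# Calibrate the media output phase, then compare a differential measured
# while printing the same media: 0.275 s against a 0.250 s reference,
# yielding roughly a 10% compensation.
calibrate("output", 0.250)
compensation = print_position_compensation_pct("output", 0.275)
```

A measurement at or below the reference yields zero compensation, consistent with the X>Y condition described in this disclosure.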
In a circumstance where the media movement phase timestamp differential during operation in the particular media movement phase exceeds the reference media movement phase timestamp differential corresponding to the same media movement phase, the printer apparatus100may generate a print position compensation to compensate for slippage causing a difference in the time. The print position compensation may embody a forward-moving (e.g., in accordance with an output media phase) time difference to be applied when determining when to begin printing as the print media moves. For example, in an example context where “X” is defined as a particular media movement phase timestamp differential corresponding to operation of the printer apparatus100in a particular media movement phase, and where “Y” is defined as a reference media movement timestamp differential corresponding to the particular media movement phase, the printer apparatus100may determine whether X>Y. In a circumstance where X>Y, the printer apparatus100may generate a print position compensation as described herein, for example based on the algorithm print position compensation=(X−Y)/Y*100% in percentage of time. It will be appreciated that other algorithms, for example as described herein, may similarly be used. In some such embodiments, the print position compensation embodies a forward-movement time difference to be applied only during a media output phase embodying a print operation. Example Processes Using Media Movement Phase Timestamp Differentials of the Disclosure Having described example systems, apparatuses, visualizations for edge position distance determination, processes for printer position compensation based at least in part on edge distance determinations, and visualizations of phase timestamp differential determinations, in accordance with the present disclosure, example processes using phase timestamp differential determinations will now be discussed. 
For example, example processes for generating print position compensation utilizing media movement phase timestamp differentials, and additional and/or alternative operations associated therewith, are further discussed. It will be appreciated that each of the flowcharts depicts an example computer-implemented process that may be performed by one or more of the apparatuses, systems, devices, and/or computer program products described herein, for example using one or more of the specially configured components thereof. The blocks depicted indicate operations of each process. Such operations may be performed in any of a number of ways, including, without limitation, in the order and manner as depicted and described herein. In some embodiments, one or more blocks of any of the processes described herein occur in-between one or more blocks of another process, before one or more blocks of another process, in parallel with one or more blocks of another process, and/or as a sub-process of a second process. Additionally or alternatively, any of the processes may include some or all operational steps described and/or depicted, including one or more optional blocks in some embodiments. With regard to the flowcharts illustrated herein, one or more of the depicted blocks may be optional in some, or all, embodiments of the disclosure. Optional blocks are depicted with broken (or "dashed") lines. Similarly, it should be appreciated that one or more of the operations of each flowchart may be combinable, replaceable, and/or otherwise altered as described herein. FIG.12illustrates a flowchart depicting example operations of an example process for generating and/or utilizing a print position compensation based at least in part on one or more determined phase timestamp differentials, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.12illustrates operations of an example process1200.
In some embodiments, the example process1200is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. Alternatively or additionally, in some embodiments, the process1200is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process1200is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular label printer. The process1200begins at operation1202. At operation1202, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to detect, via a sensor and during a media output phase, an output phase timestamp differential. The output phase timestamp differential is based at least in part on a first edge associated with a first printable portion of a print media. The output phase timestamp differential is further based at least in part on a second edge associated with a second printable portion of a print media. 
In some embodiments, the first edge and the second edge are the same edge type. Additionally or alternatively, in some embodiments, the second printable portion of the print media is subsequent to the first printable portion of the print media based at least in part on an output direction corresponding to the media output phase. In some embodiments, the output phase timestamp differential is determined based on the difference between a timestamp associated with detection of the first edge and a second timestamp associated with detection of a second edge. Non-limiting example processes for determining an output phase timestamp differential are described herein with respect toFIGS.13and14. At operation1204, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to detect, via the sensor and during a media retraction phase, a retraction phase timestamp differential. The retraction phase timestamp differential is based at least in part on a third edge associated with a third printable portion of the print media. The retraction phase timestamp differential is further based at least in part on a fourth edge associated with a fourth printable portion of the print media. In some embodiments, the first printable portion and the second printable portion as described with respect to operation1202correspond to the third printable portion and the fourth printable portion, such that edges of the same printable portions are utilized for determination of the output phase timestamp differential and the retraction phase timestamp differential. Additionally or alternatively, in some embodiments, the same edges of the same printable portions are processed for each media movement phase. 
In yet some other embodiments, opposite edges of the same printable portions of the print media are processed, such that the same type of edges is processed accounting for the change in movement direction. Non-limiting example processes for determining a retraction phase timestamp differential are described herein with respect toFIGS.13and14. At operation1206, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to generate a print position compensation. In some embodiments, the printer apparatus100generates the print position compensation based at least in part on the output phase timestamp differential and the retraction phase timestamp differential. In some embodiments, the print position compensation represents an offset at which printing should begin based at least in part on a difference between the output phase timestamp differential and the retraction phase timestamp differential. In this regard, the print position compensation may be generated based at least in part on the output phase timestamp differential and the retraction phase timestamp differential to account for a drift to a print position indicated by such media movement phase timestamp differentials. A non-limiting example process for generating a print position compensation based at least in part on the output phase timestamp differential and the retraction phase timestamp differential is described herein with respect toFIG.15. Optionally, in some embodiments, the printer apparatus100performs one or more operations based at least in part on the print position compensation. 
For example, in some embodiments, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to execute a boundary check based at least in part on the print position compensation, as described herein with respect to the operation608. Additionally or alternatively, optionally in some embodiments, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to initiate a print operation based at least in part on the print position compensation determined at operation1206. In some embodiments, the print position compensation is utilized to compensate for the forward movement by altering the timing at which a print head is activated to print on a particular printable portion of a print media. It will be appreciated that these optional operations may otherwise perform similarly to the operations described with respect to operations608and610respectively. Accordingly, in the interest of brevity and clarity of this description, repeated disclosure of such functions is omitted. FIG.13illustrates a flowchart depicting example operations of an example process for determining a media movement phase timestamp differential associated with a particular media movement phase, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.13depicts operations of an example process1300. In some embodiments, the process1300is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. 
Alternatively or additionally, in some embodiments, the process1300is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process1300is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular printer. The process1300begins at operation1302. In some embodiments, the process1300begins after one or more operations depicted and/or described with respect to any of the other processes described herein. For example, in some embodiments as depicted, the process1300begins after execution of operation1202and/or1204as depicted and described with respect to the process1200. In this regard, some or all of the process1300may replace or supplement one or more blocks depicted and/or described with respect to any of the other processes described herein, such as the operation1204and/or1206as depicted and described with respect to the process1200. Upon completion of the process1300, the flow of operations may terminate. 
Additionally or alternatively, as depicted, upon completion of the process1300, flow may return to one or more operations of another process, for example to the operation1204and/or1206as depicted and described with respect to the process1200. It should be appreciated that, in some embodiments, the process1300embodies a subprocess of one or more other process(es), such as the process600. At operation1302, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to identify, during a media movement phase, a first event timestamp associated with a first edge detection event associated with the first edge. In some embodiments, the sensor detects the first edge detection event, and identifies the first event timestamp representing the current time at which the first edge detection event was detected. Alternatively or additionally, in some embodiments, one or more other components of the printer apparatus100receives data indicating detection of the first edge detection event from the sensor, and identifies the first event timestamp representing the current time. In some embodiments for example, the sensor102, the print compensation circuitry114, and/or the processor108maintains access to a current timestamp, such that the current timestamp can be retrieved and stored as the first event timestamp upon detection of the first edge detection event associated with the first edge. At operation1304, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to identify, during the media movement phase, a second event timestamp associated with a second edge detection event associated with the second edge. 
In this regard, the second event timestamp may represent a timestamp at which a subsequent edge of a particular edge type was detected for a subsequent printable portion on the print media. In some embodiments, the sensor similarly detects the second edge detection event, and identifies the second event timestamp representing the current time at which the second edge detection event was detected. Alternatively or additionally, in some embodiments, the one or more other components of the printer apparatus100receives data indicating detection of the second edge detection event from the sensor, and identifies the second event timestamp representing the current time. At operation1306, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine the media movement phase timestamp differential. The media movement phase timestamp differential is determined based at least in part on the first event timestamp and the second event timestamp. In some embodiments, for example, the media movement phase timestamp differential for a particular media movement phase is determined based on the difference between the first event timestamp and the second event timestamp. In this regard, the media movement phase timestamp differential may indicate the time difference between a first edge crossing and/or otherwise being detected by the sensor, and a second edge crossing and/or otherwise being detected by the sensor. It will be appreciated that the media movement phase timestamp differential determined may correspond particularly to the current media movement phase that the printer apparatus100is set to during identification of the first event timestamp and the second event timestamp (e.g., a media output phase or a media retraction phase). 
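As one non-limiting illustration of operations 1302 through 1306, the sketch below captures a clock reading on each edge detection event and takes the difference once the second event arrives. Here `time.monotonic()` merely stands in for the printer's internal clock, and the class and method names are hypothetical:

```python
import time

# Sketch: record the current time when each edge detection event arrives
# (operations 1302 and 1304), then compute the media movement phase
# timestamp differential as their difference (operation 1306).
class PhaseDifferentialTracker:
    def __init__(self):
        self._first_ts = None
        self._differential = None

    def on_edge_detected(self):
        now = time.monotonic()
        if self._first_ts is None:
            self._first_ts = now                       # first event timestamp
        else:
            self._differential = now - self._first_ts  # second minus first

    @property
    def differential(self):
        """None until two edge detection events have been observed."""
        return self._differential
```

A separate tracker instance could be kept per media movement phase (media output phase or media retraction phase), matching the per-phase differentials described herein.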
FIG.14illustrates a flowchart depicting example operations of an example process for generating a media movement phase timestamp differential associated with a media movement phase, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.14depicts operations of an example process1400. In some embodiments, the process1400is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. Alternatively or additionally, in some embodiments, the process1400is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process1400is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular printer. The process1400begins at operation1402. In some embodiments, the process1400begins after one or more operations depicted and/or described with respect to any of the other processes described herein. For example, in some embodiments as depicted, the process1400begins after execution of operation1202and/or1204as depicted and described with respect to the process1200. 
In this regard, some or all of the process1400may replace or supplement one or more blocks depicted and/or described with respect to any of the other processes described herein, such as the operation1204and/or1206as depicted and described with respect to the process1200. Upon completion of the process1400, the flow of operations may terminate. Additionally or alternatively, as depicted, upon completion of the process1400, flow may return to one or more operations of another process, for example to the operation1204and/or1206as depicted and described with respect to the process1200. It should be appreciated that, in some embodiments, the process1400embodies a subprocess of one or more other process(es), such as the process600. At operation1402, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to detect, via the sensor, a first edge detection event during a media movement phase. In some embodiments, the printer apparatus100detects an edge detection event based at least in part on a change in a value represented in the sensor output to and/or from a baseline value (e.g., indicating a leading and/or trailing edge, respectively, in accordance with a particular movement direction). In this regard, the sensor and/or another component of the printer apparatus100may monitor and/or otherwise process the sensor output to detect a particular edge detection event based at least in part on such changes in the sensor output. Additionally, in some embodiments, the printer apparatus100determines an edge type associated with an edge detected via the first edge detection event, for example based on the changes in the sensor output corresponding to the first edge detection event. 
At operation1404, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine, via the sensor, a first event timestamp associated with the first edge detection event. In some embodiments, the sensor outputs the first event timestamp representing the time the first edge detection event was detected. Alternatively or additionally, in some embodiments, in circumstances where the printer apparatus100detects the first edge detection event the printer apparatus100determines the first event timestamp associated with the first edge detection event embodying the time at which the change in the sensor data occurred and/or was captured by the sensor. In some embodiments, for example, the printer apparatus100maintains the sensor output associated with a timestamp at which the sensor output was captured by the sensor and/or received by other components of the printer apparatus100for processing. At operation1406, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to detect, via the sensor, a second edge detection event during the media movement phase. The second edge detection event may correspond to detection of a second edge associated with the same edge type as the first edge detected with respect to the first edge detection event. For example, the second edge detection event may represent detection of the same edge type for a second printable portion of a particular print media, such as the subsequent printable portion of a print media after a first printable portion associated with the first edge. 
It will be appreciated that the second edge detection event may similarly be detected based at least in part on a change in value represented in the sensor output to and/or from a baseline value, as described with respect to operation1402. At operation1408, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine, via the sensor, a second event timestamp associated with the second edge detection event. The second event timestamp may similarly represent the time the second edge detection event was detected. It will be appreciated that the second event timestamp associated with the second edge detection event may be determined in a manner similar to that described herein with respect to operation1404. At operation1410, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to generate the media movement phase timestamp differential associated with the media movement phase. In some embodiments, the media movement phase timestamp differential associated with the media movement phase is generated by subtracting the first event timestamp associated with the media movement phase from the second event timestamp associated with the media movement phase. In this regard, it will be appreciated that the media movement phase timestamp differential represents the difference in the timestamps at which the first edge detection event and the second edge detection event were detected for the particular media movement phase. Such operations may be repeated for any number of media movement phases (e.g., for both and/or either of a media output phase and a media retraction phase).
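The edge-event handling of operations 1402 through 1410 can be approximated as threshold crossings over a sampled sensor output relative to a baseline value. The following sketch is purely illustrative: the baseline, threshold, and sample stream are assumptions for clarity, not the disclosed sensor behavior:

```python
# Sketch: derive edge detection events from (timestamp, value) sensor
# samples. A rise above the baseline-plus-threshold is treated as one edge
# type and a fall back toward baseline as the other edge type.
def edge_event_timestamps(samples, baseline=0.0, threshold=0.5):
    """Return (timestamp, edge_type) pairs for each threshold crossing."""
    events = []
    above = False
    for ts, value in samples:
        is_above = (value - baseline) > threshold
        if is_above and not above:
            events.append((ts, "rising"))
        elif above and not is_above:
            events.append((ts, "falling"))
        above = is_above
    return events

# Hypothetical sample stream: edges of the same type ("rising") occur at
# t = 0.1 s and t = 0.4 s, so the differential between them is 0.3 s.
samples = [(0.0, 0.0), (0.1, 1.0), (0.2, 1.0), (0.3, 0.0), (0.4, 1.0)]
events = edge_event_timestamps(samples)
```

Pairing successive events of the same edge type then yields the media movement phase timestamp differential described at operation 1410.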
FIG.15illustrates a flowchart depicting example operations of an example process for generating a print position compensation based at least in part on a timestamp-based distance value, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.15depicts operations of an example process1500. In some embodiments, the process1500is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. Alternatively or additionally, in some embodiments, the process1500is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process1500is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular printer. The process1500begins at operation1502. In some embodiments, the process1500begins after one or more operations depicted and/or described with respect to any of the other processes described herein. For example, in some embodiments as depicted, the process1500begins after execution of operation1204as depicted and described with respect to the process1200. 
In this regard, some or all of the process1500may replace or supplement one or more blocks depicted and/or described with respect to any of the other processes described herein, such as the operation1206as depicted and described with respect to the process1200. Upon completion of the process1500, the flow of operations may terminate. Additionally or alternatively, as depicted, upon completion of the process1500, flow may return to one or more operations of another process, for example to the operation1204and/or1206as depicted and described with respect to the process1200. It should be appreciated that, in some embodiments, the process1500embodies a subprocess of one or more other process(es), such as the process1200. At operation1502, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to generate a timestamp-based distance value. In some embodiments, the timestamp-based distance value is generated by subtracting the retraction phase timestamp differential from the output phase timestamp differential. The timestamp-based distance value represents a difference in the time a particular edge took to travel a particular distance between the media output phase and the media retraction phase. It will be appreciated that, in other embodiments, the output phase timestamp differential is subtracted from the retraction phase timestamp differential to generate the timestamp-based distance value. At operation1504, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to generate the print position compensation by multiplying the timestamp-based distance value with a print speed.
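Operations1502and1504can be sketched as below; the units and example numbers are illustrative assumptions (timestamp differentials in milliseconds, print speed in dot lines per millisecond), not values taken from the disclosure.

```python
def print_position_compensation(output_phase_diff_ms, retraction_phase_diff_ms,
                                print_speed_dot_lines_per_ms):
    # Operation 1502: timestamp-based distance value, i.e. the difference
    # in the time the same edge took to travel the same distance during
    # the media output phase versus the media retraction phase.
    timestamp_based_distance = output_phase_diff_ms - retraction_phase_diff_ms
    # Operation 1504: multiplying by the print speed converts the time
    # difference into a distance offset (here, in dot lines).
    return timestamp_based_distance * print_speed_dot_lines_per_ms

# Example: output took 300 ms, retraction 250 ms; at 2 dot lines/ms the
# compensation is (300 - 250) * 2 = 100 dot lines.
print(print_position_compensation(300, 250, 2))  # 100
```

Reversing the subtraction order, as contemplated in other embodiments, simply flips the sign of the resulting offset.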
The print speed may represent a speed at which a print media moves within the printer apparatus100during printing and/or output generally. In some embodiments, the printer apparatus100maintains and/or otherwise stores a known print speed, and retrieves the known print speed for processing. Alternatively or additionally, in some embodiments, the printer apparatus100determines the print speed by processing data output by the sensor (e.g., a number of dot lines over a particular change in timestamp). In some embodiments, the print speed is based at least in part on a determinable step size (e.g., one dot line) that the sensor measures over a particular timestamp interval. FIG.16illustrates a flowchart depicting example operations of an example process for determining a media movement phase timestamp differential based on edge and timestamp detection and storage via a sensor, in accordance with at least some example embodiments of the present disclosure. Specifically,FIG.16depicts operations of an example process1600. In some embodiments, the process1600is embodied by computer program code stored on a non-transitory computer-readable storage medium of a computer program product configured for execution to perform the process as depicted and described. Alternatively or additionally, in some embodiments, the process1600is performed by one or more specially configured computing devices, such as the printer apparatus100alone or in communication with one or more other component(s), device(s), system(s), and/or the like. In this regard, in some such embodiments, the printer apparatus100is specially configured by computer-coded instructions (e.g., computer program instructions) stored thereon, for example in the memory112and/or another component depicted and/or described herein and/or otherwise accessible to the printer apparatus100, for performing the operations as depicted and described. 
In some embodiments, the printer apparatus100is in communication with one or more external apparatus(es), system(s), device(s), and/or the like, to perform one or more of the operations as depicted and described. For purposes of simplifying the description, the process1600is described as performed by and from the perspective of the printer apparatus100, for example embodying a particular printer. The process1600begins at operation1602. In some embodiments, the process1600begins after one or more operations depicted and/or described with respect to any of the other processes described herein. For example, in some embodiments as depicted, the process1600begins after execution of operation1202and/or1204as depicted and described with respect to the process1200. In this regard, some or all of the process1600may replace or supplement one or more blocks depicted and/or described with respect to any of the other processes described herein, such as the operation1204and/or1206as depicted and described with respect to the process1200. Upon completion of the process1600, the flow of operations may terminate. Additionally or alternatively, as depicted, upon completion of the process1600, flow may return to one or more operations of another process, for example to the operation1204and/or1206as depicted and described with respect to the process1600. It should be appreciated that, in some embodiments, the process1600embodies a subprocess of one or more other process(es), such as the process1200. At operation1602, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to progress a print media by one dot line. The printer apparatus100may progress the print media by one dot line in a particular direction consistent with a current media movement phase. 
For example, the printer apparatus100may progress the print media in a first direction during a media output phase (e.g., towards output of the print media), and progress the print media in a second direction during a media retraction phase (e.g., towards retraction of the print media). In some embodiments, to progress the print media, the printer apparatus100activates the motor110that applies a predetermined force to the print media, for example via a platen roller of the printer apparatus100. At operation1604, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine sensor data corresponding to the sensor from an analog-to-digital converter associated with the sensor. In this regard, the analog-to-digital converter associated with the sensor may be used to convert analog signals captured by the sensor to digital data output representing such analog signals. For example, the sensor data may represent data values generated from light rays interacting with the sensor through a print media, with different voltages based at least in part on the intensity of the light rays that reach the sensor. At operation1606, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine if the sensor data indicates an edge of a particular edge type. For example, in some embodiments, the printer apparatus100processes the sensor data to detect an edge detection event corresponding to a particular edge type (e.g., a leading edge or a trailing edge corresponding to a movement direction for a current media movement phase).
The edge detection event may be detected based at least in part on the current sensor data and/or previous sensor data output at one or more previous timestamps. For example, the printer apparatus100may process the sensor data and previous sensor data to detect changes in the sensor data that are indicative of an edge of a particular edge type (e.g., as described herein with respect toFIG.3). In some embodiments, the particular edge type to be determined is predetermined and/or otherwise set based at least in part on a configuration of the printer apparatus100. For example, in some embodiments, the printer apparatus100processes the sensor data to determine if the sensor data indicates a leading edge of a printable portion of a print media. In circumstances where the printer apparatus100determines the sensor data does not indicate an edge of a particular edge type (e.g., the sensor data does not indicate an edge or indicates an edge of the incorrect edge type), flow returns to operation1602. In this regard, the flow may proceed to continuously progress the print media while searching for the next edge of a particular edge type. In circumstances where the printer apparatus100determines the sensor data indicates an edge of a particular edge type, flow proceeds to operation1608. At operation1608, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to store data indicating the edge and/or the timestamp at which the edge was detected. In some embodiments, the printer apparatus100stores data embodying, associated with, and/or otherwise indicating whether the edge is the first detected edge of the particular edge type or second. 
Additionally or alternatively, in some embodiments, the printer apparatus100stores data embodying, associated with, and/or otherwise indicating the timestamp at which the edge was detected. In some embodiments, a timestamp is determined based at least in part on the timestamp at which the sensor data was captured. The timestamp may be received from the sensor, determined by a processor108of the printer apparatus100, and/or the like. In some embodiments, the printer apparatus100stores the data indicating the edge and/or the timestamp in a cache, memory (e.g., the memory112), permanent storage, and/or the like. At operation1610, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine whether the detected edge is the second edge of a particular edge type. In some embodiments, the printer apparatus100queries and/or otherwise checks for stored edge and/or timestamp data to determine whether data associated with another edge had been previously detected and/or stored. The printer apparatus100may determine the edge is a second edge in circumstances where the printer apparatus100retrieves and/or identifies previously stored data indicating a detected edge and/or timestamp associated therewith. In circumstances where the printer apparatus100determines the detected edge is not the second edge of the particular edge type, flow returns to operation1602. In this regard, the printer apparatus100continues to progress the print media until the second edge of a particular edge type is detected. For example, the second edge of the particular edge type may indicate that the print media has moved a particular distance (e.g., corresponding to a width of a printable portion of the print media and a gap between a first printable portion and a second, next printable portion) of the print media. 
In circumstances where the printer apparatus100determines the detected edge is the second edge of the particular edge type, flow continues to operation1612. At operation1612, the printer apparatus100includes means, such as the sensor102, the print compensation circuitry114, the motor110, the light source106, the print mechanisms116, the processor108, and/or the like, or a combination thereof, to determine a media movement phase timestamp differential from a first timestamp associated with detection of a first edge and a second timestamp associated with detection of a second edge. In some embodiments, the media movement phase timestamp differential is determined by subtracting the timestamp representing a time at which the second edge of the particular edge type was detected from the timestamp representing a time at which the first edge of the particular edge type was detected. Alternatively or additionally, in some embodiments, the media movement phase timestamp differential is determined by subtracting the timestamp representing a time at which the first edge of the particular edge type was detected from the timestamp representing a time at which the second edge of the particular edge type was detected. In this regard, the media movement phase timestamp differential represents the amount of time that passed during movement of the print media over the distance between the first detected edge of the particular edge type and the second detected edge of the particular edge type. The media movement phase timestamp differential may correspond to the particular media movement phase to which the printer apparatus100is currently set.
For example, the media movement phase timestamp differential may represent an output phase timestamp differential corresponding to a media output phase in circumstances where the printer apparatus100is currently set to the media output phase, and the media movement phase timestamp differential may represent a retraction phase timestamp differential corresponding to a media retraction phase in a circumstance where the printer apparatus100is currently set to the media retraction phase. In some embodiments, the printer apparatus100temporarily or permanently stores the media movement phase timestamp differential corresponding to the media movement phase to which the printer apparatus is currently set. In some embodiments, the media movement phase timestamp differential may be subsequently processed for any of a myriad of purposes. For example, in some embodiments, the printer apparatus100performs the process1600to generate a media movement phase timestamp differential embodying a media output phase timestamp differential corresponding to a media output phase, and the printer apparatus100similarly performs the process1600to generate a media movement phase timestamp differential embodying a media retraction phase timestamp differential corresponding to a media retraction phase. Such resulting media movement phase timestamp differentials may subsequently be processed to determine a print position compensation for further processing, for example as described herein with respect toFIGS.12and/or15, based at least in part on a print speed. The resulting print position compensation may be utilized to offset the starting position at which a print job is initiated for one or more printable positions on a print media (e.g., for printing on one or more labels on the print media).
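The edge-seeking loop of the process1600can be sketched as follows; the four callables stand in for the hardware interactions (motor stepping, ADC read, edge classification, timestamping) and carry hypothetical names, not APIs from the disclosure.

```python
def measure_phase_differential(step_one_dot_line, read_sensor_adc,
                               indicates_target_edge, current_timestamp):
    """Progress the print media one dot line at a time, detect two
    consecutive edges of the configured edge type, and return the
    media movement phase timestamp differential."""
    stored_timestamps = []
    while len(stored_timestamps) < 2:
        step_one_dot_line()                    # operation 1602
        data = read_sensor_adc()               # operation 1604
        if indicates_target_edge(data):        # operation 1606
            stored_timestamps.append(current_timestamp())  # operation 1608
        # operation 1610: loop again until the second edge is stored
    first_ts, second_ts = stored_timestamps
    return second_ts - first_ts                # operation 1612

# Simulated run: the sensor flags an edge on the third and sixth dot
# lines, so the two edges are three dot-line steps (time units) apart.
clock = {"t": 0}
readings = iter([0, 0, 1, 0, 0, 1])
differential = measure_phase_differential(
    step_one_dot_line=lambda: clock.__setitem__("t", clock["t"] + 1),
    read_sensor_adc=lambda: next(readings),
    indicates_target_edge=lambda d: d == 1,
    current_timestamp=lambda: clock["t"],
)
print(differential)  # 3
```

Running the sketch once per media movement phase yields the output phase and retraction phase timestamp differentials consumed by the compensation computation.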
Conclusion Although an example processing system has been described above, implementations of the subject matter and the functional operations described herein can be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter and the operations described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described herein can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, information/data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information/data for transmission to suitable receiver apparatus for execution by an information/data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices). 
The operations described herein can be implemented as operations performed by an information/data processing apparatus on information/data stored on one or more computer-readable storage devices or received from other sources. The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a repository management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or information/data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). 
A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input information/data and generating output. Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and information/data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive information/data from or transfer information/data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Devices suitable for storing computer program instructions and information/data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any disclosures or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular disclosures. 
Certain features that are described herein in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
11861436

DETAILED DESCRIPTION OF EMBODIMENTS First, a game token coin that is used in a management system for a game token coin according to an embodiment of the present disclosure will be described.FIG.1shows a game token coin1that is used in the management system. InFIG.1, in the game token coin1, an RFID tag2, on which various items of information can be stored, is embedded. The RFID tag2includes a data non-rewritable region21and a data rewritable region22. In the data non-rewritable region21, as constant information (or fixed information)3, information that is not changed during the use of the game token coin1, information that cannot be changed, or information that does not have to be changed is stored. More specifically, as the constant information3, production information, product information, casino information, value information, serial numbers, and any other information of the game token coin1are stored. The production information includes a date and time when the game token coin is produced, a production machine, and the like. The product information includes information indicating a chip for the VIP area of a casino, for example, information indicating a type of chip (e.g., information indicating that the game token coin1is a rolling chip or a cash chip), and any other information. The data non-rewritable region21may be a region in which no data write is functionally allowed due to the specifications of the RFID tag, or may be a region in which necessary information is written in the region in which a write is allowed and then the region is locked such that no data write is allowed. The data rewritable region22stores information, as variable information4, that changes during the use of the game token coin1.
For example, as shown inFIG.1, as the variable information4, location relating information220is stored, including information221relating to a date and time, information222relating to a place and an event, information223relating to an owner, and any other information. The constant information3and the variable information4may be encrypted in order to avoid unauthorized information read or write by an unauthorized person. The constant information3and the variable information4may be stored as metadata. FIG.2shows an example recording method for data of the variable information4of the game token coin1. As shown inFIG.2, a configuration is provided in which the location relating information220composed of the date-and-time information221, the place-and-event information222, and the owner information223is formed in one block, the block information is connected in a chain, and thus the history of the location relating information220of the game token coin1can be determined. FIG.2shows an example of the variable information4stored on the game token coin1, from which the movement of the game token coin1can be determined as follows. At 16:02 on Jan. 28, 2019, Player-A exchanges cash for the game token coin1at a cage5. At 16:15 on January 28, Player-A makes a bet at table No. 325 using the game token coin1. At 16:43 on January 28, Player-B receives the game token coin1as redemption money for winning a bet at table No. 325. At 17:01 on January 28, Player-B leaves the casino with the game token coin1. The variable information4may be configured in which the latest information alone is stored as shown inFIG.2, or may be configured in which all sets of the location relating information written in the past are stored. In a plurality of sets of the location relating information, some sets of the location relating information may be selected and stored.
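The chained block structure ofFIG.2can be modeled in a few lines; the class and field names below are illustrative assumptions used to reproduce the example movement of the game token coin1described above.

```python
from dataclasses import dataclass, field

@dataclass
class LocationRecord:
    """One block of location relating information (220): date-and-time
    information (221), place-and-event information (222), and owner
    information (223)."""
    date_time: str
    place_event: str
    owner: str

@dataclass
class RewritableRegion:
    """Minimal model of the data rewritable region (22) holding the
    chain of location blocks."""
    chain: list = field(default_factory=list)

    def append(self, record: LocationRecord) -> None:
        self.chain.append(record)

    def latest(self) -> LocationRecord:
        return self.chain[-1]

# Reproducing the movement described above:
region = RewritableRegion()
region.append(LocationRecord("2019-01-28 16:02", "cage: cash exchanged", "Player-A"))
region.append(LocationRecord("2019-01-28 16:15", "table No. 325: bet", "Player-A"))
region.append(LocationRecord("2019-01-28 16:43", "table No. 325: redemption", "Player-B"))
region.append(LocationRecord("2019-01-28 17:01", "casino: leaving", "Player-B"))
print(region.latest().owner)  # Player-B
```

Whether the region keeps only the latest block or the full chain corresponds to the alternative storage configurations described above; the model here keeps the full chain.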
In the case in which some sets of the location relating information are selected and stored, the selected sets of the location relating information may include at least the latest location relating information220among all the sets of the location relating information220. With the configuration above, from the variable information4of a game token coin1, the history of use or transfer of the game token coin1can be known. That is, a kind of traceability information of the game token coin1is written on the game token coin1itself, and the game token coin1has a configuration in which its history is detectable. Next, a management system of the game token coin1according to the present embodiment will be described.FIG.3shows the outline of the overall management system. First, at a factory6, for the game token coin1, a writer14writes the variable information4including information about the completion of production or the factory shipment to the data rewritable region22of the RFID tag2as the location relating information220. At a backyard7of the casino, an acceptance process of the game token coin1shipped from the factory6is performed. At the backyard7, a reader13reads the variable information4stored in the data rewritable region22of the game token coin1, and a management controller15determines whether the location relating information220indicating that information is written at the factory6as information that has to be written is written as the latest variable information4based on a read result. When no write record from the factory6is available, an error signal is generated, as the game token coin is suspected as a fraudulent game token coin. Thus, the casino side rejects the acceptance of that game token coin, or asks the factory side to make an investigation. When the write record has no problem, the writer14writes the location relating information220indicating the backyard7in the data rewritable region22of the RFID tag2.
In writing, location information indicating the backyard may be described in addition to location information indicating the factory, or location information indicating the backyard may be described with location information indicating the factory deleted. The read and write of the game token coin at the backyard7described above may be performed in combination with the general validation work or activation work of the game token coin. The game token coin1on which acceptance tests are completed at the backyard7is carried to a warehouse8or a cage9of the casino. In the warehouse8or the cage9, the reader13reads the variable information4stored in the data rewritable region22of the game token coin1, and the management controller15determines whether the location relating information220indicating that information is written at the backyard7as information that has to be written is written as the latest variable information4based on a read result. Tests may be performed together with a determination of whether information indicating the factory6is written in the history of the variable information4. Similarly to the tests at the backyard7as described above, the management controller15determines whether the history of the location information has no abnormality. When the read result has no problem, the writer14writes the location relating information220indicating the warehouse8or the cage9in the data rewritable region22. A player exchanges cash for the game token coin1at the cage9. When the player purchases a game token coin1, the writer14at the cage9writes information indicating time at which the game token coin is purchased and information indicating that the owner of the game token coin is changed from the casino to a customer as the variable information4. A configuration may be provided in which the player is identified and recorded by a face recognition technique, or an ID card, such as a member's card of the casino, or an Individual Number Card.
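The acceptance test performed at the backyard7, the warehouse8, or the cage9reduces to checking that the latest variable information names the expected previous location; a minimal sketch, assuming a simple dict record layout that is not specified in the disclosure:

```python
def acceptance_check(latest_record, required_place):
    """Return True when the latest location relating information was
    written at the required previous location (e.g. the factory when
    checking at the backyard), and False when an error signal should
    be generated for a suspected fraudulent game token coin."""
    return latest_record is not None and latest_record.get("place") == required_place

# A coin arriving at the backyard must carry a factory write record:
print(acceptance_check({"place": "factory", "time": "2019-01-20 09:00"}, "factory"))  # True
# A coin with no factory record (or none at all) is rejected:
print(acceptance_check(None, "factory"))  # False
```

The same check is reused down the chain of custody, substituting the backyard as the required previous location when testing at the warehouse or the cage.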
The player makes a bet by placing the purchased game token coin1on a bet area11of a game table. The reader13reads the location relating information220composed of the date-and-time information221, the place-and-event information222, and the owner information223from the RFID tag2of the bet game token coin1, and the management controller15determines whether the location relating information220has any abnormality. For example, the management controller15can determine an abnormality when a certain period has elapsed since the lastly written use record at a game table or conversion record at the cage, or when the owner information223stored on the game token coin1differs from the owner information223identified by face recognition or an ID card. When the read result has no problem, the writer14additionally writes the location relating information220in the data rewritable region22of the bet game token coin1. The location relating information220includes the date-and-time information221, the place-and-event information222, and the owner information223. At the table, the writer14writes the variable information4including, for example, information on the player position number at which the game token coin1is placed and information on the player. The information on the player may be identified by an ID card or face recognition. Generally, since a plurality of game token coins1is stacked and placed in the bet area11, the game token coins1are collectively written in the stacked state. When the player brings the game token coin1out of the casino floor, a leaving process is performed at a gate10. On leaving, the reader13reads the location relating information220, and the management controller15determines whether the variable information4has any abnormality.
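The two abnormality tests described for the bet area (a stale last write, or a mismatch between the stored owner and the identified bettor) might be sketched as below. The field names and the 24-hour staleness threshold are assumptions for illustration, not values taken from the patent.

```python
from datetime import datetime, timedelta

STALE_LIMIT = timedelta(hours=24)  # illustrative "certain period", not from the source

def bet_is_abnormal(last_record, bettor_id, now):
    """Abnormal if too long has passed since the last table/cage write,
    or if the stored owner differs from the bettor identified by
    face recognition or an ID card."""
    if now - last_record["time"] > STALE_LIMIT:
        return True
    if last_record["owner"] != bettor_id:
        return True
    return False

rec = {"time": datetime(2019, 1, 28, 16, 27), "place": "cage", "owner": "Player-C"}
```

A bet placed minutes after purchase by the same player passes; a different bettor, or a days-old last record, trips the check.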
At the gate10, the writer14writes the variable information4including the place-and-event information222indicating leaving and the owner information223. When a player enters the casino floor with a game token coin1that has been brought out from the casino floor, the reader13reads the location relating information220, and the management controller15determines whether the variable information4has any abnormality. For example, for a game token coin1that has once been brought out from the casino, if the owner recorded on leaving the casino differs from the owner on entering the casino, it can be determined that the game token coin was transferred outside the casino. When the determination based on the read result by the reader13is abnormal, the management controller15described above may output an alarm to the cage9or the gate10. When the owner of the game token coin1converts the game token coin1into money at the cage9, or when the game token coin1leaves through the gate10, the conversion of the game token coin1into money can be refused or the owner can be checked separately, based on the alarm outputted from the management controller15. In the case in which the determined result is abnormal, the management controller15may determine a response, such as sending a warning to dealers at the game tables to interrupt games, sending a warning to a pit where a manager determines whether to interrupt or continue games, or exchanging the game token coin1. Alternatively, a configuration may be provided in which the management controller15is connected to a total management controller18of the casino and the management controller15sends a warning to the total management controller18. Similarly, when the game token coin1is converted into money at the cage, the reader13may read the variable information4, and the management controller15may determine an abnormality.
More specifically, the management controller15may determine an abnormality when, for example, a certain period has elapsed since the lastly written use record at a game table or conversion record at the cage, or when a person different from the owner indicated by the owner information223stored on the game token coin1attempts to convert the game token coin1into money. When an abnormality is determined, the management controller15may output a warning and refuse the conversion into money of the game token coin1for which the abnormality is determined. In the description above, a non-limiting example embodiment is described in which the reader13reads information, the management controller15makes a determination, and then the writer14writes information. However, a configuration may be provided in which information is simultaneously read and written. In the following, the details of an embodiment to which the present disclosure is applied will be described. At the factory6, on the completion of production or on shipment, the RFID tag2is read, written, or read and written. The production information or product information is written as the constant information3, the RFID tag2is locked as necessary so as not to be rewritten, and the variable information4is written. At the backyard7, the game token coin1shipped from the factory6is accepted, and the RFID tag2is read, written, or read and written when the game token coin1is activated as a usable game token coin1. At the warehouse8, the RFID tag2is read, written, or read and written when the game token coin1is transferred from the warehouse8to the cage9or from the cage9to the warehouse8. Alternatively, for the game token coin1kept in custody at the warehouse8, the RFID tag2is read, written, or read and written at certain time intervals or at predetermined timings.
At the cage9, the RFID tag2is read, written, or read and written on the transfer from or to the warehouse8and on the conversion of the game token coin1into money. In the case in which the game token coin1is converted into money at the cage9, the place-and-event information222indicating conversion into money and information on the player who converts the game token coin1into money are written in the variable information4as the owner information223. The owner information223can be acquired from the casino ID card of the player, a face recognition system, a credit card, or any other device. At the gate10, the RFID tag2is read, written, or read and written when the player leaves the casino. To all the game token coins1brought out from the casino when the player leaves, the place-and-event information222indicating that the game token coins1are being brought out and the owner information223indicating the player who brings out the game token coins1are written as the variable information4for registration. Also when the player enters the casino, the place-and-event information222indicating that the game token coins1are being brought in and the owner information223indicating the player who brings in the game token coins1are similarly written as the variable information4, and the game token coins1are registered. In the bet area11, the RFID tag2of the game token coin1placed in the bet area11by the player who joins betting is read, written, or read and written. The RFID tag2of the game token coin1placed by the dealer in the bet area11as redemption money for the player is also read, written, or read and written. At a chip tray12, the RFID tag2of the game token coin1collected into the chip tray and of the game token coin1kept on the chip tray is read, written, or read and written. The game table may further include a payment area, in which the RFID tag2of the game token coin1placed therein by the dealer as redemption money for the player may be read, written, or read and written.
As described above, at the game table, the variable information4(i.e., the location relating information220) can be updated at the timing at which the game token coin1is bet (bet timing), the timing at which the game token coin1is collected by the dealer (collecting timing), and the timing at which the dealer pays the game token coin1to the player (payment timing). The variable information4may be updated at all of the bet timing, the collecting timing, and the payment timing, or at only some of these timings. FIG.7Ashows an example in which the variable information4is updated at all of the bet timing, the collecting timing, and the payment timing. With the variable information4updated at these timings, the movement of the game token coin1can be determined as follows. At 16:27 on Jan. 28, 2019, Player-C exchanges cash for a game token coin1at the cage5(Update U701). At 16:35 on January 28, Player-C bets the game token coin1at player position No. 1 at table No. 321 (Update U702). Player-C loses the game, and at 16:38 on January 28, the game token coin1is collected on the chip tray12(Update U703). After that, Player-D wins the game at player position No. 3 at table No. 321, and at 16:52 on January 28, the dealer pays the game token coin1to Player-D (Update U704). According to this example, since information on the dealer and any player can be written as the owner information223in the variable information4, the history of the owners can be accurately determined. Specifically, according to the present example, the actual owner can be kept matched with the recorded owner information at all times. Note that in the betting state, the owner information223may be information on the player who makes the bet, may be left unchanged, or may be information indicating that a bet is made. FIG.7Bshows an example in which the variable information4is updated at the bet timing and the collecting timing.
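As an illustration of how such an update log supports tracing, the FIG.7A sequence (Updates U701 to U704) can be replayed as a list of records and folded into the chain of owners. The tuple layout and place labels are assumptions made for this sketch.

```python
# FIG. 7A update log (U701-U704) rebuilt as (time, place, owner) events.
updates = [
    ("2019-01-28 16:27", "cage",          "Player-C"),  # U701: purchase at the cage
    ("2019-01-28 16:35", "table321-pos1", "Player-C"),  # U702: bet at position No. 1
    ("2019-01-28 16:38", "chip-tray",     "casino"),    # U703: collected by the dealer
    ("2019-01-28 16:52", "table321-pos3", "Player-D"),  # U704: paid to the winner
]

def owner_history(log):
    """Collapse the chronological update log into the ordered list of
    distinct successive owners of the coin."""
    owners = []
    for _, _, owner in log:
        if not owners or owners[-1] != owner:
            owners.append(owner)
    return owners
```

Folding the four updates yields the Player-C → casino → Player-D ownership chain that the text derives by inspection.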
With the variable information4updated at these timings, the movement of the game token coin1can be determined as follows. At 16:28 on Jan. 28, 2019, Player-E exchanges cash for a game token coin1at the cage5(Update U705). At 16:36 on January 28, Player-E bets the game token coin1at player position No. 2 at table No. 322 (Update U706). Player-E loses the game, and at 16:39 on January 28, the game token coin1is collected on the chip tray12(Update U707). The game token coin1is paid from the chip tray12to Player-F, who wins the game, and at 16:53 on January 28, Player-F bets the game token coin1at player position No. 4 at table No. 322 (Update U708). According to this example, it can be determined from which player the game token coin1on the chip tray12(i.e., owned by the casino) was collected. In addition, it can be confirmed that the casino surely collects the game token coin1from the player. FIG.7Cshows an example in which the variable information4is updated at the bet timing and the payment timing. With the variable information4updated at these timings, the movement of the game token coin1can be determined as follows. At 16:29 on Jan. 28, 2019, Player-G exchanges cash for a game token coin1at the cage5(Update U709). At 16:37 on January 28, Player-G bets the game token coin1at player position No. 3 at table No. 323 (Update U710). After Player-G loses the game and the game token coin1is collected on the chip tray12, at 16:53 on January 28, the dealer pays the game token coin1to Player-H, who wins the game at player position No. 5 at table No. 323 (Update U711). According to this example, the betting by the players and the payment (redemption) to the players can be determined. For example, if Player-H bets the game token coin after Player-G's bet without any payment to Player-H being recorded, it can be determined that the game token coin was delivered directly from Player-G to Player-H.
FIG.7Dshows an example in which the variable information4is updated at the collecting timing and the payment timing. With the variable information4updated at these timings, the movement of the game token coin1can be determined as follows. At 16:29 on Jan. 28, 2019, Player-I exchanges cash for a game token coin1at the cage5(Update U712). Player-I bets the game token coin1at player position No. 4 at table No. 324 and loses the game, and at 16:40 on January 28, the game token coin1is collected on the chip tray12(Update U713). At 16:54 on January 28, the dealer pays the game token coin1to Player-J, who wins the game at player position No. 6 at table No. 324 (Update U714). Also according to this example, since information on the dealer and any player can be written as the owner information223in the variable information4, the history of the owners can be accurately determined. Specifically, according to the present example, the actual owner can be kept matched with the recorded owner information at all times. FIG.7Eshows an example in which the variable information4is updated at the bet timing. With the variable information4updated at the bet timing, the movement of the game token coin1can be determined as follows. At 16:30 on Jan. 28, 2019, Player-K exchanges cash for a game token coin1at the cage5(Update U715). At 16:38 on January 28, Player-K bets the game token coin1at player position No. 5 at table No. 325 (Update U716). After Player-K loses the game and the game token coin1is collected on the chip tray12, Player-L at player position No. 1 at table No. 325 wins the game and receives the payment of the game token coin1. After that, at 16:50 on January 28, Player-L bets the game token coin1at player position No. 1 at table No. 325 (Update U717). After Player-L wins the game and keeps the game token coin1, at 16:59 on January 28, Player-L bets the game token coin1at player position No. 1 at table No.
325 (Update U718). According to this example, how ownership of the game token coin1transitions among the players via the chip tray12can be determined. It can also be determined that the game token coin1is actually bet. Thus, it can be confirmed that the game token coin1is not used for money-laundering. FIG.7Fshows an example in which the variable information4is updated at the collecting timing. With the variable information4updated at the collecting timing, the movement of the game token coin1can be determined as follows. At 16:28 on Jan. 28, 2019, Player-M exchanges cash for a game token coin1at the cage5(Update U719). Player-M bets the game token coin1at player position No. 6 at table No. 326 and loses the game, and at 16:39 on January 28, the game token coin1is collected on the chip tray12(Update U720). After that, Player-N wins the game at player position No. 2 at table No. 326 and receives the payment of the game token coin1. Player-N bets the game token coin1on the subsequent game at player position No. 2 at table No. 326 and loses the game. At 16:59 on January 28, the game token coin1is collected on the chip tray12(Update U721). According to this example, it can be known at which game table and when the game token coin1is used. In the case in which a fraud is suspected, video of the game table in question can be checked for the hours in question, and it can be confirmed that the game token coin1is not used for money-laundering. FIG.7Gshows an example in which the variable information4is updated at the payment timing. With the variable information4updated at the payment timing, the movement of the game token coin1can be determined as follows. At 16:29 on Jan. 28, 2019, Player-O receives the payment of a game token coin1as a set at player position No. 7 at table No. 327 (Update U722). After that, Player-O bets the game token coin1and loses the game at table No. 327, and the game token coin1is collected on the chip tray12.
After that, Player-P wins the game at player position No. 3 at table No. 327. At 16:43 on January 28, the dealer pays the game token coin1to Player-P (Update U723). According to this example, it can be determined at which game table the game token coin1owned by the player is paid. Thus, it is known that the game token coin1is not stolen from the chip tray, and it can be confirmed that the game token coin1is not used for money-laundering. As described above, in the management system according to the present embodiment, at the game table, the variable information4(i.e., the location relating information220) is updated at at least one of the bet timing, the collecting timing, and the payment timing. Note that in the example, the update of the variable information4at the collecting timing may be performed in the bet area, or may be performed on the chip tray. In the operation of baccarat games, the bet chips of the players who lose the game are first collected, and then payment is made to the players who win the game. Thus, for the game token coin1owned by a player who wins the game, a relatively long time for updating the variable information4in the bet area can be reserved, and the variable information4can be relatively reliably updated in the bet area as well. On the other hand, for the game token coin1that has to be collected, the time for which the game token coin1is present in the bet area is relatively short, from the time at which the collection is determined (the game result is determined) to the time at which the game token coin1is actually collected. Thus, when it is desired to update the variable information4in the bet area, the update may not be finished. Therefore, for the game token coin1to be collected, it is favorable to rewrite the variable information4at the chip tray, which is the collection destination.
The management controller15has a function of determining whether the location relating information220obtained from the read result by the reader13at the respective places has any abnormality. For the determination of an abnormality, it can be determined whether there is any event in which the game token coin1is not used for a predetermined period or more from the previous write, or in which the location relating information that has to be written is not written. The event in which the game token coin1is not used for a predetermined period or more from the previous write is any situation in which: 1) the latest location relating information is information indicating the cage9, and the subsequent information write is made after an elapse of a predetermined time or more, 2) the latest location relating information is information indicating entering at the gate10, and the subsequent information write is made after an elapse of a predetermined time or more, and 3) the latest location relating information is information indicating leaving the gate10, and the subsequent information write is made after an elapse of a predetermined time or more. The event in which the location relating information that has to be written is not written is any situation in which: 4) when a player enters the gaming house, information indicating leaving the gate10is not written as the latest location relating information in the data rewritable region22of the game token coin1, 5) when the game token coin1is used at the cage9, information indicating the backyard7is not written, and 6) when a player leaves the gaming house, information indicating the cage9or the backyard7is not written.
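A minimal rule checker for the six situations above might look like the following. The place labels, event names, and the 30-day threshold are illustrative assumptions, not values specified in the source.

```python
from datetime import datetime, timedelta

STALE_LIMIT = timedelta(days=30)  # illustrative "predetermined period"

def check_stale(last, now):
    """Situations 1)-3): the latest write was at the cage or gate, and the
    next write comes only after a predetermined time or more has elapsed."""
    return (last["place"] in {"cage", "gate-in", "gate-out"}
            and now - last["time"] >= STALE_LIMIT)

def check_missing(history, event):
    """Situations 4)-6): a record that has to be present is absent."""
    places = {r["place"] for r in history}
    required = {
        "enter-gate": {"gate-out"},          # 4) must have left through the gate
        "use-at-cage": {"backyard"},         # 5) must have a backyard record
        "leave-gate": {"cage", "backyard"},  # 6) must have a cage or backyard record
    }
    return not (places & required[event])
```

Either function returning True corresponds to the management controller flagging the coin as abnormal.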
The management controller15can determine, based on the owner information223obtained from the reader13, any of the following situations as an event that is determined as an abnormality: 1) a person different from the final owner stored in the data rewritable region22converts the game token coin1into money in the gaming house, makes an exchange of game token coins, or leaves the gaming house, and 2) a person different from the owner recorded on leaving, stored in the data rewritable region22, brings the game token coin into the gaming house. The management controller15can also determine, based on information obtained from the reader13indicating the place of the bet area11, the chip tray12, or the payment area, any of the following situations 1) to 4) as an event that is determined as an abnormality: 1) a person different from the final owner stored in the data rewritable region22uses the game token coin1at the game table, 2) a person different from the person who purchased a game token coin1at the cage9converts the game token coin1, which has no use record at a game table, into money at the cage9, 3) a game token coin1that does not have information indicating the cage9or the backyard7is used in the gaming house, and 4) the latest location relating information is information indicating the bet area11or the payment area, and the game token coin1is converted into money at the cage9or used at the game table after a lapse of a predetermined period or more. FIG.6shows a database according to another embodiment of the present disclosure. In addition to the RFID tag of the game token coin1, which stores the constant information3and the variable information4, the management system includes a database17that records the same constant information3and variable information4.
A management controller15can record information in the database17based on a read result by a reader13, can check the constant information3and the variable information4stored in the RFID tag2of the game token coin1against the information in the database17, and can determine an abnormality. In the RFID tag2according to the embodiment of the present disclosure, the region in which data is functionally non-rewritable in the data non-rewritable region21may be a TID. The region in which necessary information is written in the data non-rewritable region21and which is then locked so as not to allow data write may be an EPC or a user region. The data rewritable region22may be an EPC or a user region. The forms of the reader13and the writer14may be changed depending on the places. For example, at the factory6, the reader13and the writer14may be in the stage shape shown inFIG.4; at the gate10, the reader13and the writer14may be in a box shape; and in the bet area11or on the chip tray12, the bet area11or the chip tray12itself may include the functions of the reader13and the writer14. The game token coins1may be read and written while stacked as they are, or while housed in a chip case, for example. The reader13and the writer14may be integrated with each other. In the embodiments above, the case is described in which the game token coin has one RFID tag, and the one RFID tag has a data non-rewritable region and a data rewritable region. In contrast, a form can also be conceived in which two RFID tags are placed in a game token coin, one RFID tag stores the constant information and is then locked so as not to allow data write, and the other RFID tag allows data rewrite in order to record the variable information.
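The cross-check against the database17can be sketched as a simple comparison keyed by the coin's unique ID. The field names and the UID scheme are assumptions for this sketch, not the patent's schema.

```python
def cross_check(tag_read, database):
    """Return True (abnormal) when the tag's constant or variable
    information disagrees with the database record for that coin."""
    db_entry = database.get(tag_read["uid"])
    if db_entry is None:
        return True  # coin unknown to the management system
    return (db_entry["constant"] != tag_read["constant"]
            or db_entry["variable"] != tag_read["variable"])

db = {"UID-001": {"constant": "denom:100", "variable": ["factory", "backyard"]}}
```

A tampered variable history, or a coin the database has never seen, both surface as an abnormality here.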
However, in this case, when a plurality of game token coins is collectively read, the information group of the constant information3and the information group of the variable information4are read separately, as shown inFIG.5A. Thus, a problem arises in that the combination of the constant information and the variable information of a given game token coin cannot be determined, and a game token coin having an abnormality in its variable information cannot be identified. In contrast, as shown inFIG.5B, when one RFID tag is used, the correspondence between the constant information and the variable information read by the reader can be determined even when a plurality of game token coins is collectively read. It is therefore considered more advantageous to embody the game token coin using one RFID tag. Generally, an RFID tag with a larger diameter provides better read accuracy. Thus, in the case in which a game token coin includes an RFID tag, the RFID tag desirably has a diameter that is at least the radius of the game token coin. Including two RFID tags in a game token coin forces the diameter of each RFID tag to be reduced, which is not preferable. Moreover, since the inclusion of two RFID tags increases the number of RFID tags to be read, it leads to a slowed read rate when a plurality of game token coins is read. From the points above, the game token coin is considered to be more effectively embodied using one RFID tag.
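The pairing argument of FIGS.5A and5B can be illustrated as follows: with one tag per coin each read already carries both halves of the information, whereas with two tags per coin the reader recovers only two unordered pools. The data shapes here are assumptions made for the sketch.

```python
def pair_single_tag(reads):
    """One tag per coin (FIG. 5B): every read returns a record carrying
    both the constant and the variable information, so pairing is trivial."""
    return [(r["constant"], r["variable"]) for r in reads]

def pair_two_tags(constants, variables):
    """Two tags per coin (FIG. 5A): the reader sees two separate pools,
    so the pairing is ambiguous whenever more than one coin is read."""
    if len(constants) > 1:
        return None  # cannot tell which variable record belongs to which coin
    return [(constants[0], variables[0])]
```

With two or more coins read collectively, `pair_two_tags` has no basis for associating the pools, which is the failure the text describes.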
11861437 | DETAILED DESCRIPTION Development of portable and low-cost technologies for chemical and physical sensing is important for human health, safety, and quality of life. Such systems can be used for point-of-care diagnosis of disease, detection of explosives and chemical warfare agents, prevention of spoilage of food and increasing efficiency in agriculture, analysis of oil and gas, detection of petrochemical leaks and spills, monitoring of environmental pollution, detection of radiation, and monitoring of temperature or heat energy exposure. Traditional improvements in this area increase performance through modification or re-engineering of existing platforms. Such strategies may include miniaturizing components to increase portability (e.g., a portable gas chromatograph or mass spectrometer) or reducing cost (e.g., increasing the efficiency of the manufacturing). While these solutions may improve existing platforms in terms of portability, they still suffer from limitations, such as being expensive, bulky, or fragile, or requiring trained personnel to operate. Furthermore, many traditional methods of chemical sensing require physical contact of the device with the sensing element/material via wires or solid-state circuitry to acquire data. Examples of Some Sensors The use of peroxide-based explosives has become increasingly popular. Methods for determining a peroxide or a peroxide precursor can include forming a fluid mixture comprising a peroxide-reactive material, a light-emitting material, a support material or support material precursor, and, optionally, a catalyst, to produce a composition that is emissive in the presence of a peroxide, wherein the composition has a boiling point of at least 300° C. or greater.
Methods for determining a peroxide can include exposing a composition comprising a peroxide-reactive material to a vapor suspected of containing a peroxide, wherein the peroxide, if present, causes the composition to generate a determinable signal, wherein the composition has a boiling point of at least 300° C. or greater, and determining the signal. One method of detecting an analyte including a carbon-carbon multiple bond moiety in a sample comprises exposing a detection region of a detector, including a heteroaromatic compound having an extrudable group and capable of undergoing a Diels-Alder reaction with the analyte including the carbon-carbon multiple bond moiety, to the sample, and detecting a color change of a reaction mixture comprising the heteroaromatic compound based on the presence of the analyte in the sample. This method provides alkene and alkyne detection, differentiation, and quantitation that addresses the growing need for transducing relevant information (previously attainable only from sophisticated methods such as GC analysis) with the favorable low-cost and ease-of-use attributes ascribed to more basic technologies. Using this method, a device can indicate the presence of specific classes of alkenes or alkynes in the gas phase, and can determine the total exposure of the device to said alkenes or alkynes, based on a colorimetric readout. Because this device is selective for certain classes of alkenes and alkynes, it allows for differentiation of compounds of interest that contain certain alkene or alkyne functionality. This method can make use of the color change that accompanies the transformation of an s-tetrazine moiety to a pyridazine moiety upon reaction with unsaturated carbon-carbon bonds. See, for example, Application No. PCT/US2014/033037, which is incorporated by reference in its entirety.
Another method of detecting a stimulus includes using a dosimeter, such as a thermal dosimeter, which can measure the amount of light emitted from a crystal in a detector when the crystal is heated. A dosimeter can use a triazole as described by Coulembier. See, for example, O. Coulembier et al., Macromolecules, 2006, 39, 5617-5628, which is incorporated by reference in its entirety. Sensors Using a Digital Reader Sensing platforms that have the characteristics of being simple and inexpensive, yet sensitive and quantitative, can be created. One approach to the area of chemical and physical sensing can be the development of sensing materials and devices that have the characteristics of being modular (i.e., easily modified for specific applications), wirelessly readable, and easily used and interpreted by individuals with no prior technical training. Whitesides and co-workers have demonstrated chemical detection of analytes in biologically relevant samples using smartphones. See, for example, Martinez, A. W. et al., Anal. Chem., 2008, 80, 3699-3707, which is incorporated by reference in its entirety. These methods involve capturing an image of a colorimetric assay using an in-phone camera and analyzing it to correlate changes in the color of a dye with the presence of a biologically relevant analyte. This method, however, requires a line-of-sight measurement that can be affected by potential artifacts arising from lighting conditions, positional angle, or hand movement during image acquisition. Potyrailo et al. and others demonstrated electronic wireless detection of chemical analytes using RFID technology. See, for example, Potyrailo, R. A. et al., Anal. Chem. 2006, 79, 45-51, which is incorporated by reference in its entirety.
While this technology has the capability to perform non-line-of-sight measurements that overcome some of the limitations of the colorimetric assays, it has limited portability, as it requires the use of advanced electronic devices, such as inductively coupled network analyzers or impedance spectrometers. Studies have exploited custom-made, as well as commercially available, RFID tags to monitor freshness of milk, freshness of fish, and growth of bacteria. See, for example, Tao, H. et al., Adv. Mater. 2012, 24, 1067-72; Potyrailo, R. A. et al., Battery-free Radio Frequency Identification (RFID) Sensors for Food Quality and Safety, 2012, each of which is incorporated by reference in its entirety. These studies relied primarily on correlating the changes in the dielectric environment of the RFID tags (i.e., changes in C) with changes in the resonant frequency or resonant impedance of the LCR circuit. However, they are limited by a lack of selectivity toward chemical analytes and physical stimuli, and by the requirement for expensive radio frequency analysis equipment, such as impedance and network analyzers, for chemical detection. Although RF technology has recently been applied toward wireless chemical sensing, current approaches have several limitations, including lack of specificity to selected chemical analytes; requirements for expensive, bulky, fragile, and operationally complex impedance and network analyzers; and reliance on extensive data processing and analysis. See, Potyrailo R A, Surman C, Nagraj N, Burns A (2011) Materials and transducers toward selective wireless gas sensing. Chem Rev 111:7315-7354; Lee H et al. (2011) Carbon-nanotube loaded antenna-based ammonia gas sensor. Microw Theory Tech IEEE Trans 59:2665-2673; Potyrailo R A et al.
(2009) Development of radio-frequency identification sensors based on organic electronic sensing materials for selective detection of toxic vapors. J Appl Phys 106:124902; Fiddes L K, Yan N (2013) RFID tags for wireless electrochemical detection of volatile chemicals. Sensors Actuators B Chem 186:817-823; Fiddes L K, Chang J, Yan N (2014) Electrochemical detection of biogenic amines during food spoilage using an integrated sensing RFID tag. Sensors Actuators B Chem 202:1298-1304; Occhiuzzi C, Rida A, Marrocco G, Tentzeris M M (2011) Passive ammonia sensor: RFID tag integrating carbon nanotubes. 2011 IEEE Int Symp Antennas Propag:1413-1416, each of which is incorporated by reference in its entirety. Disclosed herein are a method and a system for converting inexpensive commercial NFC tags into chemical sensors that detect and discriminate analytes at part-per-thousand and part-per-million concentrations. This effort merges rational design of conductive nanostructured materials for selective chemical sensing with portable and widely distributed NFC technology to deliver a new method of acquiring chemical information about an NFC tag's local environment. A commercially available technology, Near Field Communication (NFC), can be used for wireless, non-line-of-sight chemical sensing. Many modern smartphones and similar devices (tablet computers, video game controllers, and smartphone accessories) are equipped with NFC readers operating at a peak frequency of 13.56 MHz. These readers can be tuned to interact with many types of commercially available wireless "tags": simple electrical circuits comprising an inductor (L), a capacitor (C), and an integrated circuit (resistor (R)) supported on the surface of a substrate, such as a polymeric sheet. The phone achieves communication by powering the tag via electromagnetic induction at the specified frequency and then receiving the reflected, attenuated signal back from the tag. See, for example, Curty, J. P.
et al., Springer, New York, 2007, pp. 49-73, which is incorporated by reference in its entirety. This technology can be used in controlling access to facilities, ticketing of events, prevention of theft, and management of inventory. This technology can be applied to chemical sensing by introducing chemiresistive materials into the circuitry of the tag. Exposure of the modified tag to chemical vapors can alter the resistance of the sensing materials, and thus the resonant frequency of the modified tag, such that it becomes readable or unreadable when probed by a smartphone reader. With this method, vapors of nitric acid, ammonium hydroxide, and cyclohexanone can be detected. This technology can be extended to physical sensors as well, such as applications in temperature, heat energy exposure, or radiation sensing. Commercially available RFID tags can be combined with a digital reader, such as a hand held frequency reader (for example, a consumer electronic smartphone), resulting in a fully integrated chemical and physical sensing platform. The sensing platform can be available to anyone, including those without a technical background. This platform has advantages over existing methods of chemical and physical sensing. For example, the sensing method can be non-line-of-sight (high frequency radio waves) and can receive information from the sensor tag through solid objects such as packages, walls, wood, and other non-metallic objects. The sensing tag does not require a power source, as it receives its power from the incoming radio waves. The data-acquiring device can be any commercially available smartphone equipped with near field communication (NFC) reader capabilities, including but not limited to those from manufacturers such as Samsung, LG, Google, and Blackberry. The method is simple: no technical knowledge is required to perform a measurement.
Some differences between previous studies and this method include: i) chemical detection is achieved using NFC technology instead of impedance spectroscopy; ii) the detector is a highly portable device such as a smartphone, instead of a very bulky, complex instrument (e.g., a network analyzer); besides portability, the smartphone has additional utility in chemical detection because the information obtained from the chemical sensor can be coupled with other sensors within the smartphone (e.g., GPS, email) for automated identification of position and communication of information; iii) the ability for wireless chemical sensing over a distance of 5 cm of solid material was demonstrated, as opposed to through the distance of a single paper sheet; iv) this method incorporates chemiresistors into the existing circuitry of a tag by drawing, as opposed to depositing sensing materials on top of the antenna; v) this method requires no data workup for signal processing, while existing methods often require a substantial amount of data processing for interpreting information; vi) this method does not require additional equipment for reading the magnetic memory; vii) this method relies on changes in resistance of a selective chemiresistive or physiresistive material for chemical sensing, while existing methods rely on non-specific changes in capacitance; viii) this method relies on molecular recognition for selectivity and does not require principal component analysis; and so on. FIG. 18 shows the adaptation of a nascent technology embedded in modern smartphones, Near Field Communication (NFC), for wireless, electronic, portable, non-line-of-sight, selective detection of gas-phase chemicals. NFC-enabled smartphones communicate with NFC tags by simultaneously energizing the NFC tag with an alternating magnetic field (f=13.56 MHz) through inductive coupling and transferring data by signal modulation.
NFC tags are converted into Chemically Actuated Resonant Devices (CARDs) by disrupting the LCR circuit (Step 1) and recompleting the circuit with a stimuli-responsive variable circuit component by drawing (Step 2) with solid sensing materials. This concept can be demonstrated by (i) incorporating carbon-based chemiresponsive materials into the electronic circuitry of commercial NFC tags by mechanical drawing, and (ii) using an NFC-enabled smartphone to relay information regarding the chemical environment (e.g., presence or absence of a chemical) surrounding the NFC tag. In this way, part-per-million (ppm) concentrations of ammonia and cyclohexanone and part-per-thousand (ppth) concentrations of hydrogen peroxide can be detected and differentiated. Wireless acquisition and transduction of chemical information can be coupled with existing smartphone functions (e.g., GPS). Many commercial smartphones and mobile devices are equipped with NFC hardware configured to communicate wirelessly with NFC "tags": simple electrical resonant circuits comprising inductive (L), capacitive (C), and resistive (R) elements on a plastic substrate (FIG. 18). The smartphone, such as the Samsung Galaxy S4 (SGS4) employed in this study, communicates with the battery-free tag by powering its integrated circuit (IC) via inductive coupling at 13.56 MHz. See, Nikitin P V, Rao K V S, Lazar S (2007) An overview of near field UHF RFID. 2007 IEEE Int Conf RFID:167-174, which is incorporated by reference in its entirety. Power transferred from the smartphone to the IC is, among other variables, a function of the transmission frequency (f), the resonant frequency (f0), the quality factor (Q), and the circuit efficiency (η), which in turn are functions of the L (H), C (F), and R (Ω) of the smartphone and NFC resonant circuit components. See, Jing H C, Wang Y E (2008) Capacity performance of an inductively coupled near field communication system.
2008 IEEE Antennas Propag Soc Int Symp 2:1-4, which is incorporated by reference in its entirety. Integration of chemiresponsive materials into commercial NFC tags produces stimuli-responsive variable circuit components that affect power transfer between the tag and a smartphone in the presence or absence of chemical stimuli. The resulting programmable Chemically Actuated Resonant Devices (CARDs) enable non-line-of-sight smartphone chemical sensing by disrupting or allowing RF communication. In one method, commercially available high frequency (HF) radio frequency identification tags compatible with a reader can be converted into chemical and physical sensors. The reader can be a digital reader, which can be a handheld frequency reader. The reader can be portable. The reader can be a smartphone. In parallel with the sensing capability, a smartphone reader can read other things, such as GPS coordinates, acceleration, light intensity, altitude, etc. Coupling these capabilities in one portable reader can have unprecedented utility. This technology can be extended to temperature, heat energy exposure, and radiation sensing as well. The modification of the tag can involve integration of chemiresistive sensing materials by drawing or dropcasting onto the surface of the tag. Depending on the design, the tag can become readable or unreadable when exposed to vapors of chemicals or a physical stimulus. A stimulus can include an analyte. The stimulus can include a vapor, a gas, a liquid, a solid, a temperature change, heat energy exposure, and so on. The stimulus can include ethylene, mold, an acid, a ketone, a thiol, an amine, and so on. Using RFID, a stimulus can be detected; for example, vapors of nitric acid and cyclohexanone can be detected, ethylene and mold can be detected, and biological warfare agents can be detected. Cumulative exposure to analytes can be detected and quantified with a dosimeter. A stimulus can include a physical stimulus.
The physical stimulus can include light, heat, or radiation. Using RFID, a stimulus can be detected; for example, exposure of a tag to heat can be detected, and radiation and light can be detected. Cumulative exposure to a physical stimulus can be detected and quantified with an RFID dosimeter. A sensing material can produce a detectable change in resistance and/or capacitance upon chemical, biological, or physical changes around the sensing device. A property of a sensing material that can change upon exposure to the environment includes, but is not limited to, a change in capacitance, a change in resistance, a change in thickness, a change in viscoelasticity, or a combination thereof. A sensing material can include a metal, an organic material, a dielectric material, a semiconductor material, a polymeric material, a biological material, a nanowire, a semiconducting nanoparticle, a carbon nanotube, a carbon nanotube network, a nanofiber, a carbon fiber, a carbon particle, carbon paste, or conducting ink, or a combination thereof. Different approaches can be taken to introduce chemical and physical sensing materials. For example, sensing materials can be introduced into two different locations within a commercial RFID tag. Sensing materials include variable resistors that alter their resistance in response to a stimulus. A stimulus can be a chemical stimulus, a physical stimulus, a biological stimulus, etc. The detection of a stimulus can be achieved by switching the tag between a "readable" and "not readable" state by exposure to a stimulus, such as chemical vapors or changes in temperature or heat energy exposure, for example. When a stimulus contacts or interacts with a sensor, the resistivity can change. The contact or interaction can produce a readable signal in a hand held frequency reader as a result of the resistivity change. Alternatively, the contact or interaction can turn off a readable signal in a hand held frequency reader as a result of the resistivity change.
Output can be detected after the output is shifted by detection of the stimulus. Even after passing through a physical object, the output can still be detected. Detecting the stimulus is not limited to the frequency output, but can include, but is not limited to, a change in frequency, a change in Q factor, a change in bandwidth, and a combination of these. These changes can result in increasing or decreasing the power transferred between the reader and the radio frequency identification tag. Increasing or decreasing the power transferred between the reader and the radio frequency identification tag can result in a change of the readout of the tag. For example, FIG. 19 shows the estimated power transfer between the phone and CARDs as it relates to the readability of those CARDs, and FIG. 26 exemplifies how this information was obtained and processed. In one approach, a specific electric connection within an RFID tag can be disrupted, for example by cutting, and this connection can be reestablished by deposition of a chemiresistive sensing material by either drawing or dropcasting. An RFID tag can include an integrated circuit (IC) containing magnetic memory material where the tag identification is stored. Depending on the sensing material and the stimulus, the tag can become readable and is classified as a "turn ON sensor," or become unreadable and is classified as a "turn OFF sensor." In one method, the tag is not readable by a reader when no stimulus is present, because the resistance of the sensor is too high. When the tag is placed in the presence of a stimulus that causes the sensor to change its resistance, the tag can become readable once the resistance value crosses a threshold value. This is a turn-on sensing method. In another method, the tag can be readable by a reader when no analyte is present, because the resistance of the sensor is high enough to allow current to flow through the integrated circuit.
When the tag is placed in the presence of a stimulus that causes the sensor to change its resistance, the tag can become unreadable once the resistance value drops below a certain threshold value. This is a turn-off sensing method. In another method, instead of turn-on or turn-off sensing, a series of data can be collected, which can provide a quantitative analysis of a stimulus. In another method, parallel integration can be used to integrate a sensing material into a portion of the tag containing the integrated circuit by drawing or dropcasting. This approach can "turn ON" or "turn OFF" detection of a stimulus, and can be complementary to the first approach because the requirements for the resistance of the deposited sensing material can be different (which may have an effect on the dynamic range and the detection limit of chemical sensors towards different analytes). A radio frequency identification tag does not have to require a power source. RFID tags can be passive, active, or battery-assisted passive. An active tag has an on-board battery and periodically transmits its signal. A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag has no battery. When detecting a stimulus by detecting an output from a radio frequency identification tag including a sensor portion, the stimulus does not have to contact or interact with the entire surface of the tag. The sensor portion has a surface area less than the surface area of the radio frequency identification tag. The sensor portion can be located on a portion of a surface of the radio frequency identification tag, and the stimulus can contact a portion of the surface of the radio frequency identification tag. In addition, the sensor portion can have multiple sensing locations, and a single tag can be used to detect more than one stimulus.
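The turn-on and turn-off behaviors described above reduce to a simple threshold rule on the sensor resistance Rs. The following Python sketch is illustrative only; the threshold value, the direction of the resistance change, and the example resistances are assumptions, since real values depend on the particular chemiresistor, tag, and reader.

```python
def tag_readable(Rs, threshold, mode):
    """Binary readout of a modified RFID tag from its sensor resistance Rs (ohms).

    Illustrative threshold logic only; real thresholds depend on the
    chemiresistor, the tag, and the reader hardware.
    - "turn_on":  the tag starts with Rs above the threshold (unreadable) and
                  becomes readable once a stimulus drives Rs below the threshold.
    - "turn_off": the tag starts readable and becomes unreadable once a
                  stimulus drives Rs below the threshold.
    """
    if mode == "turn_on":
        return Rs <= threshold
    if mode == "turn_off":
        return Rs >= threshold
    raise ValueError(f"unknown mode: {mode!r}")


# Hypothetical turn-on sensor drawn at 120 kOhm whose resistance falls on exposure:
print(tag_readable(120e3, threshold=100e3, mode="turn_on"))  # False: no stimulus yet
print(tag_readable(80e3, threshold=100e3, mode="turn_on"))   # True: stimulus present
```

The same function evaluated in "turn_off" mode returns the opposite readout for the same resistance trajectory, mirroring the two complementary sensing methods.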
A system for detecting a stimulus comprising a radio frequency identification tag can include a sensor portion, the sensor portion configured to change resistivity when the radio frequency identification tag contacts or interacts with the stimulus, whereby the resistivity change alters an output of the radio frequency identification tag, and a detector detecting the output from the radio frequency identification tag. The detector can include a reader. The reader can include a hand held frequency reader. A method of detecting a stimulus can include detecting an output from a radio frequency identification tag including a sensor portion. The system can include a real time sensor. The system can include a dosimeter, such as a radiation dosimeter, a chemical warfare agent dosimeter, or an analyte dosimeter, such as, for example, an ethylene dosimeter, a sulfur dosimeter, or an ozone dosimeter. The system can be used to monitor pollutants or chemicals relevant to occupational safety. Pollutants or chemicals can include fumes from automotive/equipment exhaust, volatiles from manufacturing, painting, or cleaning, or vapors in underground mines. A sensor can include an electronic circuit comprising electronic components. Electronic components can include resistors, transistors, capacitors, inductors and diodes, connected by conductive wires or traces through which electric current can flow. The electrical connection within the radio frequency identification tag can be altered. The resistivity of the sensor can change when the sensor is exposed to a stimulus. Contacting or interacting with a stimulus can close the circuit or open the circuit, or otherwise alter the properties of the circuit. 
A sensor can include a sensing material such as a metal, an organic material, a dielectric material, a semiconductor material, a polymeric material, a biological material, a nanowire, a semiconducting nanoparticle, a carbon nanotube, a nanofiber, a carbon fiber, a carbon particle, carbon paste, or conducting ink, or a combination thereof. A sensing material can include organic electronics materials, doped conjugated polymers, or inorganic materials. A sensing material can include biological molecule receptors, living cells, antibodies, aptamers, nucleic acids, functionalized biological molecules, and so on. A tag for detecting a stimulus comprising a radio frequency identification tag can include a sensor portion, the sensor portion configured to change resistivity when the radio frequency identification tag contacts or interacts with the stimulus, whereby the resistivity change alters an output of the radio frequency identification tag, wherein the sensor portion includes a circuit, and wherein the sensor portion is configured to close the circuit or open the circuit when contacting or having interacted with the stimulus. The tag can be worn as a badge for occupational health and safety personnel, military personnel, etc., detecting a hazardous analyte or radiation. A tag can include a substrate material. The substrate can include paper, plastic, a polymer, a metal, a metal oxide, a dielectric material, wood, leaves, skin, tissue, and so on. The substrate can include a metal oxide material. The substrate can be flexible; the substrate can be flat. The tag can also be embedded inside other objects (e.g., inside a capsule or a wall) or inside living systems (e.g., implanted inside a body). A tag can include an antenna, providing a link between a frequency reader and a tag, receiving and transmitting a signal, and serving as a conduit that moves data back and forth. The antenna can include coils surrounding a sensor; the antenna can include a dipole antenna.
A tag can include an antenna group including a plurality of antennas or an antenna array. The ability to easily detect the existence of an analyte on a base signal using an ON/OFF binary detection method is of increasing interest in today's society. A system using a portable reader, such as a smartphone, enables everyone to determine the status of certain analytes anywhere without complicated analysis of a signal. When the amount of an analyte changes, a handheld frequency reader can turn on or turn off a signal, sending a notification of the presence or absence of the analyte. Another advantage of using a smartphone is that it carries within it many additional capabilities that can be coupled with chemical sensing to increase utility. For instance, a smartphone reader can identify a chemical spill and immediately send an emergency text or email alert identifying the position of the spill using GPS. Another example could be wireless networks that monitor spatiotemporal changes in concentrations of chemical emissions and send emergency alerts when safe thresholds are exceeded. Coupling of such capabilities can enable unprecedented utility of chemical sensors in everyday life. A tag can serve as a binary logic element providing either a "1" or a "0" as pre-defined by the functional sensor material, which offers advantages in terms of simplicity of implementation and does not require any sophistication by the end user. If viewed as a binary logic element, the tag could be used in further elaborations of that logic. For instance, a unique combination of the readouts of multiple tags could be assigned a specific meaning. For example, if three separate tags are "coded" for three separate analytes by virtue of the sensor materials used to make them, then 2³ = 8 possible combinations exist, each of which could mean something unique and significant.
For example, if those analytes were food related, then one could possibly determine which type of food the sensors are attached to based on a combination of tag read-outs, within a certain probability. Another example would be three tags that are "coded" with the same sensor material that has been designed to react at different concentrations of the analyte. The combination of tag readouts would allow one to determine, within some margin of error, the concentration of the analyte of interest. The binary on/off readability of CARDs by the smartphone can be a powerful approach for converting analog physical inputs (presence or absence of a chemical vapor within a defined threshold) into a digitized output (1 and 0, respectively) that conveys meaningful information about the local chemical environment of the CARDs. The advantage of a binary readout is that it is the simplest possible output representation of input information, and hence allows modular multiplexing of different CARD combinations. Taken together, the data presented in FIG. 21 suggest that discrimination and identification of multiple analytes can be achieved with a smartphone by converting the output of binary CARDs ("on"/"off") into multi-CARD logic (sequences of 0s and 1s) (FIG. 21, Graph E). This analytical approach has practical limitations in its implementation; however, it may be particularly useful in resource-constrained scenarios or high-throughput applications where information about the presence or absence of specific chemicals at specified thresholds is critically important. Such applications may include detection of an acceptable threshold (e.g., the permissible exposure limit for a chemical) that provides valuable actionable information in dynamic, complex environments (e.g., a chemical release within a public space).
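The multi-CARD logic described above can be sketched as mapping a tuple of binary readouts to a set of inferred analytes. The bit-to-analyte assignment below is hypothetical, though the three analytes are those named in the text; with three CARDs there are 2**3 = 8 possible readout words.

```python
from itertools import product

# Hypothetical bit-to-analyte assignment; the three analytes are those
# detected in the study (ammonia, cyclohexanone, hydrogen peroxide).
CARD_ANALYTES = ("ammonia", "cyclohexanone", "hydrogen peroxide")


def decode_readout(readout):
    """Map a tuple of binary CARD readouts (1 = analyte detected at that
    CARD's coded threshold, 0 = not detected) to the set of analytes
    inferred to be present."""
    if len(readout) != len(CARD_ANALYTES):
        raise ValueError("expected one bit per CARD")
    return {analyte for bit, analyte in zip(readout, CARD_ANALYTES) if bit == 1}


# Three binary CARDs give 2**3 = 8 possible readout words:
all_words = list(product((0, 1), repeat=len(CARD_ANALYTES)))
print(len(all_words))                     # 8
print(sorted(decode_readout((1, 0, 1))))  # ['ammonia', 'hydrogen peroxide']
```

The same decoding scheme applies when the three CARDs share one sensor material coded at different thresholds; the readout word then brackets the concentration instead of naming distinct analytes.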
Even under circumstances wherein the chemical of interest can be readily detected by the human nose, a differentiating feature of a smartphone-based sensing strategy over human-olfactory detection or visual inspection of a colorimetric test is the ability to efficiently bring sensed information into the information technology infrastructure. An inexpensive, simple, rapid, and modular approach for converting commercially available NFC tags into chemically actuated devices can communicate with a smartphone via radio waves. This approach enables electronic, wireless, non-line-of-sight detection and discrimination of gases and vapors at part-per-million and part-per-thousand concentrations. This technology provides binary ("on"/"off") information about the presence or absence of a chemical analyte with respect to designated concentration thresholds (e.g., the NIOSH STEL) within the local environment of the sensor tag, and is capable of differentiating multiple concentrations of one analyte or multiple analytes using multi-tag logic. The general sensing strategy involving wireless communication between NFC tags and smartphones is modular and can be generalized to incorporate many types of chemiresponsive materials to enable selective detection of diverse chemical changes. Nevertheless, the significant challenges that remain to realize the full potential of this wireless sensing approach include: (i) chemical and materials science innovations to improve the sensitivity and selectivity of chemiresponsive materials to chemical analytes; (ii) improving device-to-device performance reproducibility by advancing the state of the art of nanostructured carbon deposition techniques; and (iii) enabling continuum-measurement CARD readout capabilities. The combination of chemical sensing with other capabilities within the smartphone (e.g., GPS) may enable additional utility in applications involving tracking and tracing.
As a result of the portability and increasingly ubiquitous use of smartphones and mobile devices, this platform can enable applications in personalized and widely distributed chemical sensing wherein the acquisition of chemical or physical information was previously unattainable.
EXAMPLES
Choice of Tags and Phone
Commercially available "dry" Texas Instruments HF-I Tag-It Plus Transponder Inlays (TI-Tags) can be used to demonstrate converting a commercially available tag into a chemical sensor. These tags were chosen based on their chemically insensitive substrate backing, access to transponder circuitry, and commercial availability. The unmodified tags are composed of a polyethylene terephthalate substrate (which also serves as a dielectric layer for the capacitor), an aluminum antenna serving as an inductor (L), a parallel-plate aluminum capacitor (C), and a silicon integrated circuit (IC) chip (R), all connected in parallel, forming an LCR resonant circuit. The Google Nexus S can be used as the primary NFC-enabled smartphone for this study, due to its wide circulation and the fact that it was the first smartphone to include both NFC hardware and software support. This phone is equipped with an RFID reader developed to operate within NFC standards. The RFID reader comprises a signal-transmitting RFID controller and a signal-receiving transponder. When used with unmodified TI-Tags, the Nexus S has a read range of 5 cm through solid, non-metallic objects such as paper, wood, and plastic. In FIG. 1, high frequency radio waves are transmitted to a modified RFID tag, which reflects radio waves back to the smartphone that carry with them information about the unique tag identification. Apps can be used; examples of apps include NFC TagInfo and NFC Reader from Google Play. FIG. 1 demonstrates the ability to link a sensing response to a serial number. The transaction can happen in the cloud.
Depending on the sensing mechanism, the modified RFID tag is either "readable" or "unreadable" by the smartphone. The RFID tag can be interrogated through solid, non-metallic material. FIG. 2 shows a commercially available RFID tag. FIG. 3 demonstrates the readability of an RFID tag through five Post-It notes (˜5 cm). In addition to paper, a sensor can also be read through other materials. Examples of other materials that a signal can penetrate include paper, wood, plastic, leather, skin, plastic composites, wood composites, slate, non-metallic objects, bark, leaves, the skin of fruit, clothing, cloth, textiles, water, organic liquids, brine, blood plasma, bodily liquids, concrete, drywall, glass fiber, non-metallic composite materials, and so on.
Instrumental Analysis
A vector network analyzer (VNA) was used to monitor the analog signal response of the modified TI-Tags, the signals generated by the smartphone, and the modulation of signal that occurs upon collision of the smartphone-generated signal with the modified tag with and without analytes present. Analog resonant frequency data were acquired with an Agilent E5061B network analyzer by employing a custom-made loop antenna to monitor reflection across a frequency range of 10 MHz-20 MHz at 50 Ω system impedance.
Conversion of Commercially Available RFID Tags into Chemical Sensors
The TI-Tags can be converted into dynamic radio frequency sensor tags by inserting a chemiresistor in series with the IC, such that it is also in series with the capacitor and antenna. This modification is a two-step process. First, the TI-Tag is rendered unreadable when probed by a conventional smartphone by disrupting one of the connections leading to the IC chip. Second, this connection is re-established by drawing a chemiresistor in between the capacitor and the IC lead.
Sensing Example
A system for detecting a stimulus can have a radio frequency identification tag 101 including a sensor portion 102, the sensor portion configured to change resistivity when the radio frequency identification tag contacts or interacts with the stimulus 103, whereby the resistivity change alters an output 104 of the radio frequency identification tag, and a detector 104 detecting the output from the radio frequency identification tag (FIG. 1). In FIGS. 4A and 4B, a high frequency RFID tag was modified by cutting at the location between the capacitor and the integrated circuit. Sensing material was then deposited next to the location where the tag had been cut until the desired electrical resistance (Rs) was achieved. Rs was determined using a multimeter. The initial resistance was recorded and measured several times to ensure that it remained steady under ambient conditions. In the case of a turn-off sensing experiment, the tag's readability by the smartphone was confirmed. In the case of a turn-on sensing experiment, the tag was unreadable by a smartphone. The tag was then exposed to the analyte of interest. Rs was measured at multiple time points; upon each measurement, an attempt to interrogate the tag with the smartphone was made immediately after the Rs measurement, and the values and readability were recorded. Upon crossing a sensor threshold value, the tag became unreadable (turn-off sensor) or readable (turn-on sensor). The experimental procedure of measuring Rs and interrogating the tag with a smartphone was continued after the threshold value was crossed. In the case of a reversible sensor, the above experimental procedure was repeated the desired number of times. This method has advantages over other methods of chemical and physical sensing.
The advantages include detection of cyclohexanone at low detection limits, RFID chemical sensing with a cell phone, direct integration of sensing material into a mass-produced NFC inlay, quantitation of an analyte with a smartphone, and so on. FIG. 4A shows an enlargement of the chip and capacitor of FIG. 2, with a depiction of the principle of Sensing Method 1. FIG. 4B shows an enlargement of the chip and capacitor of FIG. 2, with a depiction of the principle of Sensing Method 2. FIG. 5 shows graphical representations and equivalent electronic circuit diagrams of a modification process for Sensing Method 1 using a commercially available RFID tag (Texas Instruments Tag-It HF-1). A high frequency RFID tag can be modified by cutting at the location between the capacitor and the integrated circuit (FIGS. 7A and 7B). Sensing material was then deposited next to the location where the tag had been cut until the desired Rs was achieved. Rs was determined using a Fluke 114 true RMS multimeter. The initial resistance was recorded and measured several times to ensure that it remained steady under ambient conditions. In the case of a turn-off sensing experiment, the tag's readability by the smartphone was confirmed. In the case of a turn-on sensing experiment, the tag's unreadability by a smartphone was confirmed. The tag was then exposed to the analyte of interest. Rs was measured at multiple time points; upon each measurement, an attempt to interrogate the tag with the smartphone was made immediately after the Rs measurement, and the values and readability were recorded. Upon crossing a sensor threshold value, the tag became unreadable (turn-off sensor) or readable (turn-on sensor). The experimental procedure of measuring Rs and interrogating the tag with a smartphone was continued after the threshold value was crossed. In the case of a reversible sensor, the above experimental procedure was repeated the desired number of times.
Integration of Chemiresistive Sensing Materials into RFID Tags Alters Their Resonant Frequency
A TI-Tag can be viewed as a simple electrical circuit that consists of an inductor (L), a capacitor (C), and a resistor (R) connected in parallel. Equation 1 describes the resonant frequency, f0 (Hz), of this type of circuit (LCR circuit) as a function of L, C, and R. The inductance in this circuit is a function of the geometry of the antenna, the capacitance is a function of the physical geometry of the conductors and the dielectric constant of the material between these conductive plates (i.e., the supporting polymeric substrate), and R is the effective resistance of all the circuit elements within the tag.

f0 = (1/2π)√(1/(LC) − (R/L)²)  (1)

The tags can be rendered chemically sensitive via a simple, two-step modification procedure, in which selective chemi- or physi-resistive sensor elements are incorporated into the LCR circuit (FIG. 7A). This method exploits the hypothesis that the resonant frequency of the RFID tag can be influenced by its chemical environment through alteration of the R of the LCR circuit. The total resistance, R, of three different tags was measured with a multimeter by contacting the tag on either side of the sensor location and then compared to the resistance of the material located between the multimeter electrodes, Rs, by removing it from the tags and measuring its resistance independent of the tag. In the case of an unmodified tag, R = 0.5 Ω and Rs = 0.5 Ω (FIG. 7A(a)). In the case of a tag wherein the conductive pathway between the capacitor and IC was absent, R = 22.5 MΩ and Rs ≅ ∞ (FIG. 7B(b)). In the case where a conductive pathway between the capacitor and IC was reestablished with a sensor, R = 30 kΩ and Rs = 30 kΩ (FIG. 7A(c)).
These experiments suggest that Rcircuit = 22.5 MΩ; therefore, the measured quantity R can be understood as the parallel combination of Rs and Rcircuit:

1/R = 1/Rs + 1/Rcircuit   (2)

In the case of the sensors employed in this study, Rs << Rcircuit, and therefore it can be assumed that R ≅ Rs. By extension, f0 varies with Rs (equation 1). Furthermore, experimental evidence shows that there is negligible dependence of the tag substrate, antenna, capacitor plate, electrode material, and IC on their chemical environment, and thus ΔR ≅ ΔRs (FIG. 8D). FIG. 7B illustrates the relationship between f0 and Rs for a series of tags modified according to Sensing Method 1. A commercially available tag has Rs = 0.5 Ω and f0 = 13.65 ± 0.01 MHz (curve a). Disrupting the connection between the capacitor and IC increases Rs to 25 MΩ and increases f0 to 14.30 ± 0.01 MHz (curve b). Introduction of a chemiresistive material that bridges the capacitor and IC, by drawing at Rs = 30 kΩ, decreases f0 to 14.10 ± 0.01 MHz (curve c). Subsequent exposure to saturated vapor of cyclohexanone increases Rs, for example from 30 kΩ to 70 kΩ, and is accompanied by a shift in f0 from 14.10 ± 0.01 MHz to 14.20 ± 0.01 MHz (curve d). FIG. 7A shows two-step modification of tags with variable resistors. FIG. 7B shows averaged traces (solid, bold) of frequency responses collected in septuplicate (translucent, narrow traces) of: (a) unmodified tags, Rs ≈ 0.5 Ω; (b) disrupted tags, Rs ≈ 25 MΩ; (c) modified sensor tags before exposure to cyclohexanone (equilibrium vapor pressure at RT), Rs ≈ 30 kΩ; (c*) modified sensor tags after exposure to cyclohexanone (equilibrium vapor pressure at RT) for one minute, Rs ≈ 70 kΩ; (d) single trace of frequency response in the absence of any tags. The inset shows normalized, frequency-dependent smartphone RF-signal attenuation (backscatter modulation) of (a), (b), (c), and (c*).
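Equation (2) can be verified with the resistance values reported above. The sketch below shows why R ≅ Rs holds when Rs << Rcircuit; the function name is illustrative:

```python
def measured_resistance(r_s: float, r_circuit: float = 22.5e6) -> float:
    """Total measured R from equation (2): 1/R = 1/Rs + 1/Rcircuit, in Ohms."""
    return 1.0 / (1.0 / r_s + 1.0 / r_circuit)

# With Rs = 30 kOhm and Rcircuit = 22.5 MOhm (values from the text),
# the parallel combination is dominated by the much smaller Rs:
r = measured_resistance(30e3)
print(round(r))  # ~29960 Ohm, i.e. R is approximately Rs
```

Because the sensor resistances used in the study (tens of kΩ) are roughly three orders of magnitude below Rcircuit, the error in taking R ≅ Rs stays near 0.1%.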
FIGS. 8A-8D show the correlation between the readability of the chemiresistive tags by a Google Nexus-S smartphone as a function of f0 and Rs for three different chemiresistive materials (9B pencil, SWCNTs, and a 4:1 (mass) blend of 2-(2-Hydroxy-1,1,1,3,3,3-hexafluoropropyl)-1-naphthol (HFIPN) with SWCNTs). FIG. 8A shows correlation of the resonant frequency behavior of tags functionalized with 9B pencil lead (triangle), SWCNT (circle), and 4:1 wt % HFIPN:SWCNT (square) sensors with Rs = 1.5 kΩ to 150 kΩ to their readability (red = unreadable; blue = readable) with a smartphone. FIG. 8B shows correlation of the resonant frequency behavior of functionalized tags before (empty) and after (filled) exposure to cyclohexanone (equilibrium vapor pressure at RT) for one minute to their readability with a smartphone. FIG. 8C shows correlation of the resonant frequency behavior of tags before (empty) and after (filled) exposure to cyclohexanone (equilibrium vapor pressure at RT) for one minute to their readability with a smartphone; arrows indicate vector movement of individual sensors. FIG. 8D shows comparison of the normalized change in resonant frequency to the normalized change in resistance of tags drawn at 10 kΩ (light blue), 50 kΩ (red), and 100 kΩ (black). FIGS. 8A-8D show that the sensors all move in the same general direction and that HFIPN/SWCNT moves the farthest (has the longest vector arrows). These features of the sensing scheme can be exploited by taking advantage of the finite smartphone dynamic transmission frequency range. When the resonant frequency of the tag insufficiently overlaps with the dynamic transmission frequency range, the tag cannot be read by the smartphone, and vice versa. Unmodified tags have a resonant frequency of 13.65 MHz ± 0.01 MHz and disrupted tags have a resonant frequency of 14.20 MHz ± 0.01 MHz. When a chemiresistor is applied, f0 shifts to lower frequency.
As more sensing material is applied, more conductive pathways form and Rs decreases, further lowering the frequency at which the tag resonates. The tag can then be made into a turn-off sensor by drawing a sensor that causes the tag to resonate within, but near the edge of, the readable range of the smartphone. When the chemiresistor is exposed to an analyte, Rs increases, thereby increasing f0 to a value outside of the dynamic transmission frequency range of the smartphone, effectively entering an "off" state. Removal of the analyte leads to the recovery of the sensor to its original value of Rs, bringing f0 within the dynamic transmission frequency range of the smartphone, effectively returning to an "on" state. FIG. 10 illustrates the readability of a commercial RFID tag (Texas Instruments Tag-It HF-1) modified according to Sensing Method 1 with a pristine single-walled carbon nanotube sensor, correlated with the resistance of the sensing material before and after one exposure to nitric acid vapor. FIG. 11 illustrates the readability of a commercial RFID tag (Texas Instruments Tag-It HF-1) modified according to Sensing Method 1 with a cyclohexanone sensor, correlated with the resistance of the sensing material before and after three exposures to cyclohexanone vapor. FIG. 12 shows sensor responses of tags exposed to respective analytes at equilibrium vapor pressures at RT: turn-off responses for (I) cyclohexanone and (III) Windex, and turn-on responses for (II) NOx and (IV) Clorox. FIG. 14 shows the stability of 4:1 wt % HFIPN:SWCNT functionalized sensor tags under ambient conditions over time. Fabrication and Characterization of CARDs A simple two-step modification procedure can be used to make commercial NFC tags chemically sensitive (FIG. 18). FIG. 18 depicts the principle of Sensing Method 3. First, the electronic circuit of the tag was disrupted, rendering the tag unreadable, by removing a section of the conductive aluminum that connects the IC to the capacitor with a hole-puncher.
Then, the LCR circuit was re-completed with conductive nano-carbon-based chemiresponsive materials deposited by mechanical abrasion (FIG. 18). Chemical selectivity in sensing was achieved by harnessing the established properties of chemiresponsive materials. See, Mirica K A, Weis J G, Schnorr J M, Esser B, Swager T M (2012) Mechanical drawing of gas sensors on paper. Angew Chemie Int Ed 51:10740-10745; Mirica K A, Azzarelli J M, Weis J G, Schnorr J M, Swager T M (2013) Rapid prototyping of carbon-based chemiresistive gas sensors on paper. Proc Natl Acad Sci USA 110:E3265-E3270; and Miyata Y, Maniwa Y, Kataura H (2006) Selective oxidation of semiconducting single-wall carbon nanotubes by hydrogen peroxide. J Phys Chem B 110:25-29, each of which is incorporated by reference in its entirety. This study employed two different solid-state chemiresponsive materials, PENCILs (Process-Enhanced Nanocarbon for Integrated Logic), that can be conveniently drawn on a variety of surfaces using an established technique. See, Mirica K A, Azzarelli J M, Weis J G, Schnorr J M, Swager T M (2013) Rapid prototyping of carbon-based chemiresistive gas sensors on paper. Proc Natl Acad Sci USA 110:E3265-E3270, which is incorporated by reference in its entirety. For sensing ammonia (NH3) and hydrogen peroxide (H2O2), common industrial hazards that can be used in improvised explosives, pristine single-walled carbon nanotubes (SWCNTs) compressed in the form of a pencil 'lead' were chosen (P1) (see, Mirica K A, Weis J G, Schnorr J M, Esser B, Swager T M (2012) Mechanical drawing of gas sensors on paper. Angew Chemie Int Ed 51:10740-10745, and Miyata Y, Maniwa Y, Kataura H (2006) Selective oxidation of semiconducting single-wall carbon nanotubes by hydrogen peroxide. J Phys Chem B 110:25-29, each of which is incorporated by reference in its entirety); this material exhibits a well-characterized, dose-dependent chemiresistive response towards these analytes.
A solid composite comprising a 4:1 (wt:wt) blend of 2-(2-Hydroxy-1,1,1,3,3,3-hexafluoropropyl)-1-naphthol (HFIPN) with SWCNTs, generated via solvent-free mechanical mixing within a ball mill, was selected (P2) because this material exhibits high selectivity and sensitivity for cyclohexanone (C6H10O) vapors (a common constituent of plastic explosives) (see, Mirica K A, Azzarelli J M, Weis J G, Schnorr J M, Swager T M (2013) Rapid prototyping of carbon-based chemiresistive gas sensors on paper. Proc Natl Acad Sci USA 110:E3265-E3270; Frazier K M, Swager T M (2013) Robust cyclohexanone selective chemiresistors based on single-walled carbon nanotubes. Anal Chem 85:7154-7158; and Cox J R, Miller P, Swager T M (2011) Interrupted energy transfer: highly selective detection of cyclic ketones in the vapor phase. J Am Chem Soc 133:12910-12913, each of which is incorporated by reference in its entirety). HB pencil 'lead' (P3) was chosen as a negative control because it shows a negligible response towards the concentrations of analytes used in this study. These materials exhibit predictable drift and consistent stability in their electrical resistance (Rs) when deposited on the surface of the NFC tags (FIGS. 22 and 23). A network analyzer was employed to determine f0 and Q of the NFC tags at various stages of modification by measuring the radio-frequency reflection coefficient, S11 (FIGS. 19 and 24). See, Cole P, Ranasinghe D, Jamali B (2004) Coupling relations in RFID systems II: practical performance measurements (2003) AUTO-ID-CENTRE, ADE-AUTOID-WH-003, which is incorporated by reference in its entirety. In tandem, a Samsung Galaxy S4 smartphone (SGS4) was employed to test the readability of the tags ("on"/"readable" and "off"/"unreadable"), and a multimeter was used to estimate the electrical resistance (Rs) of the connection between the capacitor and the integrated circuit within the NFC tag. FIG. 19 Graph A shows a plot that exhibits six notable features.
First, in the absence of any device, the S11 spectrum displays a flat baseline (FIG. 19 Graph A-1). Second, unmodified NFC tags (Rs = 0.3 Ω ± 0.0 Ω) are SGS4-readable ("on") and display a resonant frequency of 13.67 MHz ± 0.01 MHz and Q = 35 ± 1 (FIG. 19 Graph A-2). Third, tags where the electrical connection between the integrated circuit and the capacitor has been disrupted by hole punching (Rs = 23.3 MΩ ± 0.8 MΩ) are SGS4-unreadable ("off") and display f0 = 14.29 MHz ± 0.01 MHz and Q = 85 ± 2 (FIG. 19 Graph A-3). Fourth, when the electrical circuit is recompleted using P2, the resulting CARD-2 (Rs = 16.5 kΩ ± 1.0 kΩ) becomes SGS4-readable ("on"), and has f0 = 14.26 MHz ± 0.02 MHz and Q = 21 ± 1 (FIG. 19 Graph A-4). Fifth, when this CARD-2 is exposed to vapors of cyclohexanone (˜5000 ppm), a significant change in both f0 and Q is observed. After five seconds of exposure, f0 shifts to 14.30 MHz ± 0.01 MHz and Q increases to 32 ± 1 (FIG. 19 Graph A-5), and the tag becomes SGS4-unreadable ("off"). Sixth, after one minute, f0 remains at 14.30 ± 0.00 MHz; Q increases to 51 ± 2 (FIG. 19 Graph A-6), and the tag remains SGS4-unreadable ("off"). Readability of CARDs by the smartphone can be rationalized by estimating the percent of incident power transferred (Pt) from the smartphone to the tag or CARD (FIGS. 19B and 27). For the purposes of this study, the distance of the smartphone to the CARD and the orientation of the smartphone with respect to the CARD were kept constant; however, in a non-laboratory setting, distance and orientation would have to be taken into consideration. The commercial NFC tag (FIG. 19 Graph B-2) absorbs nearly 77% of the RF signal delivered from the smartphone. The disrupted circuit, however, absorbs only 14% of the RF signal from the phone; this amount is insufficient for effective smartphone-tag communication, and the tag is unreadable by the SGS4 (FIG. 19 Graph B-3).
Incorporation of a chemiresponsive material from P2 into this tag creates CARD-2, resulting in the amount of absorbed RF signal increasing to 23%, a sufficient amount of power transfer to enable RF communication ("on") (FIG. 19 Graph B-4). Subsequent exposure of CARD-2 to C6H10O decreases the absorbed RF signal to 19% and results in CARD-2 becoming unreadable by SGS4 (FIG. 19 Graph B-5). Prolonged exposure of CARD-2 to the analyte for one minute leads to a further decrease in absorbed RF signal from the phone (16%) (FIG. 19 Graph B-6). Thus, Pt between the smartphone and CARDs decreases with increasing R. Semi-Quantitative Detection of Ammonia with a Smartphone and CARDs After establishing the correlation between Rs, Pt, and the readability by the smartphone, the ability of CARDs to detect and wirelessly communicate repeated chemical exposure to 35 ppm NH3 gas was tested. To program CARDs (n = 3) for NH3, P1 was integrated with initial Rs = 16.1 kΩ ± 0.6 kΩ into the LCR circuit using the modification method described in FIG. 18, resulting in CARD-1A. Rs was measured, and the SGS4 readability of CARD-1A was tested, in response to four consecutive exposures to 35 ppm NH3 gas (FIG. 27). For clarity, FIG. 20 Graph A summarizes the effect of NH3 (35 ppm) on the resistance and phone readability of a single CARD-1A. Within one minute of exposure to 35 ppm NH3, CARD-1A experienced ΔRs = 5.3 kΩ ± 0.7 kΩ and became unreadable (turned "off") when probed by the phone. Removal of NH3 and recovery under ambient air led to a rapid recovery of Rs and retrieval of phone readability of CARD-1A. After a 20 min recovery under ambient atmosphere, the Rs of CARD-1A recovered to 17.4 kΩ ± 0.6 kΩ (ΔRs = +1.2 kΩ ± 0.3 kΩ from the value of Rs before exposure). Correlating the readability of CARD-1A by SGS4 with Rs enabled an estimate that the "on"/"off" threshold (Rt) for P1 when exposed to NH3 was 20.8 kΩ ± 1.0 kΩ. Below this critical value of Rt, CARD-1A was readable by the SGS4, and it is unreadable when Rs > Rt.
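The Rt threshold behavior can be stated directly in code. The 20.8 kΩ threshold and the CARD-1A resistances are taken from the text; the function name is an illustrative choice:

```python
# Estimated "on"/"off" threshold for CARD-1A (P1, NH3), from the text.
R_T_NH3 = 20.8e3  # Ohms

def card_readable(r_s_ohm: float, r_t_ohm: float = R_T_NH3) -> bool:
    """A CARD is smartphone-readable while Rs stays below the threshold Rt."""
    return r_s_ohm < r_t_ohm

print(card_readable(16.1e3))          # initial Rs of CARD-1A: readable ("on")
print(card_readable(16.1e3 + 5.3e3))  # after delta-Rs = 5.3 kOhm of NH3 exposure: "off"
```

With the reported ΔRs of 5.3 kΩ, Rs climbs to roughly 21.4 kΩ, just past Rt, which is consistent with CARD-1A turning "off" within a minute of exposure and recovering once Rs relaxes below the threshold.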
The well-defined value of Rt in the wireless communication between the smartphone and CARDs fabricated with P1, coupled with the established concentration-dependent response of SWCNTs to NH3, enables semi-quantitation. To demonstrate this concept, two types of CARDs were fabricated in triplicate and designed to turn off in response to crossing different threshold concentrations of NH3: 4 ppm (just below the threshold of human detection of NH3 based on smell) (CARD-1B; initial Rs = 19.2 kΩ ± 0.2 kΩ) and 35 ppm (NIOSH STEL) (CARD-1A; initial Rs = 16.3 kΩ ± 0.5 kΩ) (FIG. 20 Graph B and Table 1). Prior to exposure to NH3, both CARDs were readable by the phone. Exposure to 4 ppm NH3 only turns CARD-1B "off," whereas exposure to 35 ppm NH3 turns both CARDs "off." This concept is general: with sufficient information about the concentration-dependent response of the chemiresponsive sensing elements in the presence of the analytes of interest, CARDs can be programmed to turn "on" or "off" at the designated thresholds of various analytes.

TABLE 1
Estimated Rt of CARDs employed in this study.
Entry   FIG.   PENCIL   Analyte      n    Rt (kΩ)
A       20A    P1       NH3         12    20.8 ± 1.0
B       20B    P1       NH3          9    21.6 ± 0.7
C       21A    P1       NH3          3    20.2 ± 0.5
D       21B    P1       H2O2/H2O     3    22.4 ± 2.4
E       21C    P2       C6H10O       3    24.0 ± 1.8

Discrimination of Analytes with an Array of CARDs The fabrication of arrays of CARDs containing different chemiresponsive materials can also enable the detection and discrimination of multiple analytes using NFC communication (FIG. 21). Three different sensing materials (P1-P3) that produce distinct ΔRs upon interaction with NH3 gas (35 ppm), cyclohexanone vapor (335 ppm), H2O2 vapor (˜225 ppm), and H2O vapor (˜30,000 ppm) were employed. An array of four types of CARDs (each type in triplicate) was produced and used to detect single exposures of the analytes. To detect NH3, CARD-1A (initial Rs = 16.3 kΩ ± 0.6 kΩ) was designed to turn "off" upon exposure to 35 ppm NH3, and turn back "on" upon recovery under ambient conditions (FIG. 21 Graph A-1).
Importantly, CARD-1A does not turn "off" in the presence of the other analytes at the concentrations tested (FIG. 21 Graph A-2,3,4). To detect H2O2, a "turn-on" sensor having an initial condition of being "off" was fabricated by mechanically abrading P1 to obtain initial Rs = 23.4 kΩ ± 0.9 kΩ (CARD-1C). CARD-1C turned "on" and became readable by the SGS4 when it was exposed to the equilibrium vapor of H2O2 (35 wt. % in water), and turned back "off" as it recovered under ambient atmosphere (FIG. 21 Graph B-2). Although the exposures of CARD-1C to water, cyclohexanone, and NH3 led to small to moderate ΔRs (ΔRs = +1.5 kΩ ± 0.6 kΩ for water), these exposures did not invoke a change in its readability by SGS4 (FIG. 21 Graph B-1,3,4). To detect cyclohexanone, a "turn-off" sensor, CARD-2, with an initial condition of being "on" was fabricated by mechanical abrasion of P2 at initial Rs = 18.9 kΩ ± 0.6 kΩ on the surface of the tag. CARD-2 turned "off" within one minute of exposure to 335 ppm cyclohexanone (FIG. 21 Graph C-3). The readability of CARD-2 by SGS4 was reversible, as it turned back "on" within one minute of recovery under ambient air. The value of Rs for CARD-2, however, did not recover to its initial value; rather, it settled at Rs = 15.3 kΩ ± 0.9 kΩ after equilibrating for 10 minutes. This mismatch in Rs may be due to solvent-assisted rearrangement of the sensing material. Importantly, although exposure of CARD-2 to H2O, H2O2, and NH3 produced small ΔRs (FIG. 21 Graph C-1,2,4), these exposures did not alter the readability of this sensor by the smartphone. As a negative control, CARD-3 was fabricated by mechanical abrasion of P3 to obtain Rs = 18.0 kΩ ± 0.6 kΩ. This tag remained readable and did not change its readability in response to the analytes used in this example (FIG. 21 Graph D-1-4). This tag was an important component of an array-based sensing scheme because it validated the integrity of the reader-tag communication protocol and provided a static handle in a codification scheme.
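The four-CARD array readout lends itself to a simple lookup. The sketch below encodes the expected response of each CARD reported in the study (the 4-bit codes appear in the binary-logic discussion in the Methods); the dictionary and function names are illustrative:

```python
# Bit order: CARD-1A (NH3 turn-off), CARD-1C (H2O2 turn-on),
# CARD-2 (cyclohexanone turn-off), CARD-3 (static negative control).
# "1" = the CARD changed readability state for that analyte, "0" = it did not.
RESPONSES = {
    "NH3":           "1000",
    "H2O2":          "0100",
    "cyclohexanone": "0010",
    "H2O":           "0000",  # water does not trip any CARD at the levels tested
}

def identify(code: str) -> str:
    """Look up which analyte (if any) produced a given 4-bit array code."""
    for analyte, expected in RESPONSES.items():
        if code == expected:
            return analyte
    return "unknown"

print(identify("0010"))  # cyclohexanone
```

Because every tag carries a unique identification number, each bit of the code is intrinsically tied to one CARD, so the reader knows which sensor tripped without any per-tag calibration data.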
Methods Conversion of a Commercial NFC Tag into a Programmable CARD (Chemically Actuated Resonant Device) The circuit of an NFC tag was disrupted at the location indicated in FIG. 18 using a circular hole puncher (Bead Landing™, hole diameter = 2 mm). A hole was punched through the tag, effectively removing a portion of the conducting aluminum film (along with the underlying polymeric substrate) connecting the integrated circuit to the capacitor. The circuit was re-completed using mechanical abrasion, by drawing a line with an appropriate PENCIL to bridge the two disconnected ends of aluminum. See, Mirica K A, Azzarelli J M, Weis J G, Schnorr J M, Swager T M (2013) Rapid prototyping of carbon-based chemiresistive gas sensors on paper. Proc Natl Acad Sci USA 110:E3265-E3270, which is incorporated by reference in its entirety. An iterative process of mechanical abrasion of the PENCIL followed by measuring Rs (FIG. 25) with a multimeter (Fluke 114 TRMS Multimeter) was repeated until the desired initial Rs value was achieved. When P1-P3 are deposited on the surface of the NFC tag by mechanical abrasion, they exhibit predictable drift characteristics, which allowed for the drawing of tags to pre-determined specifications (FIGS. 22 and 23). To prevent potential inhalation of particulates generated by the abrasion of PENCIL on NFC tags, this process was carried out in a fume hood. The resulting device was allowed to equilibrate until a stable reading (ΔRs < 0.2 kΩ/10 min) was achieved (˜30 min). All experiments were conducted within 5 h of making a CARD. Programming a CARD-Induced Smartphone Response A response that is unique to a specific tag can be invoked upon successfully establishing communication between the tag and the phone ("on"/"readable") by pre-programming a tag-phone relationship prior to fabrication of a CARD. This study employed the freely available app 'Trigger' (Egomotion Corp; 28 Aug. 2014) to establish the phone-tag relationship.
First, the UID of a tag is registered with the smartphone by scanning it via NFC. Second, one or more tasks are assigned to that specific UID. For example, a task that can be achieved with the use of 'Trigger' is to open another application, such as a note-taking app, that has a pre-defined message written on it. Other possible tasks that can be invoked include opening the e-mail app with a pre-written message, opening a maps app that displays the current location of the smartphone, etc. By programming 'Trigger' to invoke a unique task for each unique tag UID, once the tag has been converted to a CARD, meaningful information about the CARD's chemical environment can be conveyed to the user. Although outside of the scope of this study, this strategy could be improved by creating a customized app that allows more sophisticated smartphone actions in a less cumbersome user-interface architecture. Method for Determining Reflection Coefficient and Readability of CARDs with a Smartphone The reflection coefficient spectra (S11) were collected with a network analyzer (Agilent E5061B). A loop probe was affixed to the outside of a jar cap (VWR, 250 mL) using electrical tape, and a tag or CARD was placed on the inside of the same jar cap using double-sided tape (FIG. 24). Two jars were used for the experiment: one that was empty (i.e., filled with ambient air), and one that contained cyclohexanone (10 mL) and filter paper. The reflection coefficient spectrum was measured and recorded once when the cap was on the empty jar, once after the cap had been on the jar containing cyclohexanone for 5 s, and once after the cap had been on the jar containing cyclohexanone for 1 minute (FIG. 19 Graph A). The readability of the tag or CARD was determined by removing the tag from the jar cap, placing it on a piece of open-cell foam (thickness = 4.5 cm), and approaching the sensor tag with a Samsung Galaxy S®4 running Android™ version 4.3 with the 'NFC Reader' application (Adam Nyback; 5 Jul.
2013) open, held with its back parallel to the sensor tag. A sensor tag was considered "on"/"readable" if the UID could be retrieved within 5 seconds or less of holding the smartphone at ˜2.5 cm distance above the tag. Conversely, the tag was considered "off"/"unreadable" if the UID could not be retrieved under the same conditions. All measurements were performed with the phone oriented such that the parallel plate capacitor of the CARD was perpendicular to the long edge of the phone. The phone was held parallel to the surface on which the tag rested. Correlating Effects of Chemical Exposure on Rs and Smartphone Readability of the CARD A CARD was attached to one side of a plastic petri dish using double-sided tape. Rs was determined by contacting the CARD at the indicated points using a multimeter (Fluke 114 TRMS Multimeter). The readability of the CARD by SGS4 was determined as described above. First, Rs and readability were monitored once a minute under ambient conditions for 10 min to establish a stable baseline prior to chemical exposure. Then, the tag was exposed to the chemical analyte by either a) placing the lid on a jar with saturated vapor (H2O2/H2O or H2O) or b) placing the tag in a Ziploc bag containing an established atmosphere. During the chemical exposure, the tag was not accessible to monitoring with a multimeter, but it could still be interrogated with the smartphone at 1-min intervals. Once exposure was complete, the tag was removed from the container and allowed to recover under ambient atmosphere. During this time, Rs and readability were monitored at 1-min intervals. Binary Logic for Chemical Discrimination Using Arrays of CARDs FIG. 21 Graph E correlates the binary output of tag readability by the phone ("on" and "off") with the identity of four chemical vapors used in this study.
A binary (0 and 1) assignment can be employed in which the presence of a vapor is denoted as "1" and the absence of a vapor is denoted as "0". For example, four unique tags (n = 4) can be employed, each programmed for a specific analyte or as a negative control. Because each tag has a unique identification number, the change in readability of each tag in response to a specified analyte is intrinsically linked to the identity and surmounted threshold of the vapor. The n sensor tags can be arbitrarily arranged into a sequence to provide an n-digit code (### . . . ) that can be used to identify unique gases and vapors. Using this coding scheme, four types of tags (CARD-1A, -1C, -2, and -3), and three types of vapors (NH3, H2O2, cyclohexanone), SGS4 can correctly identify the presence of 35 ppm NH3 as '1000', the presence of vapor of 35% H2O2 dissolved in water as '0100', and the presence of 335 ppm cyclohexanone as '0010.' As one of the most commonly encountered interferents, the presence of H2O vapor would not invoke a response from the sensor tags employed in this study ('0000'). To enable a 4-bit depth measurement, four individual CARDs need to be placed on a surface. The CARDs employed in this study cover an area of 20.3 cm2 each. Thus, four CARDs, which cannot be stacked on top of each other, would cover an area of 81.2 cm2. Practical Considerations and Limitations of the Proposed Sensing Strategy Nine practical considerations and limitations should be taken into account before attempting to implement this sensing strategy: (i) Not all materials are RF transparent. Therefore, the technique can be compromised by the presence of materials that are RF opaque or that reflect RF radiation. (ii) CARDs cannot be stacked on top of one another (please see discussion in Methods under subsection 'Binary Logic for Chemical Discrimination Using Arrays of CARDs').
(iii) Near Field Communication relies on inductive coupling, and therefore the technique is sensitive to its magnetic environment. (iv) The technique, as described in the Methods under subsection 'Method for Determining Reflection Coefficient and Readability of CARDs with a Smartphone', is sensitive to the relative orientation of and distance between the smartphone and CARD. (v) Based on the disclosed findings, the 'on/off' threshold is dictated by the amplitude of power transfer between the smartphone and the CARD. Therefore, the make and model of the smartphone may influence the 'on/off' threshold. (vi) Based on the disclosed findings, the "on/off" threshold is dependent on the PENCIL material. (vii) The chemiresponsive materials employed in this study are unprotected from the atmosphere of the laboratory, and their performance may degrade over time. (viii) Because the sensing element is exposed, the behavior of the chemiresistor may change abruptly if touched or otherwise disrupted. (ix) This technique is demonstrated in the controlled setting of a laboratory. In a non-laboratory setting, human and environmental exposure to nanomaterials would have to be addressed with packaging around the sensing element. General Materials and Methods SWCNTs (purified ≥95% as SWCNT) were kindly provided by Nano-C, Inc. (Westwood, Mass.). 2-(2-Hydroxy-1,1,1,3,3,3-hexafluoropropyl)-1-naphthol (CAS 2092-87-7) was purchased from SynQuest (Alachua, Fla.). NH3 (1% in N2) was custom ordered from Airgas. All NFC tags used in this study (hereafter referred to generically as "NFC tag") were Texas Instruments HF-I Tag-It 13.56 MHz RFID transponder square in-lays (MFG: RI-I11-114A-01), purchased from DigiKey. Choice of Tags This example uses commercially available Texas Instruments HF-I Tag-It Plus Transponder Inlays (TI-Tag) to demonstrate the concept of converting a commercially available NFC tag into a chemical sensor.
These tags were chosen based on their chemically robust substrate, absence of protective polymeric coating over the circuitry, commercial availability, and low cost. The electronic circuitry of the unmodified tags is supported via polyurethane glue on both sides of a thin (47 μm), flexible sheet of polyethylene terephthalate, which also serves as a dielectric layer for the capacitor. The circuit comprises an aluminum antenna that serves as an inductor (L), a parallel-plate aluminum capacitor (C), and a silicon-based integrated circuit (IC) chip (R), all connected in parallel, forming an LCR resonant circuit (FIG. 18). Choice of Analytes The selective detection of a target chemical analyte is a necessary requirement for any functional ultra-low-cost distributed chemical sensor. This requirement was achieved in a manner that does not employ extensive data analysis or computationally intensive interpretation, and achieves selectivity towards analytes by harnessing the established properties of chemiresponsive materials. See, Mirica K A, Weis J G, Schnorr J M, Esser B, Swager T M (2012) Mechanical Drawing of Gas Sensors on Paper. Angew Chemie Int Ed 51:10740-10745, and Mirica K A, Azzarelli J M, Weis J G, Schnorr J M, Swager T M (2013) Rapid prototyping of carbon-based chemiresistive gas sensors on paper. Proc Natl Acad Sci USA 110:E3265-E3270, each of which is incorporated by reference in its entirety. Ammonia (NH3) gas and vapors of cyclohexanone (C6H10O), hydrogen peroxide (H2O2), and water (H2O) were targeted as model analytes for the detection of industrial, agricultural, and safety hazards.
(i) NH3 is commonly emitted in industrial and agricultural settings and is toxic at relatively low levels (3); (ii) cyclohexanone is a volatile organic compound (VOC), commonly used for recrystallization of explosives, such as RDX (4), that can also aid their detection (5); (iii) H2O2 can be employed in improvised explosive devices (IEDs), is a commonly employed industrial reagent, and is routinely used for sanitization (e.g., in hospitals). Choice of Smartphone An off-the-shelf smartphone was utilized to demonstrate the capability for wireless chemical sensing. This type of detector would be compatible with a highly-distributed network of sensors accessible to a large number of people. In this context, the Samsung Galaxy™ S4 (SGS4) was chosen as the primary NFC-enabled smartphone as a result of two factors: (i) Samsung's Galaxy series is amongst the most widely distributed "smart" mobile devices in history; (ii) the SGS4 runs on Android, one of the most widely distributed operating systems that supports NFC applications. The demonstrated wireless chemical sensing via NFC is applicable to other NFC-enabled devices (FIG. 3). The NFC chip comprises an antenna for inductive coupling with NFC tags, a transmission module with microcontroller for 13.56 MHz carrier signal generation and tag signal demodulation, as well as embedded and external (Subscriber Identity Module (SIM) card) security elements. When used with unmodified TI-tags, the SGS4 can read tags at ˜5 cm standoff distance through solid, non-metallic objects such as paper, plastic, and liquids (FIG. 3). Choice of Smartphone Application The 'NFC Reader' (Adam Nyback; 5 Jul. 2013) and 'NFC TagInfo' (NFC Research Lab; 19 Jul. 2013) applications were used to read the tags, and were freely available from the Google Play™ Store at the time of this report. These applications were chosen because they display the tag's unique identification number without invoking other time- or energy-intensive functions of the smartphone.
For the purposes of this study, the tag is considered "on" or "readable" if the unique identification number can be retrieved within 5 seconds or less of holding the smartphone at ˜2.5 cm distance away from the tag. Conversely, the tag is considered "off" or "unreadable" if the unique identification number cannot be retrieved under the same conditions. Instrumental Analysis The RF signal response of the modified TI-tags and smartphone antennas from 10-20 MHz, as well as the smartphone-transmitted radio frequency signal, were monitored with a custom-made loop probe connected via a BNC cable to a vector network analyzer (VNA) (Agilent E5061B) by measuring the reflection coefficient (S11) at 50 Ω port impedance and 0 dBm input power (FIG. 24). Ball Milling Cyclohexanone sensing material was generated by solvent-free ball milling of SWCNTs with 2-(2-Hydroxy-1,1,1,3,3,3-hexafluoropropyl)-1-naphthol (HFIPN) using an oscillating mixer mill (MM400, Retsch GmbH, Haan, Germany) within a stainless steel milling vial (5 mL) equipped with a single stainless steel ball (7 mm diameter). The milling vial was filled with HFIPN (96 mg) and SWCNTs (24 mg), and the mixture was ball milled for 5 min at 30 Hz. Fabrication of PENCILs PENCILs (Process Enhanced NanoCarbon for Integrated Logic) were fabricated by loading powdered sensing material into a steel pellet press (6 mm internal diameter) (Across International Item #SDS6), and compressing the powder by applying a constant pressure of 10 MPa for 1 min using a hydraulic press (Across International Item #MP24A). Fabrication of Loop Probe Hollow copper tubing covered in heat-shrink wrap was shaped into a square (5 cm × 5 cm) and soldered to a BNC adapter. Heat-shrink wrap was placed over the connection point and was shrunk using a heat gun in a fume hood.
Dilution of Ammonia Delivery of controlled concentrations of NH3 to the sensing devices placed within a gas chamber was performed using a Smart-Trak Series 100 (Sierra Instruments, Monterey, Calif.) gas mixing system at total flow rates between 0.50 and 10.00 L/min. NH3 was diluted with N2. Dilution of Vapors Delivery of controlled concentrations of cyclohexanone vapors to the sensing devices placed within the gas chamber was carried out using a Precision Gas Standards Generator Model 491M-B (Kin-Tek Laboratories, La Marque, Tex.). Cyclohexanone was diluted with N2 at total flow rates of 0.25-0.50 L/min. Gas Chamber A custom gas chamber was fabricated by inserting two plastic syringes (1 mL, NORM-JECT®), one in each bottom corner of a Ziploc® bag (1 L), and sealing with electrical tape. Detection of NH3 Sensor tag data was collected according to the method described above. The sensor tag was kept on the benchtop of a fume hood for 10 minutes, followed by exposure to NH3 in N2 (35 ppm) in a gas chamber for 5 minutes, followed by removal and placement on the benchtop of a fume hood for 10 minutes. This procedure was repeated three more times; after the fourth cycle, the sensor tag was allowed to sit on the fume hood benchtop for an additional 10 minutes. Detection of a Single Exposure of N2 (Negative Control) Sensor tag data (Rs) and readability by the SGS4 were determined according to the method described above. The sensor tag was kept on the benchtop of a fume hood for 10 minutes, followed by exposure to N2 in a gas chamber for 5 minutes, followed by removal and placement on the fume hood benchtop for 20 minutes. Detection of a Single Exposure of NH3 Sensor tag data was collected according to the method described above. The sensor tag was kept on the benchtop of a fume hood for 10 minutes, followed by exposure to NH3 in N2 (4 ppm or 35 ppm) in a gas chamber for 5 minutes, followed by removal and placement on the fume hood benchtop for 20 minutes.
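The controlled concentrations above follow from a simple flow-ratio mass balance on the gas mixing system. The sketch below illustrates the arithmetic; the 10,000 ppm stock concentration in the example is a hypothetical value, not one stated in the text.

```python
def diluted_ppm(stock_ppm: float, analyte_lpm: float, diluent_lpm: float) -> float:
    """Outlet concentration after mixing an analyte stream into N2 diluent:
    C_out = C_stock * Q_analyte / (Q_analyte + Q_diluent)."""
    total_lpm = analyte_lpm + diluent_lpm
    if total_lpm <= 0:
        raise ValueError("total flow must be positive")
    return stock_ppm * analyte_lpm / total_lpm

# Hypothetical example: a 10,000 ppm NH3 stock metered at 0.035 L/min into
# 9.965 L/min of N2 gives 35 ppm at a 10.00 L/min total flow.
print(diluted_ppm(10_000, 0.035, 9.965))  # 35.0
```

The same relation applies to the cyclohexanone dilutions at 0.25-0.50 L/min total flow.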
Detection of a Single Exposure of C6H10O Sensor tag data was collected according to the method described above. The sensor tag was kept on a benchtop underneath a ventilation snorkel for 10 minutes, followed by exposure to cyclohexanone (C6H10O) in N2 (335 ppm) in a gas chamber for 5 minutes, followed by removal and placement on a benchtop underneath a ventilation snorkel for 20 minutes. Detection of a Single Exposure of H2O2 Sensor tag data was collected according to the method described above. The sensor tag was kept on the benchtop of a fume hood for 10 minutes, followed by exposure to H2O2/H2O (Peq) in a plastic Ziploc® bag containing an open jar of H2O2/H2O (35%) for 5 minutes, followed by removal and placement on the fume hood benchtop for 20 minutes. Detection of a Single Exposure of H2O Sensor tag data was collected according to the method described above. The sensor tag was kept on the benchtop of a fume hood for 10 minutes, followed by exposure to H2O (100% humidity in air) in a plastic Ziploc® bag containing an open jar of water for 5 minutes, followed by removal and placement on the fume hood benchtop for 20 minutes. Semi-Quantitative Detection of NH3 A sensor tag for 4 ppm NH3 (CARD-1B) was fabricated with Rs = 19.2 kΩ ± 0.2 kΩ, and a sensor tag for 35 ppm NH3 (CARD-1A) with Rs = 16.3 kΩ ± 0.5 kΩ. Prior to exposure to NH3, both types of tags were “on” and readable by the phone (FIGS.20B and28). Upon exposure to 4 ppm NH3, CARD-1B turned “off” within one minute of experiencing a change to its local environment, while CARD-1A remained “on”. After five minutes of exposure to 4 ppm NH3, CARD-1B had Rs = 21.9 kΩ ± 0.4 kΩ (ΔRs = 2.8 kΩ ± 0.4 kΩ); CARD-1A displayed Rs = 18.8 kΩ ± 0.3 kΩ (ΔRs = 2.6 kΩ ± 0.1 kΩ). The same type of experiment, with a new batch of CARD-1A and CARD-1B, each fabricated in triplicate, was repeated for 35 ppm NH3 (FIG.20, Graph B).
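The two tags with different baseline resistances act as a crude two-level comparator: at 4 ppm only the more sensitive CARD-1B crosses the readability threshold, while at 35 ppm both do. A minimal sketch of that decision logic follows; the returned range labels are illustrative, not taken from the text.

```python
def classify_nh3_exposure(card_1b_readable: bool, card_1a_readable: bool) -> str:
    """Map the on/off pattern of the two CARDs to a coarse NH3 range.
    CARD-1B trips near 4 ppm; CARD-1A only at higher doses (e.g., 35 ppm)."""
    if card_1b_readable and card_1a_readable:
        return "below detection (~<4 ppm)"
    if not card_1b_readable and card_1a_readable:
        return "low exposure (~4 ppm)"
    if not card_1b_readable and not card_1a_readable:
        return "high exposure (~35 ppm or more)"
    return "inconsistent reading"  # 1A off while the more sensitive 1B is still on

print(classify_nh3_exposure(False, True))   # low exposure (~4 ppm)
print(classify_nh3_exposure(False, False))  # high exposure (~35 ppm or more)
```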
Under these conditions, both CARDs turned “off” (ΔRs = 6.0 kΩ ± 0.5 kΩ): CARD-1B Rs increased to 25.8 kΩ ± 0.6 kΩ (ΔRs = 6.3 kΩ ± 0.1 kΩ), and CARD-1A Rs increased to 21.9 kΩ ± 0.8 kΩ (ΔRs = 5.4 kΩ ± 0.8 kΩ), both above the readability threshold. Determination of Estimated Power Transfer from SGS4 to CARDs The power transferred from the SGS4 to CARD-2 at each stage of fabrication was determined according to a seven-step procedure: (i) collecting S11 spectra (n=5) (10 MHz-20 MHz) of the SGS4-generated signal and averaging them into a single SGS4-signal spectrum; (ii) collecting S11 spectra (n=5) (10 MHz-20 MHz) at each stage of modification of a tag leading to the formation of CARD-2; additionally, S11 spectra (n=5) (10 MHz-20 MHz) of CARD-2 were collected before and after exposure to saturated cyclohexanone vapor, as described in FIG.19, Graph A; (iii) averaging the spectra collected in step (ii) into a single spectrum for each tag modification stage and for the gas exposure scenario; (iv) zeroing the SGS4-signal spectrum and each spectrum from (iii) according to their response at 20 MHz; (v) adding the zeroed SGS4-signal spectrum from (iv) to each zeroed tag and CARD-2 spectrum from (iv) to yield SGS4-tag composite spectra (FIG.26, Graph A); (vi) determining the power reflected back to the network analyzer, Pre, according to Equation 3: S11 = 10 log10(Pre/Pin) (3), where the incident power (Pin) is 0 dBm (1 mW) (FIGS.26B and26C); and (vii) estimating the percent power transferred in each case (Pt) (FIG.19, Graph B) by Equation 4 (FIG.26, Graph C): Pt (%) = [(∫[13.53 MHz, 13.58 MHz] Pre,SGS4 df − ∫[13.53 MHz, 13.58 MHz] Pre,x df) / ∫[13.53 MHz, 13.58 MHz] Pre,SGS4 df] × 100% (4), where x corresponds to scenarios 1-6 described in FIG.19, Graph A of the main text. Determination of Rt The “on”/“off” threshold, Rt, was estimated (Table 1) by taking the average of the median Rs values found between the “last” Rs correlated with an unreadable CARD and the “first” Rs correlated with a readable CARD, during recovery from a given exposure to analyte.
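Equations 3 and 4 can be applied numerically to measured S11 spectra. The sketch below uses an explicit trapezoidal integral and synthetic spectra in place of the measured ones; it shows the conversion from S11 in dB to reflected power and the band-limited percent-power-transfer calculation.

```python
import numpy as np

def reflected_power(s11_db, p_in=1.0):
    """Equation 3 inverted: S11 = 10*log10(Pre/Pin)  ->  Pre = Pin * 10**(S11/10)."""
    return p_in * 10.0 ** (np.asarray(s11_db) / 10.0)

def _trapz(y, x):
    """Trapezoidal integral, written out explicitly for clarity."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def percent_power_transferred(f_hz, s11_phone_db, s11_tag_db, band=(13.53e6, 13.58e6)):
    """Equation 4: power absorbed by the tag over the carrier band, relative to
    the power reflected when the phone radiates with no tag present."""
    m = (f_hz >= band[0]) & (f_hz <= band[1])
    p_phone = _trapz(reflected_power(s11_phone_db[m]), f_hz[m])
    p_tag = _trapz(reflected_power(s11_tag_db[m]), f_hz[m])
    return (p_phone - p_tag) / p_phone * 100.0

# Synthetic check: a tag reflecting -10 dB across the band absorbs ~90% of
# the power relative to a 0 dB (no-tag) baseline.
f = np.linspace(13.50e6, 13.60e6, 201)
print(percent_power_transferred(f, np.zeros_like(f), np.full_like(f, -10.0)))  # ~90.0
```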
Other embodiments are within the scope of the following claims.
DETAILED DESCRIPTION Various examples of the present technology are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the present technology. Basic security features of a payment card can include a unique account number, signature panel, expiration date, magnetic stripe, security code, etc. Some payment cards also include a user photo or use a virtual card number or a temporary purchase number as additional security features. Despite all these security features, however, use of a payment card is still not entirely secure, with exposure of the account number printed on the payment card being one of the greatest security vulnerabilities. There is a need for improved methods for providing and using a payment card. In one example, a payment card comprises a card substrate and a personalization layer overlaying the card substrate. The personalization layer comprises at least a first region and a second region. The first region includes an account number associated with an account of a user, and the account of the user is maintained by a payment service system that issues the payment card. At least one of the first region or the second region includes a thermochromic ink. The thermochromic ink is an ink that changes color when a temperature increases or decreases. When heat is applied to the first region or the second region of the payment card having the thermochromic ink (e.g., when the user touches the first or second region), a color of the thermochromic ink in the first and/or second region may change. As a result, the account number can be revealed.
In some examples, the first region and the second region are substantially identical in color at room temperature such that the account number included in the first region is invisible at room temperature. Additionally or alternatively, in some examples, the payment card includes a heating element coupled to a near field communication (NFC) chip embedded in the payment card. The heating element can cause a temperature of the payment card to change in response to a signal received from the NFC chip, and the NFC chip can send the signal in response to an interaction of the user with a mobile application executing on a device of the user. Thus, the temperature and the color of the payment card can be controlled by a user interaction on the device of the user (e.g., a mobile phone) and/or through a physical touching of the payment card by the user. Further, in some examples, a temperature reading of the payment card can be determined (e.g., based on the color of the payment card) and then used to identify a location of the payment card. The location of the payment card during a transaction helps validate the transaction, for example, by confirming that the color of the card is consistent with the temperature in an expected location of the card. Also, biometric information of the user using the payment card can be obtained to control the temperature adjustment of the payment card by the heating element and to further control whether to reveal or conceal information embedded on the card (e.g., the account number of the payment card), or information shown with a graphical display area of the payment card. Advantageously, the technology described herein solves an information exposure problem of payment cards. 
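One way to use a temperature reading for transaction validation, as described above, is to check that the card's inferred temperature is plausible for the ambient conditions expected at the claimed location. The sketch below is a hypothetical illustration; the tolerance values and the idea of a fixed body-heat margin are assumptions, not specified in the text.

```python
def plausible_card_temperature(card_temp_c: float, ambient_c: float,
                               body_heat_margin_c: float = 15.0) -> bool:
    """True if the temperature inferred from the card's color is consistent
    with the expected location: between roughly the local ambient temperature
    and ambient plus an allowance for handling/body heat."""
    return ambient_c - 2.0 <= card_temp_c <= ambient_c + body_heat_margin_c

print(plausible_card_temperature(card_temp_c=30.0, ambient_c=22.0))  # True
print(plausible_card_temperature(card_temp_c=5.0, ambient_c=22.0))   # False
```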
Currently, an account number of a payment card can be easily exposed to strangers, either by unintentional exposure (e.g., the account number is accidentally captured by a photo shared in social networks) or by intentional interception (e.g., the number is remembered and stolen by a person seeing the card). The present technology embeds a security feature where the account number is obfuscated at or around room temperature and is revealed only when the temperature is changed, for example, when a user touches the card or the card is heated with the heating element. This can prevent the account number or other card information from being exposed to people who have a view of the card. Further, the present technology embeds additional layers of security features, such as preventing fraudulent transactions based on location detection using the temperature and color of the payment card, preventing information exposure of the payment card from physical touch of unauthorized users, etc. In general, using the thermo-sensitive payment card in a payment service platform as described herein reduces network congestion (e.g., by reducing the degree of fraudulent transactions) and improves privacy, security, and accuracy associated with handling and using the payment card. The following description provides specific details for a thorough understanding and an enabling description of these implementations. One skilled in the art will understand, however, that the disclosed system and methods may be practiced without many of these details. Additionally, some well-known structures or functions may not be shown or described in detail, so as to avoid unnecessarily obscuring the relevant description of the various implementations. 
The terminology used in the description presented below is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific implementations of the disclosed system and methods. Some frequently used terms are now described. The phrases “in some examples,” “according to various examples,” “in the examples shown,” “in one example,” “in other examples,” “various examples,” “some examples,” and the like generally mean the particular feature, structure, or characteristic following the phrase is included in at least one example of the present invention, and may be included in more than one example of the present invention. In addition, such phrases do not necessarily refer to the same examples or to different examples. If the specification states a component or feature “may,” “can,” “could,” or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic. The term “module” refers broadly to software stored on non-transitory storage medium (e.g., volatile or non-volatile memory for a computing device), hardware, or firmware (or any combination thereof) modules. Modules are typically functional such that they may generate useful data or other output using specified input(s). A module may or may not be self-contained. An application program (also called an “application”) may include one or more modules, or a module may include one or more application programs. In various examples, “room temperature” can be about 20° C. (68° F.), can range from about 20° C. to about 22° C. (72° F.), can range from about 20° C. to about 25° C. (77° F.), or can range from about 15° C. (59° F.) to about 30° C. (86° F.).
In various examples, “substantially identical in color,” “substantially similar colors,” and similar phrases can refer to a color difference (e.g., between two adjacent colors) that is visibly indistinguishable or not perceptible by the human eye (e.g., Delta E<1), only perceptible under close scrutiny (e.g., 1≤Delta E≤2), slightly perceptible (e.g., 2≤Delta E≤10), or perceptibly different but still appear similar (e.g., 11≤Delta E≤49), where “Delta E” is a value representing a “distance” between two colors (e.g., in L*a*b* color space). The preceding summary is provided for the purposes of summarizing some examples to provide a basic understanding of aspects of the subject matter described herein. Accordingly, the above-described features are merely examples and should not be construed as limiting in any way. Other features, aspects, and advantages of the subject matter described herein will become apparent from the following description of Figures and Claims. FIG.1illustrates a payment service network100in accordance with one example embodiment. According to one example, payment service network100includes merchant102that conducts transactions with customer104(or user104) for items106(e.g., goods or services) offered by the merchant102. The payment service network100includes a payment service system108(also referred to as “payment service” or “PSS”) coupled to a merchant point of sale (POS) device105and customer device103via a network110, to authorize payment instruments of customer104. Customer104may engage in transactions with merchant102to obtain items106. Customer104may provide, as shown at112, payment instruments to merchant102along with requests for items106offered by merchant102. In various examples, the payment service system108can be or include an online platform for processing payments126as described herein. 
The payment service system108or online platform can utilize or include one or more server computers, which can be referred to herein as platform servers or payment servers. Merchant102may utilize POS device105for accepting payment from customer104. POS device105may be any mobile or non-mobile device that includes instances of a POS application that executes on the POS device105. The instances of the POS application may be or include a downloadable application provided by the payment service system108, or embedded software running on an all-in-one POS device provided by the payment service system108. POS device105may further include a wireless communication module with wireless communication capabilities (e.g., NFC, Bluetooth, cellular data, etc.), allowing wireless communication between POS device105and other devices with wireless communication capabilities. For example, POS device105may have an NFC-enabled chip that communicates with other NFC-enabled devices. The POS application may be provided by the payment service108and provide POS functionality to POS device105to enable merchant102(e.g., a business or owners, employees, or agents of the business) to accept payments from customer104. In some types of businesses, POS device105may correspond to a store, restaurant, website, or other place of business of the merchant, and thus, may be a fixed location that typically does not change on a day-to-day basis, or may correspond to an Internet commerce site. In other types of businesses, however, the location of POS device105may change from time to time, such as when the merchant operates a food truck, is a street vendor, is a cab driver, etc., or has an otherwise mobile business, e.g., in the case of a merchant who sells goods or services at buyers' homes, places of business, and so forth. As used herein, a merchant may include any business engaged in the offering of goods or services for acquisition by customers.
Actions attributed to a merchant may include actions performed by owners, employees, website servers, or other agents of the merchant, and thus no distinction is made herein unless specifically discussed. In addition, as used herein, the customer104may include any entity that acquires goods or services from a merchant, such as by purchasing, renting, leasing, borrowing, licensing, or the like. Hereinafter, goods and/or services offered by merchants may be referred to as items, e.g., item106. Thus, a merchant and a customer may interact with each other to conduct a transaction in which the customer acquires item106from merchant102, and in return, customer104provides payment112to merchant102. As used herein, a transaction may include a financial transaction conducted between customer104and merchant102. For example, when paying for a transaction, customer104can provide the amount that is due to the merchant using cash or other payment instrument112(e.g., a debit card, a credit card, a stored-value gift card, a check, through an electronic payment application on device103carried by the customer, or the like). The merchant can interact with POS device105to process the transactions, such as by inputting (e.g., manually, via a magnetic card reader, NFC reader, or an RFID reader, etc.) identifiers associated with payment instrument112. For example, a payment instrument of the customer may include a card having one or more magnetic strips for providing card and customer information when swiped in a card reader. In other examples, other types of payment instruments may be used, such as smart cards having a built-in memory chip that is read by the device when the card is inserted into the reader, such as chips that comply with the Europay, MasterCard, and/or Visa (EMV) standard (e.g., EMV cards). 
In other examples, other types of payment instruments include cards or computing devices that communicate via radiofrequencies such as radio frequency identification (RFID) tags, near field communication (NFC) devices, etc. During the transaction, POS device105can determine transaction information describing the transaction, such as an identifier of the payment instrument (e.g., payment card number, account credentials, or other payment device identifier), an amount of payment received from the customer, the item(s) acquired by the customer, a time, location (e.g., street address, GPS coordinates, IP address, etc.) and date of the transaction, a payment card network140associated with the payment instrument, an issuing bank of the payment instrument, a name or user account of the customer, contact information of the customer, type of currency, IP address of POS device105, IP address of customer device103, and so forth. POS device105can send the transaction information to payment service108over network110(e.g., including the Internet), either substantially contemporaneously with the conducting of the transaction (in the case of online transactions) or later when POS device105is in the online mode (in the case of offline transactions). In an offline transaction, POS device105may store information related to the transaction, including, but not limited to, a cost of the transaction, a time of day at which the transaction occurred, a day of the week at which the transaction occurred, a location at which the transaction took place, an item that the customer obtained, an identity and/or contact information of the customer, and a payment instrument used in the transaction. After conducting an offline transaction with customer104, POS device105may provide at least a subset of the stored information to the payment service108over the network110.
The network110may represent or include any one or more wired or wireless networks, such as a Wi-Fi network, a cellular network, the Internet, or the like. In an online transaction, POS device105may send this information to payment service108over network110substantially contemporaneously with the transaction with the customer104. After merchant102receives the payment information from customer104, merchant102may send respective authorization requests, along with information related to the respective transactions, to payment service108, as illustrated at114. Payment service108may include payment processing service126and data store128that stores merchant accounts130and user accounts132, as well as the transaction histories of merchants and users. The payment processing service126may function to receive the information regarding a transaction from POS device105of merchant102and attempt to authorize the payment instrument112used to conduct the transaction. Payment processing service126may then send an indication of whether the payment instrument has been approved or declined back to POS device105, as illustrated at116. Generally, when a customer104and a merchant102enter into an electronic payment transaction, the transaction is processed by electronically transferring funds from a financial account associated with the customer104to a financial account associated with the merchant102. As such, the payment processing service126may communicate with one or more computing devices of a payment card network140(e.g., MasterCard® or VISA®) over network(s)110to conduct financial transactions electronically. Payment processing service126can also communicate with one or more computing devices of one or more banks, processing/acquiring services, or the like over the network110. For example, payment processing service126may communicate with an acquiring bank, an issuing bank, and/or a bank maintaining user accounts for electronic payments. 
Payment processing service126may also communicate with, or access user and merchant accounts maintained by payment service108. In some examples, the payment processing service126can communicate with one or more entities that perform or manage securities transactions and/or cryptocurrency transactions. An acquiring bank may be a registered member of a card association (e.g., Visa® or MasterCard®) and/or may be part of a payment card network140. An issuing bank may issue credit cards to buyers (e.g., customer104) and may pay acquiring banks for purchases made by cardholders (e.g., customer104) to which the issuing bank has issued a payment card. Accordingly, in some examples, the computing device(s) of an acquiring bank may be included in the payment card network and may communicate with the computing devices of a card-issuing bank to obtain payment. Further, in some examples, the customer104may use a debit card instead of a credit card, in which case, the bank computing device(s) of a bank corresponding to the debit card may receive communications regarding a transaction in which the customer is participating. Additionally, there may be computing devices of other financial institutions involved in some types of transactions or in alternative system architectures, and thus, the foregoing are merely several examples for discussion purposes. WhileFIG.1illustrates merchants102sending the transaction data directly to the payment service108as part of the request to authorize the payment instrument112, in some instances other entities (e.g., banks associated with the merchant102or with customer payment instruments112) may provide transaction data, such as part of a batched, periodic process. According to one example, data store128may be used to store merchant accounts130and user accounts132, among other data. User accounts132may store records of user accounts associated with each user of payment service108. 
For example, user accounts132may include a first user account134and a second user account136. Each of the accounts of user accounts132may include information related to the respective balance and transaction history associated with each user account. Each of the user accounts132may include one or more balances associated with a payment service and further include access to external bank accounts. For example, first user account134includes transaction account135and investment account138, and second user account136includes transaction account137and investment account139. According to one example, transaction accounts135and137may include stored balances maintained by payment service108on behalf of its users. Investment accounts138and139may be used by users to save a stored balance towards a particular goal or otherwise to allow payment service108to maintain an investment on behalf of its users. Each user account134and136of user accounts132may also include a loan account representing funds that are loaned to the user by the payment service108. Each user account134and136of user accounts132may further include access to external payment card networks (e.g., payment card network140) to facilitate transactions with credit cards, debit cards, and the like. Furthermore, transaction history for each user account may be stored using an individual log for each user account. For example, first user account134includes transaction activity log142and second user account136includes transaction activity log144. Transaction activity logs142and144may be used to store transaction history for each respective user account, including debits and credits made to the balances thereof. Similarly, transaction history for merchants may be stored in merchant accounts130using an individual log for each merchant. 
According to one example, each of the user accounts132may include stored values of multiple currencies, such as fiat currency, cryptocurrency, equity value, or other monetary value represented by digital assets. Each of the currencies may be stored directly in each account of user accounts132. Each of the user accounts132may further include access to external accounts that facilitate such currencies (e.g., third party cryptocurrency exchanges/wallets, equity holding accounts, etc.). According to one example, merchant accounts130may store information associated with respective ones of the merchants102. For instance, the merchant accounts130may indicate a class of items offered by respective merchants (e.g., coffee items, collectibles, apparel, etc.), a type of business of the merchant (e.g., restaurant, coffee shop, retail store, etc.), a geographical location of the merchant, and the like. In some instances, a computing device associated with the merchant (e.g., POS device105, servers of the merchant, etc.) determines when the customer visits physical premises or a digital presence of the merchant. For instance, the device103of the customer104may include an application (e.g., an application provided by payment service108) that communicates with POS device105of merchant102via near-field communication protocols (e.g., NFC, Bluetooth, etc.). Therefore, when the customer visits the physical premises of merchant102, for example, POS device105may detect the presence of customer device103. The POS device105may accordingly determine that the customer104is present. In another example, one or both of POS device105and customer device103may share its location (e.g., GPS coordinates) to a common service for determining when customer device103and POS device105are located within a proximity threshold of one another, and for mediating a transaction between customer device103and POS device105. 
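The proximity-threshold check between customer device and POS device described above can be sketched as a great-circle (haversine) distance comparison. The 50 m threshold below is an assumed value for illustration, not one given in the text.

```python
import math

def within_proximity(lat1, lon1, lat2, lon2, threshold_m=50.0):
    """True if two GPS fixes are within threshold_m meters, using the
    haversine great-circle distance on a spherical Earth."""
    r = 6_371_000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    distance_m = 2 * r * math.asin(math.sqrt(a))
    return distance_m <= threshold_m

print(within_proximity(37.7749, -122.4194, 37.7749, -122.4194))  # True (same fix)
print(within_proximity(37.7749, -122.4194, 37.7849, -122.4194))  # False (~1.1 km apart)
```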
In another example, customer104may utilize customer device103to check in at the merchant location, and POS device105may receive an indication of this check in. When the customer visits a digital presence of merchant102(e.g., mobile app, website, etc.), customer104may log in or otherwise provide information (e.g., a cookie on the device103) from which the merchant102determines that the customer104is at the merchant location. Of course, while a few examples are listed, it is to be appreciated that the merchant102and/or payment service108may determine when the customer104is present at the merchant location in any other number of ways. In each instance, after payment service108receives an indication that customer104is co-located with merchant102, the payment service108may determine whether to send one or more previously expressed item preferences of the customer104to the merchant102. In addition, customer104may desire to receive an instance of a payments application, such as a mobile wallet application, from the payment service108.FIG.1illustrates that the customer104may send payment-application requests118to payment service108. In response, payment service108may provide instances of the application120back to customer device103. In addition, payment service108may map an identification of the instance of the application120to the user accounts132. FIG.2illustrates a mobile device and payment application200in accordance with one example embodiment. Mobile device202and POS device206may be computing devices with wireless communication modules203and207, respectively, with wireless communication capabilities (e.g., NFC, Bluetooth, cellular data, etc.), allowing wireless communication therebetween. A payment application204is a payment application provided by the payment service210and executes on a user's mobile device202. 
POS device206can include a Point of Sale (POS) application208that is associated with one or more merchant systems and can be used by the customer to purchase products or services. The payment application204and POS application208can also be a website provided by payment service210(e.g., payment service108), or any source website or application that provides a portal to send and accept payments for transactions using payment service210. Applications204and208may be accessible through a web browser (e.g., Chrome® or Safari®) on the mobile device202, according to one example. In another example, applications204and208can be software applications downloadable via an application store (e.g., Google Play Store®, Apple App Store®, etc.). Once accessed or registered into the applications204and208, the web browser or application may remember the credentials (e.g., identification data205) for subsequent visits (for example, through web browser authentication, web cookies, web history, etc.), allowing access to the applications without logging in to an account again. The description herein is with reference to the payment application204and POS application208as installed applications; however, it will be understood that these applications, when accessed as authenticated or unauthenticated applications on a web browser, are within the meaning of the term. In various examples, the mobile device202, the POS device206, and/or the payment service210can be the same as or can include the customer device103, the POS device105, and/or the payment service108, respectively. Payment application204can include an electronic wallet application, money transfer application (e.g., application for sending and receiving money via email or phone), or any other application having stored therein identification data205linked to user accounts of payment service210or other data linked to one or more payment cards and/or bank accounts, both of which may be used by the owner of the mobile device to initiate transactions.
Such transactions can include traditional purchase transactions between customers and merchants or service providers, person-to-person transactions, and the like. Payment application204can also be used to manage internal payment cards (i.e., virtual payment cards issued by payment service108to users having a user account132). As such, options with respect to internal payment cards can be adjusted and managed using payment application204. For example, when a user account of user accounts132includes multiple payment methods (e.g., credit card, bank account, loan account, etc.), payment application204can set one of those payment methods to be the default method for debits or credits when using an internal payment card. In one example, the color of the virtual card as displayed with the mobile application may change dynamically to match the current color of the physical thermo-sensitive payment card. For example, the mobile payment application204may communicate with a physical payment card212using Bluetooth, NFC, or other wireless communication protocol via onboard electronics embedded within a structure of the payment card212. Collectively, all tools for offering payment are herein referred to as payment instruments. For example, payment instruments can refer to mobile device202running payment application204, internal payment cards, external payment cards, NFC-enabled payment cards, etc. The use of the term payment instrument does not imply a mechanism of use. For example, mobile device202may be utilized via NFC protocols (e.g., NFC Data Exchange Format (NDEF), NFC tags, etc.), or via use of software on mobile device202to send messages through web forms, applications, APIs, or messaging applications. 
As an additional example, payment cards, whether internal (e.g., virtual cards) or external (e.g., physical cards), can be presented to a merchant to be read, or a card number can be entered into a terminal under the control of the merchant or under the control of the customer. A payment instrument can include multiple payment instruments, such as when utilizing mobile device202to enter a payment card number. Throughout this description, specific payment instruments may be discussed; however, the specific payment instruments should not be considered limiting, and persons of ordinary skill in the art will appreciate instances in which a payment instrument such as a payment card can be substituted for another payment instrument such as a mobile device, and vice versa.
Thermochromic Payment Card
The customer104(or user104) makes payment to a merchant102through a payment instrument112when conducting a transaction for acquiring item(s)106(e.g., goods or services) from the merchant102, as illustrated inFIG.1. The payment instrument can be a payment card112such as a credit card, a debit card, a gift card, or the like. In certain examples, the payment card112can be a thermochromic payment card that includes a thermochromic ink in one or more regions of the payment card. The thermochromic ink can be or include an ink, coating, material, and/or pigment that changes color when temperatures increase or decrease. For example, when heat is applied to a region of the payment card112having the thermochromic ink (e.g., by the user104touching or pressing finger(s) on the region and transferring body heat to the card112), the color of the thermochromic ink and the region having the thermochromic ink can change. If the region stores any information (e.g., through a printed pattern of the thermochromic ink), the color change of the thermochromic ink can affect the visibility of the stored information.
Thus, by controlling and/or adjusting a temperature of the thermochromic ink on the payment card112, there exists a feasible and efficient way to control and secure the display of the information stored in the payment card112, such as an account number of the payment card112. An example payment card112and associated packaging materials and methods are depicted inFIGS.6-9Cand described below. FIG.3Ais a graphic representation of a top or bottom surface of a payment card112at room temperature, in accordance with certain examples. The payment card112includes at least a first region302and a second region304. The first region302can include or correspond to an account number associated with an account of a user. The account of the user can be maintained by a payment service system (e.g., payment service system108) that issues the payment card112. The second region304can include or correspond to certain background portions of the payment card112, such as regions around or adjacent to characters of the account number. In some examples, the second region304can surround the first region302and/or the two regions can be immediately adjacent to one another. In other examples, the first region302and the second region304can be separated and/or not adjacent to one another. In certain examples, the first region302and the second region304of the payment card112are designed to be substantially identical in color at room temperature. Since different thermochromic inks can have different colors at different temperatures, a type of thermochromic ink is chosen to ensure the first region302and the second region304are substantially identical in color at room temperature. The purpose of such a design is to provide a security feature for hiding certain information of the payment card112when the payment card112is left exposed and untouched at an ambient or room temperature, thereby avoiding unnecessary or inadvertent information exposure.
For example, inFIG.3A, the first region302and the second region304are of similar color (e.g., at room temperature) such that the account number in the first region302is hidden in the background and/or not visible. When the temperature of the first region302or the second region304changes, the colors of the two regions can become differentiated so as to reveal the account number in the first region302. In other words, the account number can be obfuscated unless, for example, a user touches the card112or heat is otherwise applied to the card112. The body heat of the user's finger(s) can cause the color of the first region302or the second region304to change so that the account number stands out or is perceptible. FIGS.3B-3Dare graphic representations of the payment card112upon application of heat, in accordance with certain examples. In some examples, the first region302includes the thermochromic ink and the second region304does not include the thermochromic ink. Alternatively or additionally, the thermochromic ink can be included in the second region304but not in the first region302, or both regions302and304can include a thermochromic ink (e.g., a different thermochromic ink in each region). In some examples, the first region302is a region containing private or sensitive information of the payment card112. In addition to or instead of including the account number of the payment card112, the first region302can include a subregion308of expiration date information, a subregion310of card verification value (CVV) information, and/or other subregion(s) of other account data (not shown). When a user104touches the payment card112, body heat from the user can change the temperature of a thermochromic ink in the card112, which can cause the color of the thermochromic ink to change. 
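One way to think about the reveal mechanism just described is as a contrast function: the account number becomes perceptible once the color difference between the two regions exceeds some threshold. The sketch below is an illustrative model only; the lightness values, temperature drift rate, and threshold are assumptions for demonstration, not measured properties of any ink.

```python
# Illustrative model of the reveal: region 302 carries thermochromic ink
# whose lightness drifts with temperature, while region 304 is printed in
# temperature-stable ink. All numeric values are assumptions.

ROOM_TEMP_C = 21.0

def region_lightness(base, drift_per_degree, temp_c):
    """Lightness (0-100 scale) of a region at a given temperature."""
    return base + drift_per_degree * (temp_c - ROOM_TEMP_C)

def account_number_visible(temp_c, contrast_threshold=10.0):
    # Region 302: thermochromic, lightens as heat is applied.
    first = region_lightness(base=50.0, drift_per_degree=2.5, temp_c=temp_c)
    # Region 304: non-thermochromic background, identical at room temperature.
    second = region_lightness(base=50.0, drift_per_degree=0.0, temp_c=temp_c)
    return abs(first - second) > contrast_threshold

print(account_number_visible(21.0))  # room temperature -> False (hidden)
print(account_number_visible(33.0))  # finger heat applied -> True (revealed)
```

The same model covers the variants in FIGS. 3B-3E: whichever region carries the thermochromic ink, it is the temperature-driven contrast between the regions that reveals or conceals the information.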
FIG.3Billustrates an example in which the account number in the first region302includes a thermochromic ink and other regions of the card112, including the second region304and a background region306, do not include a thermochromic ink. Compared toFIG.3A, the application of heat has caused the color of the account number in the first region302to change and become visible relative to colors in the second region304and the background region306, which do not include a thermochromic ink and have not changed color. In the example ofFIG.3B, the expiration date in the subregion308of the first region302and the CVV in the subregion310of the first region302may not include the thermochromic ink and/or heat may not have been applied to these subregions, such that the color of the expiration date and the CVV has not changed to reveal the expiration date and CVV information. WhileFIG.3Bindicates that the color of the first region302can become lighter upon application of heat, it is understood that the thermochromic inks described herein can become darker and/or change hue upon application of heat, in various examples. FIG.3Cillustrates an example in which the account number, the expiration date and the CVV in the first region302include a thermochromic ink and other regions of the card112, including the second region304and the background region306, do not include a thermochromic ink. Compared toFIG.3B, responsive to the application of heat, not only has the account number become visible, but the expiration date in the subregion308and the CVV in the subregion310have also become visible. FIG.3Dillustrates an example in which the second region304includes a thermochromic ink and the first region302and the background region306do not include a thermochromic ink. 
Compared toFIG.3A, the application of heat has caused the color of the second region304to change relative to colors in the first region302and the background region306, which do not include a thermochromic ink and have not changed color. Because the second region304immediately surrounds the first region302in this example, the color change in the second region304renders the account number in the first region302visible. FIG.3Eillustrates an example in which the first region302and the second region304each includes a different thermochromic ink and the background region306does not include a thermochromic ink. Compared toFIG.3A, the application of heat has caused the color of the first region302to become lighter and the color of the second region304to become darker, such that the account number in the first region302is visible. The color in the background region306has not changed in this example. A secure information exposure mechanism is therefore established based on a security feature of using a type of thermochromic ink that allows an account number or other information on a payment card to be (i) invisible when the card is not in use or is at or near room temperature and (ii) visible when the card is in use, held by a user, or at a temperature above or below room temperature, as illustrated inFIGS.3A-3D. While a physical touch of the payment card can be used to heat the card, in other examples, signals can be generated (as described below with reference toFIG.4) to change the temperature of the thermochromic ink embedded in or on the payment card, and further to change a color of the card and render the card number or other information visible based on the color change.
Payment Card Components
FIG.4illustrates a payment card112, in accordance with certain examples. The payment card112can be made of multiple layers of plastic or other materials laminated together.
In certain examples, the payment card112includes at least a card substrate402and a personalization layer404. The card substrate402may include important card information used in conducting financial transactions and/or security features used for preventing unauthorized or fraudulent card uses. For example, the card substrate402may include a magnetic stripe that is encoded with binary information for identifying the card as an authentic card associated with an account of a user. In the depicted example, the card substrate402includes a heating element410, a near field communication (NFC) chip412, a temperature sensor416, and a battery418. The personalization layer404may include card information customized for the account of the user (e.g., an account number and/or the user's name) and additional security features. For example, the personalization layer404can include a first region302, a second region304, and a biometric element414. The first region302can include the account number or other information associated with the account of the user. The account number or other information can be printed or coated on the personalization layer404and/or on an outer surface or inner layer of the card112. The second region304can include one or more background portions of the payment card112, such as regions surrounding or adjacent to the first region302. The first region302and/or the second region304can include a thermochromic ink, as described herein. In some examples, the account number of the payment card112is obfuscated at or near room temperature and is revealed when a user touches the payment card112or when the temperature of the card is otherwise changed (e.g., in response to temperature-changing signals). The heating element410, the NFC chip412, the biometric element414, the temperature sensor416, and the battery418are depicted in dash-lined boxes to indicate that these components can be optional or can reside in other layers of the payment card112. 
For example, additionally or alternatively, the heating element410can be embedded in the personalization layer404and/or the biometric element can be embedded in the card substrate402, while the temperature sensor416can be optional. The heating element410is configured to change a temperature of the payment card112. In certain examples, the heating element410is coupled to the NFC chip412embedded in the payment card112. The heating element410receives a signal from the NFC chip412to change the temperature of the payment card112. The NFC chip412communicates with a mobile application executing on a device of the user, e.g., a mobile device103. Based on an interaction of the user with the mobile application, the NFC chip412is configured to receive a signal from the mobile application and transmit a signal to the heating element410to cause the heating element410to change the temperature of the payment card112. For example, a user can set up an alert using the mobile application executing on his/her mobile phone. When an alert event happens, e.g., a credit limit of the payment card112is reached or a transaction using the payment card112fails, the mobile application can generate a signal and transmit the signal to the heating element410through the NFC chip412. Responsive to receiving the signal, the heating element410can heat up one or more regions of the payment card112having the thermochromic ink to change the color of the one or more regions. The changed color of the payment card112provides a visually distinctive alert that calls the user's attention. Typically, the heating element410and the battery418work together to change a color of the payment card112. The heating element410along with the battery418, in certain examples, can be configured to perform localized heating to a specific region of the payment card112having a thermochromic ink to reveal a message or a card/account number. 
Therefore, even if another region of the payment card112also has the thermochromic ink, only the specific region where the temperature was increased by the heating element410may change color. In some examples, the heating element410can be configured to receive a signal specifying the color and/or temperature from the mobile application via the NFC chip412, and to change the temperature and color of the payment card112according to different signals. For example, if a credit limit is reached, the heating element410can receive a first signal specifying a first temperature and change the color of a region of the payment card112to red based on the first signal. In another example, if a transaction fails due to insufficient funds, the heating element410can receive a second signal specifying a second temperature and change the color of a region of the payment card112(e.g., the same region) to purple based on the second signal. The card temperature can be chosen such that a resulting color reflects or matches an event of a specific type and/or is associated with a specific merchant. For example, when a determination is made (e.g., by the mobile application) that the payment card is in, near, or being used to make a purchase with a particular merchant, the temperature can be adjusted to achieve a color that matches a color used by the merchant (e.g., as part of trademark or trade dress used by the merchant). In various examples, the colors achievable by a thermochromic ink used in the card can be mapped to specific temperatures (e.g., using a lookup table or mathematical function). Such a mapping can be developed by measuring the ink's color at a variety of temperatures. Additionally or alternatively, the heating element can perform localized heating of the card to achieve localized temperature changes that present messages to the user.
Such messages can include, for example, one or more letters or numbers, a trademark or logo (e.g., for a merchant), and/or information related to an account balance or a recent transaction. Advantageously, the heating element410combined with the NFC chip412and the mobile application allows the temperature and color of a payment card to be changed without requiring a user to touch the payment card. This automatic color change of the payment card can be used to provide a user with messages, notify a user of issues related to the payment card (e.g., credit or security issues), and/or remind the user to take timely actions. Such notifications can be particularly advantageous in reducing computer and network resource usage and improving user experience. In certain examples, the payment card112includes the temperature sensor416, which can be part of the heating element410or separate from the heating element410. The temperature sensor416can be configured to obtain a temperature reading of the payment card112and transmit the reading via the NFC chip412to another component and/or device for processing. For example, the temperature sensor416can generate a temperature reading during a transaction involving the payment card112and pass the reading to the NFC chip412for transmitting to a mobile application executing on the device of the user. In certain examples, responsive to receiving the temperature reading, at least one of the device of the user (e.g., mobile device103) or the payment service system108communicating with the mobile device103can use the temperature reading to confirm a physical location of the payment card112during the transaction, and then continue, stop, or report the transaction based on the confirmed physical location. Suppose, for example, that the payment card112is being used to make a transaction at a farmer's market or other outdoor location on the northeastern coast of the United States during winter. 
If the temperature reading of the payment card112is higher than an expected or actual outdoor temperature at the location, the temperature reading can indicate that the card is not physically present at the location, and this can indicate that the pending transaction is fraudulent. Thus, the mobile application or the payment service system108can stop the transaction, report the transaction, and/or recommend that action be taken (e.g., by the merchant) to confirm that the transaction is not fraudulent. On the other hand, a low temperature reading can indicate that the payment card112is being exposed to a temperature that is consistent with the location of the pending transaction. In such instances, the mobile device103or the payment service system108can permit the transaction to proceed. In some examples, a camera, rather than the temperature sensor416included in the payment card112, can be used to obtain a temperature reading of the payment card112. A user can use the camera associated with the device of the user (e.g., a mobile phone camera) to take a picture of the payment card112involved in a transaction and communicate the picture to a mobile application of the device of the user. The mobile application can then detect the temperature of the payment card112based on the color of the card (e.g., black indicates the temperature is 50-60° F., or red indicates the temperature is 70-80° F.). Such information can be used to determine whether the payment card is being used in an expected location for a pending transaction. Based on the determined temperature or physical location of the payment card112, the mobile application or the payment service system108can determine whether to continue, stop, or report the transaction. Therefore, the location information determined based on the temperature or color of the payment card112can be used to safeguard transactions and improve security associated with use of the card. 
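The temperature-consistency check described above can be sketched as follows. The color-to-temperature ranges mirror the example values given in the text (black for 50-60° F., red for 70-80° F.), while the tolerance value and decision labels are illustrative assumptions.

```python
# Sketch of the temperature-based location check used to safeguard
# transactions. The color-to-temperature ranges come from the example in
# the text; the tolerance and decision labels are assumptions.

COLOR_TO_TEMP_F = {"black": (50, 60), "red": (70, 80)}

def temp_from_card_color(color):
    """Estimate the card's temperature (deg F) from a photo of its color."""
    low, high = COLOR_TO_TEMP_F[color]
    return (low + high) / 2

def transaction_decision(card_temp_f, expected_ambient_f, tolerance_f=15.0):
    """Return 'report' if the reading is inconsistent with the location."""
    if abs(card_temp_f - expected_ambient_f) > tolerance_f:
        return "report"  # card may not be physically present at the location
    return "proceed"

# Winter farmer's market on the northeastern U.S. coast, ambient ~30 deg F:
print(transaction_decision(temp_from_card_color("red"), 30.0))  # -> report
print(transaction_decision(35.0, 30.0))                         # -> proceed
```

In practice the expected ambient temperature would come from the transaction's location (e.g., a weather lookup), and a "report" outcome could trigger the stop, report, or merchant-confirmation actions described above.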
In some examples, the payment card112can include a biometric element414to further secure the use of the payment card. The biometric element414can be configured to obtain biometric information from the user and transmit the information to the mobile application executing on the device of the user via the NFC chip412. The mobile application can send a signal to the heating element410to adjust the temperature of the payment card112based on the biometric information. In some examples, when a user puts his or her fingers on the payment card112, the biometric element414can obtain the fingerprint of the user (e.g., through a fingerprint reader included in the biometric element) and transmit the obtained fingerprint to the mobile application. The mobile application can compare the obtained fingerprint with the fingerprint of an assigned user, and further communicate with the heating element410through the NFC chip412to adjust a temperature of the payment card112based on the comparison result. For example, the heating element410can be configured to adjust a temperature of the first region302or the second region304to reveal or conceal the account number in response to the communication received by the NFC chip from the mobile application. By applying heat with the heating element410, for example, the account number can be revealed based on a fingerprint match (e.g., when the user touching the payment card is the assigned user). Likewise, the account number can remain concealed when there is a fingerprint mismatch (e.g., when the user touching the payment card is not the assigned user), by not applying heat with the heating element410. Thus, the account number can be kept hidden when unauthorized users touch the card, thereby improving security. As described, the thermochromic ink may produce certain colors at certain temperatures (e.g., red at 70 degrees, white at 80 degrees, etc.).
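Combining the fingerprint gate with the color-at-temperature behavior just described, the control logic on the mobile application side might be sketched as below. The fingerprint representation, temperature values, and function names are all illustrative assumptions; real fingerprint matching would use template comparison, not string equality.

```python
# Hedged sketch: the mobile application compares the captured fingerprint
# against the assigned user's and, on a match, selects a target temperature
# that produces the desired color. Values and names are assumptions.

COLOR_TO_TEMP_F = {"red": 70.0, "white": 80.0}  # example mapping from the text

ASSIGNED_FINGERPRINT = "assigned-user-template"  # opaque stand-in value

def heating_command(captured_fingerprint, desired_color="red"):
    """Return the target temperature the NFC chip should relay, or None."""
    if captured_fingerprint != ASSIGNED_FINGERPRINT:
        return None  # mismatch: no heat, account number stays concealed
    return COLOR_TO_TEMP_F[desired_color]  # match: heat to reveal/display

print(heating_command("assigned-user-template"))  # -> 70.0 (reveal)
print(heating_command("someone-else"))            # -> None (stay hidden)
```

A `None` result corresponds to the mismatch case above: the heating element is simply not activated, so the thermochromic regions remain at their concealing room-temperature color.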
The heating element, in communication with the payment application executing on the device of the user, may be controlled via the mobile application to cause the card to heat to a desired temperature in order to produce a particular color based on a transaction or other event caused by the user. For example, the user may activate an incentive associated with a merchant via the mobile app and a virtual card displayed with the user interface, and consequently cause an animation (e.g., virtual card turns to a color associated with the incentive or the merchant) to appear both on the user interface of the mobile application and also cause a corresponding color to appear on the physical card through temperature control via the heating element and the thermochromic ink. Additionally or alternatively, in certain examples, the payment card112can include a graphical display, such as an electronic ink (E-Ink) display or an LCD display. The graphical display can be used to present information related to the payment card112, a user of the payment card112(e.g., an image of the user), a transaction made with the payment card112(e.g., a payment amount), or a merchant associated with a transaction made with the payment card112(e.g., a name of the merchant). For example, in some instances, the graphical display can display an account number, an expiration date, and/or a CVV. Additionally or alternatively, the graphical display can be used to display a bar code or a QR code (Quick Response code). For example, the user may receive a gift card that includes a QR code or a bar code that can be presented to a merchant for payment. The graphical display can be used to display the QR code or bar code, which can then be presented to an optical scanner of the merchant for payment. Thus, the graphical display can be used to communicate e-gift card identifier information. 
Similarly, the graphical display can be used to communicate sports event ticket information, airline ticket information, or concert ticket information. FIG.5is a flowchart of a method500of using a payment card, in accordance with certain examples. A payment card (e.g., payment card112) is provided (step502). The payment card includes an account number associated with an account of a user and has a thermochromic ink. In some examples, the payment card includes a card substrate and a personalization layer overlaying the card substrate. The personalization layer includes a first region and a second region. The first region includes the account number associated with the account of the user. The account of the user is maintained by a payment service system (e.g., the payment service system108) that issues the payment card. At least one of the first region or the second region includes the thermochromic ink. The thermochromic ink is an ink that changes color when temperature increases or decreases. For example, when the user touches the card, body heat from the user can cause a color of the payment card to change. Additionally or alternatively, when the payment card is exposed to a new ambient temperature, the temperature and color of the payment card can change. In some examples, a temperature sensor in the card can be used to determine when the user has touched the card or when heat has otherwise been applied to (or removed from) the payment card, as described herein. In certain implementations, the payment card includes a heating element (e.g., heating element410) and an NFC chip (e.g., the NFC chip412) that communicates with a mobile application executing on a device of the user. The mobile application can determine (step504) that a color of the payment card should be changed (e.g., to reveal the account number or send a message to the user). In response, the user device can send (step506) a signal to the NFC chip on the payment card.
The NFC chip can then activate (step508) the heating element to change a temperature and color of the payment card. The heating element can dynamically adjust a temperature of the payment card. Application of heat to the payment card can cause a color change that reveals the account number of the payment card. Otherwise, the account number of the payment card may remain hidden from view, for example, if the payment card is at or around room temperature. In some examples, when heat is applied to the first region or the second region of the payment card having the thermochromic ink (e.g., when the user touches the first or second region), the color of the first or second region having the thermochromic ink can change. As a result, the account number can be revealed. In some examples, the first region and the second region are substantially identical in color at room temperature such that the account number included in the first region is invisible at room temperature. Advantageously, revealing the account number upon application of heat can prevent unnecessary or inadvertent exposure of the account number or other information on the card, thereby increasing security of the payment card and user account.
Chameleon Card
In some implementations, the payment card112can be or include a metal substrate and/or can have randomized colors or a random color variation across the surface. Such a card can be referred to herein as a "chameleon card."FIG.6includes perspective views of front and back sides of a chameleon card600. The front side of the chameleon card600includes at least three regions602,604, and606. Each region602,604, and606can have a unique or different color based on different thermochromic inks used in each region. The different colors can be present when a temperature of the card600is uniform (e.g., throughout the card) or non-uniform. The thermochromic inks can be selected and positioned to hide key information of the card600while displaying random colors.
For example, as depicted, the account number associated with the card can be invisible in the regions602,604, or606. When a user touches the card600to apply body heat or a heating element embedded in the card applies heat to the card600in response to receiving a signal, the color of one or more regions602,604, or606can change to reveal the account number. Likewise, the back side of the chameleon card600can include regions608,610, and612that display different colors. For example, the regions608and612on the back side may be opposite of the region602on the front side and/or may show different colors (e.g., a darker color) based on a selection and arrangement of the thermochromic inks on the card600. Additionally or alternatively, the colors on the front side and/or the back side of the card600can be arranged to present customized shapes or images. For example, the card600can be designed and colored based on personalized artwork from a cardholder. In general, the chameleon card600can be visually appealing to attract users and improve an overall user experience. Furthermore, the payment card600may be manufactured in a way such that various semi-reflective or other ink colors are randomly printed across the upper and/or lower surfaces of the card substrate or other layer of the card. In this way, the payment card appears in a chameleon style and may appear to change colors to the user as the card is rotated. This style of card may include thermochromic ink or other ink or dyes (e.g., normal or non-thermochromic) so long as various colors are printed randomly along the surface. In one embodiment, changes in color to the chameleon card may be triggered by a detected change in environmental temperature using the temperature sensor, thermochromic ink, and/or heating element as described above. The various colors or color combinations used to print the card may be selected by the user via input on a user interface associated with the mobile application. 
For example, the user may select a base color of red for their chameleon card and the resulting card can have random colors of red hues printed on the card. In one embodiment, the payment service may utilize machine learning algorithms to select or suggest colors complementary to at least one desired color selected by a user, based on users having profiles similar to the selecting user's, for printing the various colors on the payment card. In some examples, the randomized coloring can be achieved by varying concentrations of thermochromic pigments across the front and/or back sides of the card600. For example, the front and/or back sides can include a mixture of two or more different thermochromic pigments, and the concentrations of the pigments can be varied to achieve a range of colors on the front and/or back sides. Each of the regions602,604, and606, for example, can have a unique combination of the thermochromic pigments. In general, the composition of thermochromic pigments can vary across the card to achieve randomized colors and/or present images at any given temperature.
Chameleon Card Packaging
FIG.7depicts a chameleon card700placed in a packaging container702, in accordance with certain examples. Current card packaging designs usually have notches to hold a payment card; however, the "notched" card holder design can sometimes damage the card during transit. By comparison, the packaging container702utilizes a holding tray704that has ridges706for achieving a friction fit with an outer perimeter of the card700. The holding tray704may be made of paper and/or foam and is configured to secure the card in position, while providing cushioning to protect the card during transit. FIGS.8A and8Billustrate a process800for packaging the chameleon card700(or other payment card) in the packaging container702. The holding tray704is glued (step802) in a box804with a top of the tray704facing a hinged lid806of the box804.
A quick response (QR) label is applied (step808) to an inner side of the lid806, which includes a protrusion809. The protrusion809can be used in conjunction with the ridges706of the holding tray704to support the chameleon card700and/or to provide a friction fit that holds the chameleon card700in place. The chameleon card700can be pressed (step810) into the holding tray704. The box804is then closed (step812) and placed (step814) into an envelope or mailer816, which can be shipped to the card holder. FIGS.9A and9Binclude a variety of views of packaging components that can be used to ship a payment card, such as the chameleon card.FIG.9Aincludes top, bottom, front, section, and side views of the holding tray704. For example, the ridges706can be seen in the front view and the section view of the holding tray704. The box702, inner mailer814, and outer mailer816can be made of recyclable paper and/or include a padded lining.FIG.9Bincludes an image of a cutout902for forming a box (e.g., the box804), which can be made of uncoated kraft folding board.
Computer Implementation
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Any of the steps, operations, functions, or processes described herein may be performed or implemented by a combination of hardware and software services, alone or in combination with other devices. In some examples, a service can be software that resides in memory of a client device and/or one or more servers of a content management system and performs one or more functions when a processor executes the software associated with the service. In some examples, a service is a program, or a collection of programs that carry out a specific function. In some examples, a service can be considered a server.
The memory can be a non-transitory or transitory computer-readable medium. In some examples the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, transitory computer-readable storage media are media such as energy, carrier signals, electromagnetic waves, and signals per se. Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, solid state memory devices, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include servers, laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. 
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Having now fully set forth examples and certain modifications of the concept underlying the present invention, various other examples as well as certain variations and modifications of the examples shown and described herein will obviously occur to those skilled in the art upon becoming familiar with said underlying concept. | 63,733 |
11861439 | MODE FOR CARRYING OUT THE INVENTION

Hereinafter, modes for carrying out the present technology will be described. Description will be provided in the following order.

1. Overview of antenna device
2. Control of response of NFC IC
3. Reduction in influence of eddy currents
4. Modification example
5. Application example
6. Configuration of smartphone
7. Others

1. Overview of Antenna Device

FIG.1illustrates a configuration example of an antenna device11according to an embodiment of the present technology. In a case where an NFC device such as a smartphone12or IC card13is brought close to the antenna device11, the antenna device11performs near field communication (NFC wireless communication) with the NFC device. Both the smartphone12and the IC card13are devices having a predetermined standard NFC communication function. For example, the smartphone12is provided with an NFC reader/writer51that is a reader/writer for NFC communication. Further, the IC card13is provided with an IC chip that performs NFC communication with an external reader/writer or the like and reads/writes data in response to a command transmitted by the reader/writer. As illustrated inFIG.1, the antenna device11is configured by connecting a smartphone antenna22and extension antennas23-1and23-2to a control circuit21. The smartphone12is held over the smartphone antenna22by a user. Meanwhile, the IC card13is held over the extension antenna23-1or23-2by the user. The extension antennas23-1and23-2are antennas for the IC card13. In the example ofFIG.1, the IC card13is held over the extension antenna23-1. The IC card13may be held over the extension antenna23-2, or two IC cards13may be held over the extension antennas23-1and23-2, respectively. As described above, the antenna device11has a single antenna for the smartphone12and a plurality of antennas for the IC card13.
The control circuit21includes a rectifier circuit31, a reset IC32, a power supply IC33, an NFC IC34, a micro controller unit (MCU)35, a switch control circuit36, and switches37-1and37-2. The smartphone antenna22is connected to the rectifier circuit31and the NFC IC34. Further, the smartphone antenna22is connected to the extension antenna23-1via the switch37-1and is connected to the extension antenna23-2via the switch37-2. The rectifier circuit31generates a direct-current voltage (VDC) when the smartphone antenna22receives radio waves output by the NFC reader/writer51provided in the smartphone12. The VDC generated by the rectifier circuit31is supplied to the reset IC32and the power supply IC33. The reset IC32monitors the VDC. The reset IC32activates the power supply IC33in a case where the VDC reaches a predetermined reset release voltage. The power supply IC33supplies VDD generated on the basis of the VDC to the NFC IC34and also supplies the VDD to the MCU35. The NFC IC34is an IC chip provided in the antenna device11. The NFC IC34can be, for example, an IC chip compatible with the FeliCa (registered trademark) standard. The NFC IC34is an IC chip that can perform NFC communication of the same standard as an IC chip provided in the IC card13. At the same time as the VDD is supplied from the power supply IC33to the NFC IC34, for example, power and a command signal corresponding to a carrier wave of 13.56 MHz are supplied from the smartphone antenna22to the NFC IC34. The MCU35is activated in response to the supply of the VDD and operates in response to the command supplied from the smartphone antenna22. For example, a write command that is a command indicating enabling or disabling of the extension antennas is transmitted from the NFC reader/writer51of the smartphone12. In a case where the smartphone antenna22receives the write command, the NFC IC34writes the write command to a memory in the NFC IC34and supplies the write command to the MCU35via an I2C bus.
The NFC IC34functions as an IC chip in the antenna device11for controlling operation of the MCU35in response to the write command. Note that enabling of the extension antenna means that the switch provided between the smartphone antenna22and the extension antenna is turned on so that the smartphone antenna22and the extension antenna have a short-circuit state (electrically connected state). In a case where the extension antenna is enabled, a signal transmitted from the NFC reader/writer51of the smartphone12and received by the smartphone antenna22is supplied to the extension antenna. Meanwhile, a signal transmitted from the IC card13and received by the extension antenna is supplied to the smartphone antenna22. Further, disabling of the extension antenna means that the switch provided between the smartphone antenna22and the extension antenna is turned off so that the smartphone antenna22and the extension antenna have an open state (electrically disconnected state). The MCU35controls a general-purpose input/output (GPIO) in accordance with contents of the write command supplied from the NFC IC34. InFIG.1, the MCU35uses two GPIOs to cause the switch control circuit36to control both the switches37-1and37-2. The MCU35is a controller that switches on/off each of the switches37-1and37-2in response to a write command so as to control enabling or disabling of the extension antennas. The switch control circuit36switches on/off each of the switches37-1and37-2under the control of the MCU35. A duration can also be set as part of each switch's on/off setting. For example, it is possible to set the switch37-1to be on for one second and the switch37-2to be on for five seconds. The switch37-1is provided between the smartphone antenna22and the extension antenna23-1. The switch37-2is provided between the smartphone antenna22and the extension antenna23-2.
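The write-command-to-switch mapping described above can be sketched in software. This is a minimal illustration only; the class and method names are hypothetical and not part of the patent, and the real behavior is implemented by the MCU35driving GPIOs.

```python
OPEN, SHORT = "open", "short"  # open = antenna disabled, short = antenna enabled

class SwitchController:
    """Sketch of the MCU/switch-control logic: one switch sits between
    the smartphone antenna and each extension antenna."""

    def __init__(self, num_antennas=2):
        # All extension antennas start disabled (open state).
        self.states = [OPEN] * num_antennas
        self.on_durations = [None] * num_antennas  # optional on-times, in seconds

    def apply_write_command(self, enable_flags, durations=None):
        """enable_flags: one boolean per extension antenna.
        durations: optional per-switch on-time, e.g. [1, 5] seconds."""
        for i, enabled in enumerate(enable_flags):
            self.states[i] = SHORT if enabled else OPEN
            if durations is not None:
                self.on_durations[i] = durations[i]
        return list(self.states)
```

For example, a write command enabling only the first extension antenna would yield the states `["short", "open"]`, matching the case where the switch37-1is on and the switch37-2is off.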
Hereinafter, in a case where it is unnecessary to distinguish between the switches37-1and37-2, the switches will collectively be referred to as "switches37" as appropriate. Other configurations provided in pairs will be collectively described in a similar manner. In the example ofFIG.1, two extension antennas are provided, but three or more antennas may be provided as extension antennas for the IC card13. In this case, the same number of switches as the number of extension antennas are provided between the smartphone antenna22and the extension antennas. Here, processing performed between the antenna device11and the smartphone12having the above configuration will be described. FIG.2is a sequence diagram showing a flow of the processing performed by the antenna device11and the smartphone12. The processing described with reference toFIG.2is started when, for example, the smartphone12is held over the smartphone antenna22of the antenna device11and the smartphone antenna22receives a radio wave output by the NFC reader/writer51. Each unit of the antenna device11is activated when it is supplied with power generated in response to the smartphone antenna22receiving the radio wave output from the NFC reader/writer51. In step S1, the NFC reader/writer51of the smartphone12transmits a write command. Here, a case of enabling the extension antenna23-1and disabling the extension antenna23-2will be described. The IC card13is held over the extension antenna23-1. In step S11, the smartphone antenna22of the antenna device11receives the write command transmitted from the NFC reader/writer51of the smartphone12. In step S12, the NFC IC34of the antenna device11writes the write command to the memory in the NFC IC34and supplies the write command to the MCU35. In step S13, the MCU35of the antenna device11causes the switch control circuit36to control the switches37in response to the write command.
For example, the switch control circuit36that has received the write command described above turns on the switch37-1and turns off the switch37-2. In step S2, the NFC reader/writer51of the smartphone12transmits a polling command that is a command for making an inquiry. In step S14, the smartphone antenna22of the antenna device11receives the polling command transmitted from the NFC reader/writer51of the smartphone12. The polling command received by the smartphone antenna22is supplied to the extension antenna23-1via the switch37-1. In step S15, the extension antenna23-1of the antenna device11transmits the polling command to the IC card13held over the extension antenna23-1. In step S16, the extension antenna23-1of the antenna device11receives a response from the IC card13. The response to the polling command includes IDm that is identification information of the IC card13. The IDm is identification information of the IC chip provided in the antenna device11or IC card13. Note that another ID may be used as the identification information. The response received by the extension antenna23-1is supplied to the smartphone antenna22via the switch37-1. In step S17, the smartphone antenna22of the antenna device11transmits the response from the IC card13to the smartphone12. In step S3, the NFC reader/writer51of the smartphone12receives the response transmitted from the antenna device11as a response to the polling command. The smartphone12knows that only the extension antenna23-1is enabled, and can therefore determine that the IC card13is held over the extension antenna23-1and also obtain the IDm of that IC card13. As described above, the antenna device11can extend the antenna of the NFC reader/writer51provided in the smartphone12to a plurality of extension antennas.
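The command-relay sequence ofFIG.2can be summarized in a small simulation. All class and method names here are illustrative stand-ins for the patent's hardware components, assumed for the sketch only.

```python
class ICCard:
    """Stand-in for an IC card: answers a polling command with its IDm."""
    def __init__(self, idm):
        self.idm = idm

    def respond_to_polling(self):
        return {"idm": self.idm}

class AntennaDevice:
    """Stand-in for the antenna device: relays commands to whichever
    extension antenna is currently enabled."""
    def __init__(self, cards_on_antennas):
        self.cards = cards_on_antennas  # extension-antenna index -> ICCard
        self.enabled = None             # index of the enabled extension antenna

    def receive_write_command(self, antenna_index):
        # Steps S11 to S13: the MCU turns on one switch and off the others.
        self.enabled = antenna_index

    def receive_polling_command(self):
        # Steps S14 to S17: relay to the enabled antenna; return the card's
        # response, or None if no card is held over that antenna.
        card = self.cards.get(self.enabled)
        return card.respond_to_polling() if card else None
```

Enabling an antenna with a card over it yields that card's IDm; enabling an empty antenna yields no response, which is how the smartphone can tell where cards are placed.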
Further, the antenna device11can perform an operation related to switching for enabling or disabling the plurality of extension antennas, without receiving power from the outside. The smartphone12can enable or disable the extension antennas23by using a command compatible with the IC chip provided in the IC card13or the NFC IC34that is an internal IC chip. The IC chip provided in the IC card13is an IC chip for an IC card, which reads/writes data in a contactless manner in response to a command transmitted by the NFC reader/writer51of the smartphone12. The internal IC chip is an IC chip that controls operation of the MCU35in response to a command and reads/writes data in a contactless manner in response to a command.

2. Control of Response of NFC IC

In the present technology, after a specific extension antenna23is selected and enabled by the smartphone12, NFC communication is desirably performed only between the IC card13held over the enabled extension antenna23and the smartphone12. For example, in a case where the extension antenna23-1is enabled as illustrated inFIG.3, a command transmitted from the NFC reader/writer51of the smartphone12is desirably supplied only to the IC card13held over the extension antenna23-1, and a response from the IC card13held over the extension antenna23-1is desirably returned to the smartphone12. Because the control circuit21of the antenna device11is provided with the NFC IC34capable of performing NFC communication, the NFC IC34may respond to a command transmitted by the smartphone12. There are two methods for preventing the NFC IC34from responding: a method using a polling disable function, and a method using a switch connected to the NFC IC34.

Method Using Polling Disable Function

The polling disable function prevents the IC chip from responding even if a polling command is transmitted from the NFC reader/writer51. The polling disable function is enabled in response to reception of a polling disable command.
FIG.4is a sequence diagram showing a flow of processing using the polling disable function. The processing described with reference toFIG.4is also started when, for example, the smartphone12is held over the smartphone antenna22of the antenna device11and the smartphone antenna22receives a radio wave output by the NFC reader/writer51. The extension antennas23-1and23-2are open as a default state. In step S51, the NFC reader/writer51of the smartphone12transmits a polling command. In step S71, the smartphone antenna22of the antenna device11receives the polling command transmitted from the NFC reader/writer51of the smartphone12. In step S72, the NFC IC34of the antenna device11supplies a response to the polling command to the smartphone antenna22. The smartphone antenna22transmits the response supplied from the NFC IC34to the smartphone12. The response from the NFC IC34includes IDm that is identification information of the NFC IC34. In step S52, the NFC reader/writer51of the smartphone12receives the response from the NFC IC34. In step S53, the NFC reader/writer51of the smartphone12transmits a write command for enabling the extension antenna23-1. Processes in steps S73to S75in the antenna device11are similar to the processes in steps S11to S13ofFIG.2. That is, the switch37-1is turned on and the switch37-2is turned off in response to the write command. In step S54, the NFC reader/writer51of the smartphone12transmits a polling disable command while specifying the NFC IC34on the basis of the IDm of the NFC IC34included in the response received in step S52. In step S76, the smartphone antenna22of the antenna device11receives the polling disable command transmitted from the NFC reader/writer51of the smartphone12. The NFC IC34stops responding to the polling command in response to the supply of the polling disable command. Note that polling disable is automatically canceled when the NFC IC34is reset. 
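The polling-disable behavior just described (the IC is silenced by a command specifying its IDm, and reset cancels the disable) can be sketched as a small state machine. The class and method names are hypothetical illustrations, not an actual FeliCa API.

```python
class NFCIC:
    """Sketch of the internal NFC IC's polling-disable behavior."""

    def __init__(self, idm):
        self.idm = idm
        self.polling_disabled = False

    def handle_polling(self):
        # While polling disable is active, the IC returns no response.
        return None if self.polling_disabled else {"idm": self.idm}

    def handle_polling_disable(self, target_idm):
        # The polling disable command specifies the target IC by its IDm.
        if target_idm == self.idm:
            self.polling_disabled = True

    def reset(self):
        # Polling disable is automatically canceled when the IC is reset.
        self.polling_disabled = False
```

This mirrors the sequence ofFIG.4: the NFC IC34answers the first polling command with its IDm, goes silent after the polling disable command, and answers again after a reset.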
In step S55, the NFC reader/writer51of the smartphone12transmits a polling command. Processes in steps S77to S80in the antenna device11are similar to the processes in steps S14to S17ofFIG.2. That is, the polling command is transmitted from the extension antenna23-1to the IC card13, and a response from the IC card13is received by the extension antenna23-1. The response from the IC card13received by the extension antenna23-1is transmitted from the smartphone antenna22to the smartphone12. Note that, because the polling disable function is enabled, the NFC IC34does not respond even if the polling command is supplied. In step S56, the NFC reader/writer51of the smartphone12receives the response from the IC card13. After that, in a case where the smartphone12performs NFC communication with the IC card13held over the extension antenna23-1, the smartphone12transmits a command while specifying the IDm of the IC card13included in the response. As described above, the antenna device11can restrain operation of the NFC IC34so that the NFC IC34does not respond after a specific extension antenna23is enabled under the control of the smartphone12.

Method Using Switch Connected to NFC IC

FIG.5illustrates another configuration example of the antenna device11. InFIG.5, the same components as those of the antenna device11inFIG.1are denoted by the same reference signs. Redundant description will be omitted as appropriate. The same applies toFIG.9described later. The configuration of the control circuit21inFIG.5is different from the configuration described with reference toFIG.1in that a switch41is provided between the smartphone antenna22and the NFC IC34. The switch41is connected to the switch control circuit36. That is, the MCU35causes the switch control circuit36to control not only the switches37-1and37-2that are switches for the extension antennas, but also the switch41that is a switch for the NFC IC34.
The switch control circuit36controls on/off of the switch41as well as on/off of the switches37-1and37-2under the control of the MCU35. For example, states of the switches37and a state of the switch41are exclusively controlled. FIG.6illustrates a combination of connection states between the extension antennas23or the NFC IC34and the smartphone antenna22. As shown in the second line ofFIG.6, a default state is such that a connection between the smartphone antenna22and the NFC IC34is short-circuited and connections between the smartphone antenna22and the extension antennas23-1and23-2are open. Meanwhile, as shown in the third line ofFIG.6, in a case where the connection between the smartphone antenna22and the extension antenna23-1is short-circuited and the extension antenna23-1is enabled, the connection between the smartphone antenna22and the NFC IC34is open. Further, the connection between the smartphone antenna22and the extension antenna23-2is also open. Meanwhile, as shown in the fourth line ofFIG.6, in a case where the connection between the smartphone antenna22and the extension antenna23-2is short-circuited and the extension antenna23-2is enabled, the connection between the smartphone antenna22and the NFC IC34is open. Further, the connection between the smartphone antenna22and the extension antenna23-1is also open. That is, in a case where any one of the extension antennas23is enabled, the connection between the smartphone antenna22and the NFC IC34is open. In this case, the NFC reader/writer51of the smartphone12cannot perform NFC communication with the NFC IC34. Note that, in a case where output of radio waves by the NFC reader/writer51of the smartphone12is stopped, each connection state returns to the default state, and the smartphone antenna22and the NFC IC34are short-circuited. As described above, also by switching on/off the switch41that is a switch for the NFC IC34, the antenna device11can control operation of the NFC IC34so that the NFC IC34does not respond. 
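The exclusive connection states ofFIG.6can be expressed as a small helper function. This is a sketch of the table only; the function name and state labels are assumed for illustration.

```python
def connection_states(enabled_antenna=None, num_antennas=2):
    """Return FIG.6-style connection states. In the default state
    (enabled_antenna is None) the NFC IC is short-circuited to the
    smartphone antenna and all extension antennas are open; enabling any
    one extension antenna opens the NFC IC connection and every other
    extension antenna."""
    states = {"nfc_ic": "short" if enabled_antenna is None else "open"}
    for i in range(num_antennas):
        states["ext_%d" % (i + 1)] = "short" if i == enabled_antenna else "open"
    return states
```

Calling `connection_states()` reproduces the second line ofFIG.6(NFC IC short, both extension antennas open), and `connection_states(0)` reproduces the third line (extension antenna23-1short, everything else open).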
3. Reduction in Influence of Eddy Currents

Generally, in a case where an NFC device is placed near a metal plate such as a desk having a metal top plate as illustrated inFIG.7, communication performance is reduced because eddy currents are generated in the metal plate. In order to reduce an influence of generation of the eddy currents, it is possible to provide a magnetic sheet and a metal plate on a bottom surface side of the antenna device11. FIG.8illustrates an example of a layered structure of the antenna device11. As illustrated inFIG.8, the antenna device11has a layered structure including an antenna layer101, a magnetic layer102, and a metal layer103. The antenna layer101is a layer on which the smartphone antenna22and the extension antenna23are provided. A substrate on which the control circuit21and the like are arranged is also provided on the antenna layer101, for example. As illustrated inFIG.8, the antenna layer101is provided on a surface side of the antenna device11. The antenna layer101is made from a material such as rubber, urethane, or leather, except for each configuration described above such as the smartphone antenna22and the extension antennas23. The magnetic layer102is provided on a bottom surface side of the antenna layer101and is provided between the antenna layer101and the metal layer103. The magnetic layer102is made from a magnetic material such as ferrite. The metal layer103is provided on the bottom surface of the antenna device11. The metal layer103is made from a thin-film metal material such as aluminum foil. Because the magnetic layer102and the metal layer103are provided on the bottom surface side of the antenna layer101, it is possible to control a direction of a magnetic field emitted from the smartphone antenna22, the extension antennas23, or the like and reduce generation of eddy currents.
By adjusting RF performance in accordance with such a layered structure, the antenna device11experiences fewer changes in communication performance; that is, it can maintain stable communication performance. Note that, in a case where each layer is made from a flexible material, the user can roll and carry the antenna device11.

4. Modification Example

A light emitting diode (LED) may be provided in the antenna device11. FIG.9illustrates a configuration example of the antenna device11including the LED. The configuration of the control circuit21inFIG.9is different from the configuration described with reference toFIG.1in that an LED111is provided. The LED111is connected to the MCU35. The MCU35controls lighting, blinking, or extinguishing of the LED111in response to a command supplied from the NFC IC34. The LED111is a light emitting body that performs operations such as lighting, blinking, and extinguishing under the control of the MCU35. As described above, the antenna device11can emit light from the LED111or the like in response to a command output by the smartphone12, without receiving an external power supply. A plurality of LEDs may be provided in the antenna device11; for example, the same number of LEDs as the number of extension antennas23may be provided. It is possible to present enabling or disabling of the extension antennas23to the user by emitting light from the LEDs.

5. Application Example

Hereinafter, an application example of the antenna device11will be described. The antenna device11is applied to, for example, a playmat used as a field of a competitive card game. The game is played by arranging, on the playmat, IC cards13on which characters or the like are printed. FIG.10is a top view illustrating an example of external appearance of the antenna device11applied to the playmat. As illustrated inFIG.10, a section O is formed at substantially the center on a surface of the antenna device11, and sections A to D are formed at upper, lower, left, and right corners.
Each section is formed by, for example, printing a rectangular outline on the surface of the antenna device11. The section O is a section for placing the smartphone12. The control circuit21and the smartphone antenna22are arranged at positions on the back side of the section O. The sections A to D are sections for placing the IC cards13. Extension antennas23-1to23-4are arranged at positions on the back sides of the respective sections A to D. Here, the four extension antennas23are provided in the antenna device11. Enabling or disabling of the four extension antennas23is controlled by the control circuit21. FIG.11illustrates how to use the antenna device11. As illustrated inFIG.11, the smartphone12in which a card game application program has been installed is placed on the section O of the antenna device11. Further, four IC cards13-1to13-4are placed on the sections A to D, respectively. The cards are marked with a spade, a heart, a diamond, and a clover, respectively. Although the four IC cards13are used inFIG.11, an arbitrary number of IC cards13may be used in the card game. FIG.12illustrates an electrical configuration of the antenna device11. As illustrated inFIG.12, the control circuit21and the smartphone antenna22are formed on a substrate151. The substrate151is provided at a position corresponding to the section O. Members such as sheets on which the extension antennas23-1to23-4are arranged are provided at positions corresponding to the sections A to D. Here, a flow of the card game will be described with reference toFIGS.13and14. The card game is started after, for example, the user activates the card game application program. As illustrated in an upper part ofFIG.13, the user places the smartphone12in which the card game application program has been activated on the section O. A button TB including the word "Scan all" is displayed at the center of a display of the smartphone12.
Further, areas for displaying information of the IC cards13placed on the sections A to D are displayed at four corners of the display of the smartphone12. Then, as illustrated in a lower part ofFIG.13, the user places the IC cards13-1to13-4on the sections A to D, respectively. After placing the IC cards13-1to13-4on the sections A to D, the user touches the button TB as illustrated in an upper part ofFIG.14. InFIG.14, the button TB is shown colored to indicate that it has been touched. In a case where the button TB is touched, each device performs the following operation. First, the NFC reader/writer51of the smartphone12transmits a polling command. The polling command received by the smartphone antenna22is supplied to the NFC IC34. The NFC IC34supplies, to the smartphone antenna22, a response including the IDm that is the identification information of the NFC IC34. The response including the IDm of the NFC IC34is transmitted to the smartphone12. Upon receipt of the response from the NFC IC34, the NFC reader/writer51of the smartphone12selects one of the extension antennas23and transmits a write command for enabling the selected extension antenna23. For example, the extension antennas are enabled in order from the extension antenna23-1. The write command received by the smartphone antenna22is supplied to the MCU35. The MCU35causes the switch control circuit36to control on/off of the switches37in response to the write command. Therefore, a specific extension antenna23is enabled. The NFC reader/writer51of the smartphone12transmits a polling disable command while specifying the IDm of the NFC IC34. As described above, the NFC IC34stops responding to a polling command because the polling disable command has been transmitted. Then, the NFC reader/writer51of the smartphone12transmits a polling command. The polling command received by the smartphone antenna22is supplied to the enabled extension antenna23.
The enabled extension antenna23transmits the polling command to the IC card13placed on the extension antenna23and receives a response including the IDm that is the identification information of the IC card13. The response from the IC card13is transmitted to the smartphone12. Upon receipt of the response from the IC card13, the smartphone12detects that the IC card13is placed on the enabled extension antenna23and records the IDm thereof. Next, the smartphone12reselects the extension antenna23. That is, the smartphone12resets the NFC IC34, cancels polling disable, and then transmits a write command to enable another extension antenna23, and, after that, transmits a polling command. Upon receipt of a response to the polling command, the smartphone12records the IDm of the IC card13placed on the reselected and enabled extension antenna23. Reselection of the extension antenna23, transmission of a polling command, and recording of IDm are repeated. The smartphone12can sequentially perform NFC communication with the IC cards13-1to13-4placed on the extension antennas23-1to23-4(sections A to D) and acquire IDm of each IC card. After acquiring the IDm of the IC cards13-1to13-4, as illustrated in a lower part ofFIG.14, the smartphone12displays the detected information of the IC cards13on the display on the basis of the IDm. In the example in the lower part ofFIG.14, images showing the IC cards13-1to13-4are displayed in respective display areas displayed on the display. Note that not only the IDm of the IC cards13but also other information such as points used for the progress of the game is recorded on the IC chip of the IC card13. The smartphone12can read the points recorded on each of the IC cards13-1to13-4by using a read command. Further, the smartphone12can add or subtract the points recorded on each of the IC cards13-1to13-4by using a write command. 
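The "Scan all" sequence described above (poll the internal NFC IC for the mat ID, then per antenna: reset, enable via write command, disable polling, poll, and record the IDm) can be sketched as a loop. Both classes below are hypothetical simulations for illustration; the method names are not a real NFC reader/writer API.

```python
class SimulatedReader:
    """Minimal stand-in for the NFC reader/writer plus antenna device,
    used only to exercise the scan loop."""
    def __init__(self, cards):
        self.cards = cards          # extension-antenna index -> card IDm
        self.enabled = None
        self.ic_disabled = False

    def poll(self):
        # The internal NFC IC answers first unless polling is disabled.
        if not self.ic_disabled:
            return {"idm": "MAT-ID"}
        idm = self.cards.get(self.enabled)
        return {"idm": idm} if idm else None

    def reset_nfc_ic(self):
        self.ic_disabled = False    # reset cancels polling disable

    def write_enable_antenna(self, index):
        self.enabled = index        # write command enables one antenna

    def polling_disable(self, idm):
        self.ic_disabled = True     # silence the IC specified by its IDm

def scan_all(reader, num_antennas=4):
    """Sketch of the 'Scan all' sequence: returns a map of antenna
    index -> IDm for every section with a card on it."""
    mat_id = reader.poll()["idm"]          # response from the internal NFC IC
    cards = {}
    for i in range(num_antennas):
        reader.reset_nfc_ic()              # cancel polling disable
        reader.write_enable_antenna(i)     # enable extension antenna i only
        reader.polling_disable(mat_id)     # NFC IC stops answering polls
        response = reader.poll()           # now reaches the IC card, if any
        if response:
            cards[i] = response["idm"]
    return cards
```

Sections without a card simply produce no polling response, so only occupied sections appear in the result, which matches how the smartphone detects which sections A to D hold cards.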
The smartphone12can also read and display physical strength recorded on each of the IC cards13-1to13-4and can add or subtract an amount of money recorded on each of the IC cards13-1to13-4. The smartphone12can also identify the antenna device11as a playmat on the basis of the IDm that is the identification information of the NFC IC34. The IDm of the NFC IC34is used as a mat ID. Note that the playmat may include storage means such as a memory, and the mat ID may be stored therein. Total points that increase or decrease as the game progresses may be calculated by selecting a button displayed on the display of the smartphone12. FIG.15illustrates an example of calculating total points. As illustrated inFIG.15, in a case where the IC card13is placed on the section of the antenna device11and a button such as "+1", "+5", or "+10" displayed on the display of the smartphone12is selected, the points corresponding to the selected button are added to the points stored in the IC card13. Further, different points are added to points stored in the IC card13placed on another section of the antenna device11. In a case where a button including the word "Check" displayed on the display of the smartphone12is selected, total points are displayed on the display of the smartphone12.

6. Configuration of Smartphone

FIG.16is a block diagram illustrating a configuration example of the smartphone12. As illustrated inFIG.16, the smartphone12includes not only the NFC reader/writer51but also a control unit301, a communication unit302, a memory303, an operation unit304, a camera305, and a display306. The control unit301includes a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM) and the like. The control unit301controls an entire operation of the smartphone12by executing a predetermined program. The control unit301implements an application execution unit301A.
The application execution unit301A executes various programs such as a card game application program having a function of controlling the antenna device11. The communication unit302is a communication module for mobile communication such as Long Term Evolution (LTE). The communication unit302performs communication with an external device. The memory303includes a flash memory or the like. The memory303stores various kinds of information such as the IDm of the NFC IC34and the IDm of the IC card13transmitted from the antenna device11and a program to be executed by the control unit301. The operation unit304includes various buttons and a touchscreen provided on the display306. The operation unit304outputs a signal indicating content of a user operation to the control unit301. The camera305captures an image (moving image, still image) in response to a user operation. The display306includes an organic EL display, an LCD, or the like. Various screens such as a card game screen are displayed on the display306. 7. Others The smartphone12is used as a mobile terminal for externally controlling operation of the antenna device11, but various devices provided with an NFC reader/writer, such as a tablet terminal, wearable device, and PC, can be used as a mobile terminal. The series of processing described above can be executed by hardware or software. In a case where the series of processing is executed by software, a program constituting the software is installed in a computer incorporated in dedicated hardware, a general-purpose personal computer, or the like. The program to be installed is provided by being recorded on a removable medium including an optical disk (compact disc-read only memory (CD-ROM), digital versatile disc (DVD), or the like), a semiconductor memory, or the like. Further, the program may also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting. 
The program can be installed in a ROM or storage unit in advance. Note that the program executed by the computer may be a program in which the processing is performed in time series in the order described in the present specification, or may be a program in which the processing is performed in parallel or at a necessary timing such as when a call is made. The effects described in the present specification are merely illustrative and are not limited. Further, additional effects may be obtained. The embodiments of the present technology are not limited to the above embodiments and can be variously modified without departing from the gist of the present technology. Examples of Combination of Configurations The present technology can also have the following configurations. (1) An antenna device including:a mobile terminal antenna that receives a radio wave output by a reader/writer provided in a mobile terminal;a plurality of IC card antennas that communicates with an IC card including an IC chip for the IC card that reads/writes data in a contactless manner in response to a command transmitted by the reader/writer; anda control circuit that operates by using power generated when the mobile terminal antenna receives the radio wave output by the reader/writer and controls enabling or disabling of each of the IC card antennas in response to the command transmitted by the reader/writer. (2) The antenna device according to (1), in which:the control circuit includesa switch that is connected to each of the IC card antennas and switches enabling or disabling of the IC card antenna;a controller that controls the switch; andan internal IC chip that controls operation of the controller in response to the command and reads/writes data in a contactless manner in response to the command. 
(3) The antenna device according to (2), in whichin a case where the IC card antenna is enabled, the IC card antenna transmits the command supplied via the switch to the IC card and receives a response including identification information of the IC chip for the IC card transmitted from the IC card. (4) The antenna device according to (3), in whichthe mobile terminal antenna transmits the response received by the IC card antenna to the mobile terminal. (5) The antenna device according to (4), in which:the internal IC chip supplies another response including identification information of the internal IC chip to the mobile terminal antenna in response to the supply of the command; andthe mobile terminal antenna transmits the another response supplied from the internal IC chip to the mobile terminal. (6) The antenna device according to (5), in whichthe internal IC chip stops responding to the another response in response to supply of a disable command serving as the command. (7) The antenna device according to (5), in which:the controller further controls another switch provided between the mobile terminal antenna and the internal IC chip; andthe internal IC chip controls the operation of the controller to turn off the another switch in response to enabling of one of the IC card antennas. (8) The antenna device according to any one of (2) to (7), in whichthe controller controls light emission of a light emitting body. (9) The antenna device according to any one of (5) to (8), in whichthe identification information included in the another response is used in the mobile terminal as information for identifying the antenna device. 
(10) The antenna device according to any one of (1) to (9), in whichthe antenna device has a layered structure includingan antenna layer on which the IC card antennas and the mobile terminal antenna are provided,a magnetic layer made from a magnetic material and provided on a bottom surface side of the antenna layer, anda metal layer made from a metal material and provided on a bottom surface side of the magnetic layer. (11) The antenna device according to any one of (1) to (10), in which:a section for placing the mobile terminal is provided on a surface of the antenna device corresponding to a position of the mobile terminal antenna; anda section for placing the IC card is provided on the surface of the antenna device corresponding to a position of each of the plurality of IC card antennas. (12) A control method, in which:an antenna device includinga mobile terminal antenna that receives a radio wave output by a reader/writer provided in a mobile terminal,a plurality of IC card antennas that communicates with an IC card including an IC chip for the IC card that reads/writes data in a contactless manner in response to a command transmitted by the reader/writer, anda control circuit that operates by using power generated when the mobile terminal antenna receives the radio wave output by the reader/writercontrols enabling or disabling of each of the IC card antennas in response to the command transmitted by the reader/writer. 
(13) A program for causing a computer that controls a mobile terminal provided with a reader/writer to execute the processing oftransmitting, to an antenna device including a mobile terminal antenna that receives a radio wave output by the reader/writer, a plurality of IC card antennas that communicates with an IC card including an IC chip for the IC card that reads/writes data in a contactless manner in response to a command transmitted by the reader/writer, and a control circuit that operates by using power generated when the mobile terminal antenna receives the radio wave output by the reader/writer and controls enabling or disabling of each of the IC card antennas in response to the command transmitted by the reader/writer, the command for controlling enabling or disabling of the IC card antennas from the reader/writer. REFERENCE SIGNS LIST 11Antenna device12Smartphone13IC card21Control circuit22Smartphone antenna23-1,23-2Extension antenna31Rectifier circuit32Reset IC33Power supply IC34NFC IC35MCU36Switch control circuit37-1,37-2Switch41Switch51NFC reader/writer101Antenna layer102Magnetic layer103Metal layer111LED301Control unit301A Application execution unit | 36,994 |
11861440 | DETAILED DESCRIPTION It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated. The present solution may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the present solution is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present solution should be or are in any single embodiment of the present solution. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present solution. Thus, discussions of the features and advantages, and similar language, throughout the specification may, but do not necessarily, refer to the same embodiment. Furthermore, the described features, advantages and characteristics of the present solution may be combined in any suitable manner in one or more embodiments. 
One skilled in the relevant art will recognize, in light of the description herein, that the present solution can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the present solution. Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present solution. Thus, the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. As used in this document, the singular form “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. As used in this document, the term “comprising” means “including, but not limited to”. The terms “memory,” “memory device,” “data store,” “data storage facility” and the like each refer to a non-transitory device on which computer-readable data, programming instructions (e.g., instructions222ofFIG.2,322ofFIG.3and1120ofFIG.11) or both are stored. Except where specifically stated otherwise, the terms “memory,” “memory device,” “data store,” “data storage facility” and the like are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices. The present document concerns various solutions to address the drawbacks of conventional RFID tag solutions such as those disclosed in the background section of this document. 
One solution comprises a tag formed of a relatively thin, narrow, machine washable substrate on which electronic components are mounted or otherwise disposed. The substrate may also be lightweight and recyclable. The substrate can include, but is not limited to, a fabric, a plastic, and/or a paper. The substrate may comprise a polyester (e.g., PET) substrate, and/or be coated with a layer of a flexible fluid resistive material for protecting the same from damage due to fluid exposure. The flexible fluid resistive material can include, but is not limited to, a Thermoplastic Polyurethane (“TPU”) material and/or a PET material. The flexible fluid resistive material may be a colored TPU which matches the color of items to which the tags are to be coupled. The electronic components can include, but are not limited to, a communication enable device having at least one antenna (e.g., an RFID enabled device). The tag is designed to be relatively thin so that it is hard to feel when incorporated into an item, but thick enough to withstand a certain number (e.g., 2-5) of wash cycles. A plurality of tags may be fabricated using a single piece of narrow substrate (e.g., a ribbon). In this case, the electronic components may be coupled to the narrow substrate so as to be separated from each other with equal or unequal amounts of substrate. A coating may be applied to the narrow substrate with the electronic components coupled thereto. The narrow substrate may then be rolled or wound onto a reel. The reel is then inserted into a machine (e.g., a ribbon dispensing machine) for incorporating tags with item(s). The spacing between the electronic components is selected so that the machine is able to cut the narrow substrate while installing the tags in or incorporating the tags with items without any damage thereto. 
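The spacing constraint just described can be pictured with a small layout helper: if each set of electronic components occupies a known length of ribbon and the components are separated by a fixed gap, the machine can cut midway through each gap, safely clear of every component. The function and its units are illustrative assumptions, not a stated part of the production process.

```python
# Hypothetical layout helper: components of length `component_len` are laid
# along the ribbon separated by `spacing`; cutting halfway into each gap
# keeps the blade clear of the components. Units are arbitrary (e.g., mm).
def cut_positions(num_tags, component_len, spacing):
    cuts = []
    pos = 0.0
    for _ in range(num_tags):
        pos += component_len + spacing    # end of this tag's ribbon segment
        cuts.append(pos - spacing / 2.0)  # cut in the middle of the gap
    return cuts
```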
The thickness of the narrow substrate is selected so that the machine is able to hold the narrow substrate under tension on the reel while installing the tags in the items. In some scenarios, the machine installation process involves: turning the reel by an amount that allows a portion of the narrow substrate that includes an electronic component to be rolled onto an item; cutting the narrow substrate at an end thereof so that a tag is placed or otherwise disposed on the item; and using a conventional sewing machine to sew at least one end of the tag onto the item. Notably, the tag is unable to be felt when sewn to the item. Another solution comprises forming tag antennas by sewing metal thread(s) directly into an item at production time and/or by printing or disposing metal trace(s) directly on the garment at production time. The length(s) of the metal thread(s)/trace(s) are dynamically selected for optimizing tag performance in view of the item's dielectric and tuning properties. The item's dielectric and tuning properties include, but are not limited to, an impedance and/or capacitance. Next, the metal thread(s) or trace(s) is(are) sewn into, printed on, or disposed directly on the item. At least a communications enabled device is then attached to the item so as to form an electrical coupling or connection between the communication enabled device and the antenna(s). This technique for coupling a tag to an item provides a relatively inexpensive solution that is performed during the production of the item. Additionally, the metal thread(s) and/or trace(s) is(are) difficult to feel when incorporated into the item. In some scenarios, the communications enabled device is coated with a flexible fluid resistive material or other substance so that the same is machine washable and/or water resistant. 
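One hedged way to picture the dynamic length selection is the standard half-wave dipole rule: a higher effective permittivity of the item shortens the electrical wavelength, so the sewn thread is cut shorter. The formula below is textbook antenna practice offered purely as an illustration; the text does not specify the actual selection procedure.

```python
# Illustrative only: physical length (mm) of a half-wave dipole at a given
# frequency, de-rated by the item's effective relative permittivity. A
# larger permittivity calls for a shorter thread or trace.
def half_wave_length_mm(freq_hz, effective_permittivity):
    c = 299_792_458.0                               # speed of light, m/s
    wavelength_m = c / (freq_hz * effective_permittivity ** 0.5)
    return wavelength_m / 2.0 * 1000.0              # half wavelength, in mm
```

At 915 MHz in free space this gives roughly 164 mm, consistent with the 140-170 mm linear-antenna tag lengths mentioned later in this description.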
Additionally or alternatively, the ends of the metal thread(s) are coated with a substance selected to reduce or eliminate irritation caused by the metal thread(s) to an individual using the item. Notably, the present solution provides significantly thinner tags as compared to conventional solutions. Some conventional tags are formed on a flexible narrow substrate. The tags have a 0.005 inch thickness. The flexible narrow substrate is strong enough such that it cannot be torn by an individual, but can be cut using a razor or scissor. Accordingly, a plurality of tags are formed on a single piece of narrow substrate. The narrow substrate is cut to separate the tags from each other. The separated tag(s) is(are) then coupled to item(s). When cut, the tags fold up onto themselves, which is undesirable since the antenna lengths are shortened and tag performance suffers. Other conventional tags include an array of RFID tags glued to a PET roll. The PET roll is 0.002 inches thick. The RFID tag is about 0.008 inches thick, leading to a total tag thickness of 0.015 inches. This tag is too thick for garment applications since the tag causes discomfort and irritation to the wearer of the garment. The automated production assembly of the present solution allows for tags with significantly reduced dimensions. The present solution employs a substrate with a thickness between 0.0001 and 0.0005 inches. Although thin, this substrate maintains enough physical strength to handle the tension required to maintain the substrate on the roll. Tags on the order of 0.001 inches and smaller are placed on this substrate (which may have a width of 0.001 inches). The total thickness of the substrate/tag assembly is much smaller than that of the conventional solutions. The present solution provides a roll technology that addresses the drawbacks of the conventional tags which roll up onto themselves.
The tags of the present solution maintain their straightness or planar profiles so as to keep the antennas at the proper lengths. The tags of the present solution are so thin that they are not seen or felt when integrated into seams or other points in fabric items. The substrate of the present solution can include, but is not limited to, paper, PET, PVC, or polymer. Illustrative System Referring now toFIG.1, there is provided an illustration of an illustrative system100that is useful for understanding the present solution. The present solution is described herein in relation to a retail store environment. The present solution is not limited in this regard, and can be used in other environments. For example, the present solution can be used in distribution centers, factories and other commercial environments. Notably, the present solution can be employed in any environment in which items need to be located and/or tracked. The system100is generally configured to allow inventory counts of items located within a facility. As shown inFIG.1, system100comprises a Retail Store Facility (“RSF”)128in which display equipment1021, . . . ,102M(collectively referred to as “102”) is disposed. The display equipment is provided for displaying items1101-110N(collectively referred to as “110”),1161-116X(collectively referred to as “116”) to customers of the retail store. The display equipment can include, but is not limited to, shelves, article display cabinets, promotional displays, fixtures and/or equipment securing areas of the RSF128. The RSF can also include emergency equipment (not shown), checkout counters, an EAS system (not shown), an RFID system, and/or an RFID/EAS system. Emergency equipment, checkout counters, video cameras, people counters, EAS systems, RFID systems, and/or RFID/EAS systems are well known in the art, and therefore will not be described herein.
At least one tag reader120is provided to assist in counting the items1101-110N,1161-116Xlocated within the RSF128. The tag reader120comprises an RFID reader configured to read RFID tags. RFID readers are well known in the art. Any known or to be known RFID reader can be used herein without limitation. An illustrative tag reader will be discussed below in relation toFIG.3. Tags1121-112N(collectively referred to as “112”),1181-118X(collectively referred to as “118”) are respectively attached or coupled to the items1101-110N,1161-116X. The tags are described herein as comprising single-technology tags that are only RFID enabled. The present solution is not limited in this regard. The tags can alternatively or additionally comprise dual-technology tags that have both EAS and RFID capabilities. Notably, the tag reader120is strategically placed at a known location within the RSF128. By correlating the tag reader's tag reads and the tag reader's known location within the RSF128, it is possible to determine the location of items1101, . . . ,110N,1161, . . . ,116Xwithin the RSF128. The tag reader's known coverage area also facilitates item location determinations. Accordingly, tag read information and tag reader location information is stored in a data store126. This information can be stored in the data store126using a server124. Servers are well known in the art, and therefore will not be described herein. Referring now toFIG.2, there is an illustration of an illustrative architecture for a tag200. Tags112,118ofFIG.1are the same as or similar to tag200. As such, the discussion of tag200is sufficient for understanding the tags112,118ofFIG.1. The tag200can include more or less components than that shown inFIG.2. However, the components shown are sufficient to disclose an illustrative embodiment implementing the present solution. Some or all of the components of the tag200can be implemented in hardware, software and/or a combination of hardware and software. 
The hardware includes, but is not limited to, one or more electronic circuits. The electronic circuit(s) may comprise passive components (e.g., capacitors and resistors) and active components (e.g., processors) arranged and/or programmed to implement the methods disclosed herein. The hardware architecture ofFIG.2represents a representative tag200configured to facilitate inventory management. In this regard, the tag200is configured for allowing data to be exchanged with an external device (e.g., tag reader120ofFIG.1and/or server124ofFIG.1) via wireless communication technology. The wireless communication technology can include, but is not limited to, a Radio Frequency Identification (“RFID”) technology, a Near Field Communication (“NFC”) technology, and/or a Short Range Communication (“SRC”) technology. For example, one or more of the following wireless communication technologies is(are) employed: Radio Frequency (“RF”) communication technology; Bluetooth technology; WiFi technology; and/or beacon technology. Each of the listed wireless communication technologies is well known in the art, and therefore will not be described in detail herein. Any known or to be known wireless communication technology or other wireless communication technology can be used herein without limitation. The components204,244shown inFIG.2may be collectively referred to herein as electronic components250. The components206-212shown inFIG.2may be collectively referred to herein as a communication enabled device204, and include a memory208and a clock/timer212. Memory208may be a volatile memory and/or a non-volatile memory. For example, the memory208can include, but is not limited to, Random Access Memory (“RAM”), Dynamic RAM (“DRAM”), Static RAM (“SRAM”), Read Only Memory (“ROM”) and flash memory. The memory208may also comprise unsecure memory and/or secure memory.
As shown inFIG.2, the communication enabled device204is electrically coupled or connected to one or more antenna(s)214for allowing data to be exchanged with the external device via a wireless communication technology (e.g., an RFID technology, an NFC technology and/or a SRC technology). The antenna(s)214is(are) configured to receive signals from the external device and/or transmit signals generated by the communication enabled device204. The antenna(s)214can comprise a near-field or far-field antenna. The antenna(s) include, but are not limited to, a chip antenna or a loop antenna. The communication enabled device204also comprises a communication device (e.g., a transceiver or transmitter)206. Communication devices (e.g., transceivers or transmitters) are well known in the art, and therefore will not be described herein. However, it should be understood that the communication device206generates and transmits signals (e.g., RF carrier signals) to external devices, as well as receives signals (e.g., RF signals) transmitted from external devices. In this way, the communication enabled device204facilitates the registration, identification, location and/or tracking of an item (e.g., item110or112ofFIG.1) to which the tag200is coupled. Item level information226and a unique identifier (“ID”)224for the tag200can be stored in memory208of the communication enabled device204and/or communicated to other external devices (e.g., tag reader120ofFIG.1and/or server124ofFIG.1) via communication device (e.g., transceiver)206. For example, the communication enabled device204can communicate information specifying a timestamp, a unique identifier for an item, item description, item price, a currency symbol, size information, sale information, and/or location information to an external device. The external device (e.g., server) can then store the information in a database (e.g., database126ofFIG.1) and/or use the information for various purposes. 
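The kind of item-level record the tag might communicate can be sketched as a plain dictionary. The field names mirror the examples listed above (timestamp, item identifier, description, price, currency symbol, size, sale and location information), but the structure and the helper itself are assumptions for illustration only.

```python
# Hypothetical shape of the information a tag reports to a reader or server;
# the keys follow the examples in the text, the data structure is assumed.
def make_tag_report(tag_uid, **item_level_info):
    allowed = {"timestamp", "item_id", "description", "price",
               "currency_symbol", "size", "sale_info", "location"}
    unknown = set(item_level_info) - allowed
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    return {"tag_uid": tag_uid, **item_level_info}
```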
The communication enabled device204also comprises a controller210(e.g., a CPU). The controller210can execute instructions222implementing methods for facilitating inventory counts and management. In this regard, the controller210includes a processor (or logic circuitry that responds to instructions) and the memory208includes a computer-readable storage medium on which is stored one or more sets of instructions222(e.g., software code) configured to implement one or more of the methodologies, procedures, or functions described herein. The instructions222can also reside, completely or at least partially, within the controller210during execution thereof by the tag200. The memory208and the controller210also can constitute machine-readable media. The term “machine-readable media”, as used here, refers to a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions222. The term “machine-readable media”, as used here, also refers to any medium that is capable of storing, encoding or carrying a set of instructions222for execution by the tag200and that cause the tag200to perform any one or more of the methodologies of the present disclosure. The clock/timer212is configured to determine a date, a time, and/or an expiration of a pre-defined period of time. Technique for determining these listed items are well known in the art, and therefore will not be described herein. Any known or to be known technique for determining these listed items can be used herein without limitation. The tag200also comprises an optional location module230. The location module230is generally configured to determine the geographic location of the tag at any given time. For example, in some scenarios, the location module230employs Global Positioning System (“GPS”) technology and/or Internet based local time acquisition technology. The present solution is not limited to the particulars of this example. 
Any known or to be known technique for determining a geographic location can be used herein without limitation including relative positioning within a facility or structure. The tag200can also include an optional EAS component244. EAS components244are well known in the art, and therefore will not be described herein. Any known or to be known EAS component can be used herein without limitation. As shown inFIG.2, the tag200may also comprise a power source236and/or optional energy harvesting circuit232. The power source236can include, but is not limited to, a rechargeable battery and/or a capacitor. The energy harvesting circuit232is configured to harvest energy from one or more sources (e.g., heat, vibration, magnetic field, and/or RF energy) and to generate a relatively low amount of output power from the harvested energy. By employing multiple sources for harvesting, the device can continue to charge despite the depletion of a source of energy. Energy harvesting circuits are well known in the art, and therefore will not be described herein. Any known or to be known energy harvesting circuit can be used herein without limitation. The present solution is not limited to that shown inFIG.2. The tag200can have any architecture provided that it can perform the functions and operations described herein. For example, all of the components shown inFIG.2can comprise a single device (e.g., an Integrated Circuit (“IC”)). Referring now toFIG.3, there is provided a detailed block diagram of an illustrative architecture for a tag reader300. Tag reader120ofFIG.1is the same as or similar to tag reader300. As such, the discussion of tag reader300is sufficient for understanding tag reader120. Tag reader300may include more or less components than that shown inFIG.3. However, the components shown are sufficient to disclose an illustrative embodiment implementing the present solution.
Some or all of the components of the tag reader300can be implemented in hardware, software and/or a combination of hardware and software. The hardware includes, but is not limited to, one or more electronic circuits. The electronic circuit may comprise passive components (e.g., capacitors and resistors) and active components (e.g., processors) arranged and/or programmed to implement the methods disclosed herein. The hardware architecture ofFIG.3represents an illustration of a representative tag reader300configured to facilitate inventory counts and management within an RSF (e.g., RSF128ofFIG.1). In this regard, the tag reader300comprises an RF enabled device350for allowing data to be exchanged with an external device (e.g., tags112,118ofFIG.1) via RF technology. The components304-316shown inFIG.3may be collectively referred to herein as the RF enabled device350, and may include a power source312(e.g., a battery) or be connected to an external power source (e.g., an AC mains). The RF enabled device350comprises an antenna302for allowing data to be exchanged with the external device via RF technology (e.g., RFID technology or other RF based technology). The external device may comprise tags112,118ofFIG.1. In this case, the antenna302is configured to transmit RF carrier signals (e.g., interrogation signals) to the listed external devices, and/or transmit data response signals (e.g., authentication reply signals) generated by the RF enabled device350. In this regard, the RF enabled device350comprises an RF transceiver308. RF transceivers are well known in the art, and therefore will not be described herein. However, it should be understood that the RF transceiver308receives RF signals including information from the transmitting device, and forwards the same to a logic controller310for extracting the information therefrom. The extracted information can be used to determine the presence, location and/or type of movement of a tag within a facility (e.g., RSF128ofFIG.1).
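The location inference just described, in which a tag read is combined with the reading device's known fixed position, can be sketched minimally. The tuple and dictionary shapes below are assumptions; only the correlation idea comes from the text.

```python
# Sketch of locating items from reads: each read pairs a tag with the reader
# that saw it, and the reader's known position within the facility is taken
# as the item's estimated location. Data shapes are illustrative.
def locate_tags(tag_reads, reader_locations):
    locations = {}
    for tag_id, reader_id in tag_reads:
        locations[tag_id] = reader_locations[reader_id]
    return locations
```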
Accordingly, the logic controller310can store the extracted information in memory304, and execute algorithms using the extracted information. For example, the logic controller310can correlate tag reads with beacon reads to determine the location of the tags within the facility. Other operations performed by the logic controller310will be apparent from the following discussion. Notably, memory304may be a volatile memory and/or a non-volatile memory. For example, the memory304can include, but is not limited to, a RAM, a DRAM, an SRAM, a ROM, and a flash memory. The memory304may also comprise unsecure memory and/or secure memory. The phrase “unsecure memory”, as used herein, refers to memory configured to store data in a plain text form. The phrase “secure memory”, as used herein, refers to memory configured to store data in an encrypted form and/or memory having or being disposed in a secure or tamper-proof enclosure. Instructions322are stored in memory for execution by the RF enabled device350and that cause the RF enabled device350to perform any one or more of the methodologies of the present disclosure. The instructions322are generally operative to facilitate determinations as to whether or not tags are present within a facility, where the tags are located within a facility, and/or which tags are in motion at any given time. Other functions of the RF enabled device350will become apparent as the discussion progresses. Illustrative Tag Architectures Referring now toFIG.4, there is provided an illustration of an illustrative architecture for a tag400. Tag400may be the same as or similar to tag1121, . . . ,112N,1181, . . . ,118XofFIG.1or tag200ofFIG.2. As such, the discussion provided above in relation to tags112,118,200is sufficient for understanding the operations of tag400. Notably, the tag400is designed to be relatively thin so that it is hard to feel when incorporated into an item (e.g., item1101, . . . ,110N,1161, . . . 
, or116XofFIG.1), but thick enough to withstand a certain number (e.g., 2-5) of wash cycles. The item can include, but is not limited to, a cloth item, a paper item, and/or a plastic item. As shown inFIG.4A, tag400comprises a substrate402on which electronic components404are mounted, attached or disposed. The electronic components404can be the same as or similar to electronic components250ofFIG.2. Accordingly, the electronic components404can include antenna(s), a communication enabled device, and/or an EAS component. The substrate402is a relatively thin, narrow, lightweight, recyclable and/or machine washable substrate. The substrate402can include, but is not limited to, a fabric, a plastic, and/or a paper. The substrate402may comprise a polyester (e.g., PET) substrate. A thickness408of the substrate402is selected so that the substrate402has a physical strength that allows a machine to maintain tension on the same while incorporating or installing the tag on the item, and so that a metalized layer thereon creates antenna(s) for the tag. For example, thickness408can have a value between 0.0001 inches and 0.0025 inches. A width of the substrate402can be between 0.001 inches and 0.002 inches, which is small enough so that the tag is not felt by humans when incorporated into an item. The present solution is not limited to the particulars of this example. In some scenarios, the substrate402and electronic components404are coated with a layer of a flexible fluid resistive material406for protecting the same from damage due to fluid exposure. The fluid resistive material406can include, but is not limited to, a TPU material and/or a PET material. The fluid resistive material406may be colored to match the color of the item (e.g., item1101, . . . ,110N,1161, . . . , or116XofFIG.1) to which the tag400is to be coupled. As shown inFIG.4B, the tag400has tolerance removal areas410,414. Each tolerance removal area410,414comprises an end portion of the substrate402.
These end portions of the substrate402facilitate the cutting and coupling of the tag400to the item (e.g., via stitching) without interference with and/or causing damage to the antenna(s). In some scenarios, additional substrate is provided on the elongate sides of the tag, as shown by arrows500,502ofFIG.5. In some scenarios, the antenna(s) of the electronic components404are formed as conductive trace(s) via ink printing and/or deposition (e.g., sputter deposition). Ink printing and deposition processes are well known in the art, and therefore will not be described herein. The antenna(s) can be linear, serpentine or otherwise meandering. In some scenarios, a length420of the tag400can be in the range of 140-170 mm when the antenna(s) is(are) linear or comprise straight line(s). In contrast, length420can be in the range of 60-150 mm when the antenna(s) is(are) serpentine or otherwise meandering. The antenna(s) should be as thin as possible provided that the tag400has enough physical strength to withstand a given pulling force and/or a given number of wash cycles. The antenna(s) may be designed so that the tag's operating frequency is in a range of 840-960 MHz (inclusive of 840 and 960), a range of 860-940 MHz (inclusive of 860 and 940), a range of 865-868 MHz (inclusive of 865 and 868), or a range of 902-928 MHz (inclusive of 902 and 928). The antenna(s) may additionally or alternatively comprise tuning area(s)412,416. Each tuning area412,416comprises a portion of an antenna that can be modified for selectively and/or dynamically tuning an operating frequency of the tag (e.g., at the time of the tag's installation on the item in view of the item's dielectric and tuning properties). The tuning area can be modified by decreasing a thickness of the conductive material in that area. A laser, razor or other device can be used to precisely decrease the conductive material's thickness in the tuning area.
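The relationship between antenna length and operating frequency underlying the dimensions above can be illustrated with the textbook half-wave approximation. This is a first-order sketch, not the patent's design rule, and the effective permittivity parameter is an assumption introduced here.

```python
C = 299_792_458.0  # speed of light, m/s

def halfwave_length_mm(freq_hz, eps_eff=1.0):
    """First-order half-wave antenna length for a target operating
    frequency. Mounting on an item with effective permittivity
    eps_eff > 1 shortens the required length by sqrt(eps_eff), which
    is why per-item tuning of the antenna is useful."""
    return C / (2.0 * freq_hz * eps_eff ** 0.5) * 1000.0
```

At 915 MHz (mid-band of 902-928 MHz) this gives roughly 164 mm in free space, consistent with the 140-170 mm range quoted for linear antennas; a higher-permittivity item shortens the resonant length and shifts the operating frequency, motivating the tuning areas.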
This tuning technique may not be needed if all items have similar dielectric properties. However, the items may be of the same type, but of different sizes. In this case, the tuning technique provides a way to optimize each stock-keeping unit in advance for the item on which the tag is to be installed. The method to tune each antenna at installation time may be used if the volume is not high enough to produce separate stock-keeping units for each production run. In other scenarios, the antenna(s) are formed by coupling physical wire(s) to the substrate402. Each wire may have a diameter between 0.1 mm and 1 mm, and a length between 100 mm and 160 mm. The thickness and/or length of the wire(s) can be decreased at installation time to facilitate the dynamic tuning of the tag's operating frequency in view of the item's dielectric and tuning properties. Referring now toFIG.6, there is provided an illustration of an elongate narrow substrate600having a plurality of tags4001,4002, . . . ,400Ncoupled thereto. The elongate narrow substrate can include, but is not limited to, ribbon. Each tag4001,4002, . . . ,400Nis the same as or similar to tag400ofFIG.4. Thus, the discussion of tag400is sufficient for understanding tags4001,4002, . . . ,400N. The tags4001,4002, . . . ,400Nare arranged on the substrate600so as to have equal spacing602between adjacent ones thereof. The adjacent tags are spaced apart from each other so that a portion of the substrate6002,6003,6004resides therebetween, respectively. The first tag4001is also spaced from an end604of the substrate600by an amount defined by substrate portion6001. Similarly, the last tag400Nis spaced from an end606of the substrate600by an amount defined by substrate portion600N+1. The substrate portions6001, . . .600N+1may constitute tolerance removal areas of tags (e.g., tolerance removal areas410,414ofFIG.4B) as shown inFIG.7, or alternatively may be provided in addition to the tag tolerance removal areas.
As shown inFIG.7, each tag comprises two antennas700and a communication enabled device702. Each antenna700has a tuning area704or706. The antennas are the same as or similar to antenna(s)214ofFIG.2. The tuning areas704,706are the same as or similar to tuning areas412,416ofFIG.4. Each communication enabled device702is the same as or similar to communication enabled device204ofFIG.2. Thus, the discussions provided above in relation to204,214,412,416are sufficient for understanding components700-706ofFIG.7. The present solution is not limited to the particulars of the architecture shown inFIGS.6-7. In other scenarios, the tags are unequally spaced apart as shown inFIG.8. Referring now toFIG.9, there is provided an illustration showing a reel900onto which the substrate600is rolled. The reel900may be used to incorporate tags with items (e.g., during a relatively high volume manufacturing process). For example, during an item manufacturing process, the reel900is turned so that a tag is rolled onto an item. The substrate600is then cut within the tag's tolerance removal area so that the tag remains on the item for attachment thereto. This process is repeated for each item that is to have a tag incorporated therein. An illustration of an illustrative system1000for integrating or incorporating tags into or with items is provided inFIG.10. As shown inFIG.10, system1000comprises a dispensing machine1004, a conveyer belt1010, a tag reader1018, a computing device1020, a data store1022, and a laser1026. The tag reader1018can be the same as or similar to tag reader300ofFIG.3. The dispensing machine1004is configured to receive the reel900and/or a spool1050, and rotate the reel/spool in two opposing directions. The rotation is achieved using gear(s)1006and motor(s)1008. The spool1050can include, but is not limited to, a spool of metal thread. Metal thread is well known in the art, and therefore will not be described herein. 
As noted above, an elongate narrow substrate600is wound on the reel900. The elongate narrow substrate comprises a plurality of tags4001, . . . ,400Ncoupled thereto. The elongate narrow substrate with the plurality of tags may be coated using a flexible fluid resistive material (e.g., flexible resistive material406ofFIG.4). The flexible fluid resistive material can have a color that matches a color of the item(s). Each of the tags comprises at least one antenna700formed of a trace or wire disposed on the elongate narrow substrate, and a communication enabled device702coupled to the elongate narrow substrate so as to have an electrical coupling or connection with the at least one antenna. During a manufacturing process, a conveyer belt1010or an individual1014moves an item1012into proximity of the dispensing machine1004. The computing device1020then controls the dispensing machine1004to turn the reel900by an amount that allows a portion of the ribbon600to be paid out. This portion of the ribbon600includes a tag comprising a communications enabled device and antenna(s). The laser1026may then be controlled by the computing device1020to tune the antenna(s) of the tag (e.g., by removing ends of antenna wires and/or by decreasing the trace thickness in tuning areas of the antenna(s)). The tuning is performed for optimizing tag performance in view of the item's dielectric and tuning properties. The item's dielectric and tuning properties can be obtained using a Look Up Table (“LUT”)1024and/or determined using sensor data generated by sensors1016. Other devices can be used to tune the tag. Such other devices include, but are not limited to, a razor and/or a sewing machine. The ribbon600is then cut by the cutting mechanism1030of the dispensing machine1004so that the paid out portion of the ribbon is placed on or otherwise disposed on the item. The cutting mechanism1030can include, but is not limited to, a razor and/or scissors. 
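The LUT-or-sensor lookup just described, by which the computing device1020obtains the item's dielectric and tuning properties, might look like the following sketch. The table contents, keys, and function names are hypothetical.

```python
# Hypothetical per-SKU lookup of dielectric/tuning properties, with a
# sensor-based fallback when the SKU is not in the table.
ITEM_LUT = {
    "shirt-S": {"eps_eff": 1.8},
    "shirt-L": {"eps_eff": 2.1},
}

def tuning_properties(sku, sensor_eps=None):
    """Prefer the Look Up Table; fall back to a live capacitive
    measurement when the item is unknown."""
    if sku in ITEM_LUT:
        return ITEM_LUT[sku]
    if sensor_eps is not None:
        return {"eps_eff": sensor_eps}
    raise LookupError(f"no tuning data for {sku!r}")
```

The returned properties would then drive the laser (or razor, or sewing machine) tuning step before the ribbon is cut.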
Razors and scissors are well known in the art, and therefore will not be described herein. The portion of the ribbon is then coupled to the item so that the tag is incorporated with or in the item. For example, a nozzle1028dispenses an adhesive on the item1012and/or portion of ribbon, a heating element (not shown) applies heat to the portion of ribbon and/or item1012, a sewing machine1032stitches at least part of the portion of ribbon to the item1012, a pushing device1034pushes at least part of the portion of ribbon into the item1012, and/or the sewing machine1032encloses the portion of ribbon within a cavity formed between the item1012and a layer of cloth (not shown). The layer of cloth may have a metal thread (not shown) for tuning an operating frequency of the tag disposed on the portion of ribbon. Nozzles, heating elements, sewing machines, pushing devices, and metal threads are well known in the art, and therefore will not be described herein. The present solution is not limited to the particulars of this example. In some scenarios, the portion of the elongate narrow substrate can be painted by a painting device1034using paint with a color that matches a color of the item1012. The paint can be applied prior to or subsequent to the cutting of ribbon600. At this time, proper operation of the tag may then optionally be validated. The validation can be achieved using the tag reader1018. If the tag is operating properly, then other manufacturing operations are performed. In contrast, if the tag is not operating properly, then the tag is removed from the item, and a new tag is coupled to the item. In some scenarios, system1000is additionally or alternatively configured to incorporate tags into items using a metal thread of spool1050to form the tag antenna(s). 
For example, the computing device1020performs operations to: determine the dielectric and tuning properties of the item using the LUT1024or sensor data generated by sensor(s)1016; and/or dynamically determine a length of each metal thread that is to be incorporated into the item1012to optimize tag performance in view of dielectric and tuning properties of the item1012. The cutting mechanism1030creates at least one metal thread having the length that was dynamically determined. One or both ends of the metal thread may be coated with a substance selected to reduce or eliminate irritation caused by the metal thread to an individual using the item1012. The sewing machine1032then sews the metal thread into the item1012being produced to form at least one antenna (e.g., antenna(s)214ofFIG.2) for the tag (e.g., tag1121, . . .112N,1181, . . .118X,200ofFIG.2). The nozzle1028may then attach at least a communications enabled device (e.g., communications enabled device204ofFIG.2) to the item1012so as to form an electrical coupling or connection between the communications enabled device and the at least one antenna. The item1012may have at least one alignment marking that can be used in the attaching to guide proper placement of the at least one communication enabled device on the item1012. The alignment markings can include, but are not limited to, shape(s) or line(s) printed on the item (e.g., in a color different than the item's color), created by stitching (e.g., using thread in a color different than the item's color), and/or formed using die(s) (e.g., a die with a color different than the item's color). The communications enabled device may be encased with a flexible fluid resistive material, and/or attached to a piece of substrate prior to being attached to the item1012. At this point in the process, the tag reader1018may validate that the tag is operating properly. 
The communications enabled device may be replaced with another communications enabled device when a validation is not made that the first tag is operating properly. Additionally or alternatively, the metal thread is replaced with another metal thread when a validation is not made that the first tag is operating properly. In those or other scenarios, system1000is additionally or alternatively configured to incorporate tags into items using conductive trace(s) to form the tag antenna(s). For example, the computing device1020performs operations to: determine the dielectric and tuning properties of the item using the LUT1024or sensor data generated by sensor(s)1016; and/or dynamically determine a length of each conductive trace to be formed directly on the item1012to optimize tag performance in view of dielectric and tuning properties of the item1012. Each conductive trace is disposed on the item being produced to form at least one antenna for a tag. The conductive traces can be printed on the item via a printer1038or deposited on the item by the nozzle1028. Printers and nozzles are well known in the art, and therefore will not be described here. The nozzle1028may then attach at least a communications enabled device (e.g., communications enabled device204ofFIG.2) to the item1012so as to form an electrical coupling or connection between the communications enabled device and the at least one antenna. The item1012may have at least one alignment marking that can be used in the attaching to guide proper placement of the at least one communication enabled device on the item1012. The alignment markings can include, but are not limited to, shape(s) or line(s) printed on the item (e.g., in a color different than the item's color), shape(s) or line(s) created by stitching (e.g., using thread in a color different than the item's color), and/or shape(s) or line(s) formed using die(s) (e.g., a die with a color different than the item's color). 
The communications enabled device may be encased with a flexible fluid resistive material, and/or attached to a piece of substrate prior to being attached to the item1012. At this point in the process, the tag reader1018may validate that the tag is operating properly. The communications enabled device may be replaced with another communications enabled device when a validation is not made that the first tag is operating properly. Additionally or alternatively, the conductive trace(s) is(are) tuned when a validation is not made that the first tag is operating properly. Referring now toFIG.11, there is provided a detailed block diagram of an illustrative architecture for the computing device1020ofFIG.10. Computing device1020may include more or fewer components than those shown inFIG.11. However, the components shown are sufficient to disclose an illustrative embodiment implementing the present solution. The hardware architecture ofFIG.11represents one embodiment of a representative computing device configured to facilitate the incorporation of tags into and with items. As such, the computing device1020ofFIG.11implements at least a portion of a method for incorporating tags into or with items in accordance with the present solution. Some or all of the components of the computing device1020can be implemented as hardware, software and/or a combination of hardware and software. The hardware includes, but is not limited to, one or more electronic circuits. The electronic circuits can include, but are not limited to, passive components (e.g., resistors and capacitors) and/or active components (e.g., amplifiers and/or microprocessors). The passive and/or active components can be adapted to, arranged to and/or programmed to perform one or more of the methodologies, procedures, or functions described herein.
As shown inFIG.11, the computing device1020comprises a user interface1102, a Central Processing Unit (“CPU”)1106, a system bus1110, a memory1112connected to and accessible by other portions of computing device1020through system bus1110, and hardware entities1114connected to system bus1110. The user interface can include input devices (e.g., a keypad1150and/or a camera1158) and output devices (e.g., a speaker1152, a display1154, and/or Light Emitting Diodes (“LEDs”)1156), which facilitate user-software interactions for controlling operations of the computing device1020. At least some of the hardware entities1114perform actions involving access to and use of memory1112, which can be a RAM, a disk drive and/or a Compact Disc Read Only Memory (“CD-ROM”). Hardware entities1114can include a disk drive unit1116comprising a computer-readable storage medium1118on which is stored one or more sets of instructions1120(e.g., software code) configured to implement one or more of the methodologies, procedures, or functions described herein. The instructions1120can also reside, completely or at least partially, within the memory1112and/or within the CPU1106during execution thereof by the computing device1020. The memory1112and the CPU1106also can constitute machine-readable media. The term “machine-readable media”, as used here, refers to a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions1120. The term “machine-readable media”, as used here, also refers to any medium that is capable of storing, encoding or carrying a set of instructions1120for execution by the computing device1020and that cause the computing device1020to perform any one or more of the methodologies of the present disclosure. In some scenarios, the hardware entities1114include an electronic circuit (e.g., a processor) programmed for facilitating the incorporation of tags into items.
In this regard, it should be understood that the electronic circuit can access and run application(s)1124installed on the computing device1020that implement the present solution.

Illustrative Methods for Incorporating Tags into/with Items

Referring now toFIG.12, there is provided a flow diagram of an illustrative method1200for incorporation of tag(s) (e.g., tag(s)112,118ofFIG.1,200ofFIG.2,400ofFIG.4, and/or4001, . . . ,400NofFIG.6) into or with item(s) (e.g., item(s)110,116ofFIG.1). For example, a tag is incorporated into a seam, a hem or an overlapping fabric edge finish of a garment or hat. The present solution is not limited to the particulars of this example. Method1200begins with1202and continues with1204where traces are printed on or wires are coupled to an elongate narrow substrate (e.g., substrate402ofFIG.4or600ofFIGS.6-7) to form antennas (e.g., antenna(s)214ofFIG.2or700ofFIG.7) for the tags. At least one communications enabled device (e.g., communication enabled device204ofFIG.2or702ofFIG.7) is coupled to the narrow substrate in1206. This coupling can be achieved via an adhesive and/or the application of heat. Next in1208, the narrow substrate is rolled onto a reel (e.g., reel900ofFIG.9). The reel is inserted into a machine for use in incorporating tags into the item, as shown by1210. The machine can include, but is not limited to, a dispensing machine (e.g., ribbon dispensing machine1004ofFIG.10). Dispensing machines are well known in the art, and therefore will not be described herein. The reel may be rolled using gears (e.g., gear(s)1006ofFIG.10) and motors (e.g., motor(s)1008ofFIG.10). Gears and motors are well known in the art, and therefore will not be described herein. In1212, an item is placed in proximity to the machine. This can be achieved automatically by a conveyer belt (e.g., conveyer belt1010) or manually by an individual (e.g., individual1014ofFIG.10). The item can be in a partially or fully manufactured state at this point in the process.
The dielectric and tuning properties of the item are then determined in1214. This determination can be made by a computing device using an LUT (e.g., LUT1024ofFIG.10) and/or sensor data (e.g., capacitive measurements) generated by sensors (e.g., sensors1016ofFIG.10) configured to sense dielectric and tuning properties of items. Techniques for sensing the dielectric and tuning properties of items are well known in the art, and therefore will not be described herein. Any known or to be known technique for sensing the dielectric and tuning properties of items can be used herein. The reel is then turned in1216by an amount that allows a portion of the narrow substrate (e.g., portion6001and at least portion of6002ofFIGS.6-7) that includes a communications enabled device and the corresponding antenna(s) (e.g., tag4001ofFIGS.6-7) to be paid out. The tag is dynamically tuned in1218for optimizing tag performance in view of the item's dielectric and tuning properties determined in1214. The tuning can be achieved by: (1) decreasing a thickness of the antenna trace(s) disposed on the narrow substrate (e.g., using a laser or razor); (2) clipping one or more ends of the antenna wires coupled to the narrow substrate; and/or (3) sewing metal thread(s) into the item at location(s) where the tag(s) are to reside. The metal thread(s) create capacitance and inductance that tune the tag's operating frequency. Next in1220, the narrow substrate is cut (e.g., in portion6002ofFIGS.6-7) so as to cause the same to be placed on or otherwise disposed on the item. The cutting of the narrow substrate can be achieved via a cutting mechanism (e.g., cutting mechanism1030ofFIG.10) of the dispensing machine. The cutting mechanism can include, but is not limited to, a razor or scissors. The narrow substrate is then coupled to the item so as to incorporate the tag in or with the item, as shown by1222. This coupling can be achieved via an adhesive, an application of heat, and/or stitching.
Upon completing1222, operations are performed in1224to validate that the tag is operating properly. The validation can be achieved using a tag reader (e.g., tag reader1018ofFIG.10). Tag readers are well known in the art, and therefore will not be described herein. The tag reader can transmit interrogation signals to the tag, wait for a response signal from the tag, receive the response signal, and process the response signal. The proper operation of the tag may be validated when the response signal is received in a given amount of time after the interrogation signal transmission, and/or the response signal includes certain information (e.g., a tag identifier). If a validation is not made that the tag is operating properly [1226:NO], then method1200continues with1228where the tag is removed from the item and a new tag is coupled to the item. Once the new tag is coupled to the item, method1200returns to1224where operation of the new tag is tested during a validation process. In contrast, if a validation is made that the tag is operating properly [1226:YES], then1230is performed where method1200ends or other actions are taken (e.g., finish manufacturing/fabricating the item and/or return to1204to incorporate a tag in a next item). In some cases, it may be undesirable to leave the tag attached to the item when it leaves a facility (e.g., RSF128ofFIG.1). Accordingly, a tool (e.g., a heating element, stitching removal device, and/or a robot having an articulating arm with a grasper) may optionally be used to remove all or part of the tag from the item prior to when the item is removed from the facility. Referring now toFIG.13, there is provided a flow diagram of an illustrative method1300for incorporation of tag(s) (e.g., tag(s)112,118ofFIG.1,200ofFIG.2,400ofFIG.4, and/or4001, . . . ,400NofFIG.6) into or with item(s) (e.g., item(s)110,116ofFIG.1). For example, a tag is incorporated into a seam, a hem or an overlapping fabric edge finish of a garment or hat. 
The present solution is not limited to the particulars of this example. Method1300begins with1302and continues with1304where traces are printed on or wires are coupled to an elongate narrow substrate (e.g., substrate402ofFIG.4or600ofFIGS.6-7) to form antennas (e.g., antenna(s)214ofFIG.2or700ofFIG.7) for the tags. At least one communications enabled device (e.g., communication enabled device204ofFIG.2or702ofFIG.7) is coupled to the narrow substrate in1306. This coupling can be achieved via an adhesive and/or the application of heat. Next in1308, color is optionally added to a flexible fluid resistive material. The color may be selected so that the color of the flexible fluid resistive material matches the color of item(s) to which tag(s) is(are) to be coupled. The flexible fluid resistive material (colored or clear) may then optionally be used to coat the narrow substrate, antenna(s) and communication enabled device(s), as shown by1310. In1312, the narrow substrate is rolled onto a reel (e.g., reel900ofFIG.9). The reel is inserted into a machine for use in incorporating tags into the item, as shown by1314. The machine can include, but is not limited to, a dispensing machine (e.g., ribbon dispensing machine1004ofFIG.10). Dispensing machines are well known in the art, and therefore will not be described herein. The reel may be rolled using gears (e.g., gear(s)1006ofFIG.10) and motors (e.g., motor(s)1008ofFIG.10). Gears and motors are well known in the art, and therefore will not be described herein. In1316, metal thread(s) is(are) optionally sewn into the item at location(s) where the tag(s) is(are) to be incorporated. The metal thread(s) create capacitance and inductance for tuning the tag(s) so as to provide optimized tag performance in view of the item's dielectric and tuning properties (e.g., impedance). The dielectric and tuning properties of the item may be determined in1316.
This determination can be made by a computing device using an LUT (e.g., LUT1024ofFIG.10) and/or sensor data (e.g., capacitive measurements) generated by sensors (e.g., sensors1016ofFIG.10) configured to sense dielectric and tuning properties of items. Techniques for sensing the dielectric and tuning properties of items are well known in the art, and therefore will not be described herein. Any known or to be known technique for sensing the dielectric and tuning properties of items can be used herein. The metal threads allow for custom tuning of each item by having different sized metal threads sewn into the items. The metal threads also provide a way to increase the capacitance or inductance from a simple trace/wire antenna so that it has better impedance matching with the communications enabled device and better RF performance. In1318, an item is placed in proximity to the machine. This can be achieved automatically by a conveyer belt (e.g., conveyer belt1010) or manually by an individual (e.g., individual1014ofFIG.10). The item can be in a partially or fully manufactured state at this point in the process. The reel is then turned in1320by an amount that allows a portion of the narrow substrate (e.g., portion6001and at least portion of6002ofFIGS.6-7) that includes a communications enabled device and the corresponding antenna(s) (e.g., tag4001ofFIGS.6-7) to be paid out. The antenna(s) are optionally tuned in1322for optimizing tag performance in view of the item's dielectric and tuning properties. The tuning can be achieved by decreasing a thickness of the antenna trace(s) disposed on the narrow substrate (e.g., using a laser or razor), or clipping one or more ends of the antenna wires coupled to the narrow substrate. In1324, paint is optionally added to the paid out portion of the narrow substrate.1324can be performed as an alternative to1308where color is added to the flexible fluid resistive material.
The paint is selected so that the color of the painted tag matches the color of the item. In1326, the narrow substrate is cut (e.g., in portion6002ofFIGS.6-7) so as to cause the same to be placed on or otherwise disposed on the item. The cutting of the narrow substrate can be achieved via a cutting mechanism (e.g., cutting mechanism1030ofFIG.10) of the dispensing machine. The cutting mechanism can include, but is not limited to, a razor or scissors. The narrow substrate is then coupled to the item so as to incorporate the tag in or with the item, as shown by1328-1334. As shown by1328, at least one side of the narrow substrate is sewn or otherwise attached to the item (e.g., via an adhesive or an application of heat). Alternatively, the narrow substrate is pushed into the item. As shown by1330-1334, the narrow substrate may additionally or alternatively be enclosed within a cavity formed between the item and a layer of cloth. The layer of cloth can be coupled to the item via a sewing machine. In some scenarios, a metal thread is sewn into the layer of cloth for tuning the operating frequency of the tag. Upon coupling the tag to the item and/or validating the tag's performance,1336is performed where method1300ends or other actions are taken (e.g., finish manufacturing/fabricating the item and/or return to1304to incorporate a tag in a next item). In some cases, it may be undesirable to leave the tag attached to the item when it leaves a facility (e.g., RSF128ofFIG.1). Accordingly, a tool (e.g., a heating element, stitching removal device, and/or a robot having an articulating arm with a grasper) may optionally be used to remove all or part of the tag from the item prior to when the item is removed from the facility. Referring now toFIG.14, there is provided a flow diagram of an illustrative method1400for incorporation of tag(s) (e.g., tag(s)112,118ofFIG.1,200ofFIG.2,400ofFIG.4, and/or4001, . . .
,400NofFIG.6) into or with item(s) (e.g., item(s)110,116ofFIG.1and/or item1012ofFIG.10). For example, a tag is incorporated into a seam, a hem or an overlapping fabric edge finish of a garment or hat. The present solution is not limited to the particulars of this example. Method1400begins with1402and continues with1404where an item (e.g., item1012ofFIG.10) is fully or partially produced. Alignment marking(s) is(are) optionally added to the item in1406. The alignment markings can be used in a subsequent process to couple a tag (e.g., tag200ofFIG.2) to the item. In this regard, the alignment markings can clearly show where the tag is to be placed on the item, and help guide such placement. The alignment markings can include, but are not limited to, shape(s) or line(s) printed on the item (e.g., in a color different than the item's color), created by stitching (e.g., using thread in a color different than the item's color), and/or formed using die(s) (e.g., a die with a color different than the item's color). In1408, a length of each metal thread that is to be incorporated into the item to form a tag antenna (e.g., antenna214ofFIG.2) is dynamically determined. The length of each metal thread can be selected for optimizing tag performance based on the dielectric and tuning properties of the item (e.g., item1012ofFIG.10). The dielectric and tuning properties of the item may be determined by a computing device (e.g., computing device1020ofFIG.10) using an LUT (e.g., LUT1024ofFIG.10) and/or sensor data (e.g., capacitive measurements) generated by sensors (e.g., sensors1016ofFIG.10) configured to sense dielectric and tuning properties of items. Techniques for sensing the dielectric and tuning properties of items are well known in the art, and therefore will not be described herein. Any known or to be known technique for sensing the dielectric and tuning properties of items can be used herein. 
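One plausible way the length of each metal thread could be dynamically determined from the item's dielectric properties is sketched below, reusing the first-order half-wave relation and clamping to the 100-160 mm wire-length range mentioned earlier. The patent does not disclose an exact selection rule, so the formula and clamp are assumptions.

```python
C = 299_792_458.0  # speed of light, m/s

def thread_length_mm(freq_hz, eps_eff, lo=100.0, hi=160.0):
    """Pick a metal-thread length for the tag antenna: a half wavelength
    at the operating frequency, shortened by the item's effective
    permittivity, then clamped to the 100-160 mm wire range. A
    first-order sketch, not the patent's actual algorithm."""
    raw = C / (2.0 * freq_hz * eps_eff ** 0.5) * 1000.0
    return max(lo, min(hi, raw))
```

The cutting mechanism would then cut the thread to the returned length, with any residual mistuning corrected by trimming a free end during validation.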
In1410, metal thread(s) having the dynamically determined length(s) is(are) created. This can involve cutting piece(s) of metal thread from a spool of metal thread (e.g., spool1050ofFIG.10) using a cutting mechanism (e.g., cutting mechanism1030ofFIG.10) and/or tuning each piece of metal thread by cutting one or more ends thereof (e.g., using cutting mechanism1030ofFIG.10and/or a laser1026ofFIG.10). Ends of the metal thread(s) are optionally coated in1412with a substance selected to reduce or eliminate irritation caused by the metal thread(s) to an individual using the item. The metal thread(s) is(are) then sewn by a sewing machine (e.g., sewing machine1032ofFIG.10) into the item being produced for forming tag antenna(s) (e.g., antenna(s)214ofFIG.2), as shown by1414. Notably, the metal thread(s) is(are) very difficult to feel in the item. In1416, at least a communication enabled device (e.g., communication enabled device204ofFIG.2) is optionally encased with a flexible fluid resistive material (e.g., flexible fluid resistive material406ofFIG.4A). The flexible fluid resistive material may be clear or colored. The color of the flexible fluid resistive material may be selected so that it matches the color of the item to which the tag is being incorporated. The color may be added to the flexible fluid resistive material in1416. In1418, the communication enabled device is optionally attached to a piece of substrate (e.g., PET or Mylar) (e.g., substrate402ofFIG.4A). This attachment can be achieved via an adhesive, an application of heat, and/or stitching. The piece of substrate is provided to facilitate the attachment of the communication enabled device to the item. In1420, the communication enabled device is attached to the item so as to form an electrical coupling or connection between the communication enabled device and the metal thread antenna(s). This attachment can be achieved via an adhesive, an application of heat and/or stitching. 
The electrical coupling can include, but is not limited to, an inductive coupling. Upon completing 1420, operations are performed in 1422 to validate that the tag is operating properly. The validation can be achieved using a tag reader (e.g., tag reader 1018 of FIG. 10). Tag readers are well known in the art, and therefore will not be described herein. The tag reader can transmit interrogation signals to the tag, wait for a response signal from the tag, receive the response signal, and process the response signal. The proper operation of the tag may be validated when the response signal is received in a given amount of time after the interrogation signal transmission, and/or the response signal includes certain information (e.g., a tag identifier). If a validation is not made that the tag is operating properly [1424:NO], then method 1400 continues with 1426 where the metal thread(s) and/or communications enabled device is(are) removed from the item and a new one(s) thereof is(are) coupled to the item. Additionally or alternatively, the antenna(s) is(are) tuned by removing at least a portion of each metal thread (e.g., by removing a free end of each metal thread). Once these actions are taken, method 1400 returns to 1422 where operation of the tag is tested during a validation process. In contrast, if a validation is made that the tag is operating properly [1424:YES], then 1428 and/or 1430 is(are) performed. In some cases, it may be undesirable to leave the tag attached to the item when it leaves a facility (e.g., RSF 128 of FIG. 1). Accordingly, a tool (e.g., a heating element, stitching removal device, and/or a robot having an articulating arm with a grasper) may optionally be used in 1428 to remove the communication enabled device, device mounting assembly and/or metal thread(s) from the item prior to when the item is removed from the facility.
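The pass/fail decision in the validation step (a response received within a given time window that carries expected information, such as a tag identifier) reduces to a simple predicate. The reader interface below (`send_interrogation`, `await_response`) is hypothetical, standing in for whatever commercial tag reader API is used.

```python
# Hedged sketch of the validation logic described above. The reader object
# and its method names are invented for illustration; the specification
# does not prescribe a reader API.

def validate_tag(reader, expected_id, timeout_s=1.0):
    """Return True only if a response arrives before the timeout and
    carries the expected tag identifier."""
    reader.send_interrogation()
    response = reader.await_response(timeout_s)  # None on timeout
    return response is not None and response.get("tag_id") == expected_id
```

On a False result, the flow loops back through rework (replacing the device or trimming the antenna) and re-runs the same check, matching the [NO] branch of the flow diagram.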
Subsequently,1430is performed where method1400ends or other actions are performed (e.g., finish manufacturing/fabricating the item and/or return to1402to incorporate a tag in a next item). Referring now toFIG.15, there is provided a flow diagram of an illustrative method1500for incorporation of tag(s) (e.g., tag(s)112,118ofFIG.1,200ofFIG.2,400ofFIG.4, and/or4001, . . . ,400NofFIG.6) into or with item(s) (e.g., item(s)110,116ofFIG.1). For example, a tag is incorporated into a seam, a hem or an overlapping fabric edge finish of a garment or hat. The present solution is not limited to the particulars of this example. Method1500begins with1502and continues with1504where an item (e.g., item1012ofFIG.10) is fully or partially produced. Alignment marking(s) is(are) optionally added to the item in1505. The alignment markings can include, but are not limited to, shape(s) or line(s) printed on the item (e.g., in a color different than the item's color), created by stitching (e.g., using thread in a color different than the item's color), and/or formed using die(s) (e.g., a die with a color different than the item's color). The alignment markings can be used in a subsequent process to couple a tag to the item. In this regard, the alignment markings can clearly show where some or all components of the tag are to be placed on the item, and help guide such placement. In1506, a length of each metal trace that is to be disposed directly on the item to form a tag antenna (e.g., antenna214ofFIG.2) is dynamically determined. The length of each metal trace can be selected for optimizing tag performance based on the dielectric and tuning properties of the item. The dielectric and tuning properties of the item may be determined by a computing device (e.g., computing device1020ofFIG.10) using an LUT (e.g., LUT1024ofFIG.10) and/or sensor data (e.g., capacitive measurements) generated by sensors (e.g., sensors1016ofFIG.10) configured to sense dielectric and tuning properties of items. 
Techniques for sensing the dielectric and tuning properties of items are well known in the art, and therefore will not be described herein. Any known or to be known technique for sensing the dielectric and tuning properties of items can be used herein. In1508, metal trace(s) having the dynamically determined length(s) is(are) printed or otherwise disposed on the item so as to form the tag antenna(s). The metal trace(s) may optionally be tuned after being printed or otherwise disposed on the item. The tuning can be achieved by decreasing a thickness of a metal trace at one or more ends thereof (e.g., using a laser1026ofFIG.10). The metal traces can be formed of any suitable material, such as copper. The metal traces can be otherwise disposed on the item in accordance with any known or to be known deposition technique (e.g., sputtering). In1510, at least a communication enabled device (e.g., communication enabled device204ofFIG.2) is optionally encased with a flexible fluid resistive material (e.g., flexible fluid resistive material406ofFIG.4A). The flexible fluid resistive material may be clear or colored. The color of the flexible fluid resistive material may be selected so that it matches the color of the item to which the tag is being incorporated. The color may be added to the flexible fluid resistive material in1510. In1512, the communication enabled device is optionally attached to a piece of substrate (e.g., PET or Mylar) (e.g., substrate402ofFIG.4A). This attachment can be achieved via an adhesive, an application of heat, and/or stitching. The substrate can facilitate the attachment of the communication enabled device to the item. In1514, the communication enabled device is attached to the item so as to form an electrical coupling or connection between the communication enabled device and the metal trace antenna(s). This attachment can be achieved via an adhesive, an application of heat and/or stitching. 
The electrical coupling can include, but is not limited to, an inductive coupling. Upon completing1514, operations are performed in1516to validate that the tag is operating properly. The validation can be achieved using a tag reader (e.g., tag reader1018ofFIG.10) and/or computing device (e.g., computing device1020ofFIG.10). Tag readers are well known in the art, and therefore will not be described herein. The tag reader can transmit interrogation signals to the tag, wait for a response signal from the tag, receive the response signal, and process the response signal. An output of the tag reader may optionally be provided to the computing device for processing. The proper operation of the tag may be validated when the response signal is received in a given amount of time after the interrogation signal transmission, and/or the response signal includes certain information (e.g., a tag identifier). If a validation is not made that the tag is operating properly [1518:NO], then method1500continues with1520where the communications enabled device is removed from the item and a new communications enabled device is coupled to the item. The antenna(s) may also be tuned in1520by decreasing a thickness of each conductive trace of a given portion thereof (e.g., of a free end). Once the new tag is coupled to the item, method1500returns to1516where operation of the new tag is tested during a validation process. In contrast, if a validation is made that the tag is operating properly [1518:YES], then1522and/or1524is(are) performed. In some cases, it may be undesirable to leave the tag attached to the item when it leaves a facility (e.g., RSF128ofFIG.1). Accordingly, a tool (e.g., a heating element, stitching removal device, and/or a robot having an articulating arm with a grasper) may optionally be used in1522to remove the communication enabled device, device mounting assembly and/or metal thread(s) from the item prior to when the item is removed from the facility. 
Subsequently, 1524 is performed where method 1500 ends or other actions are performed (e.g., finish manufacturing/fabricating the item and/or return to 1502 to incorporate a tag in a next item). Although the present solution has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the present solution may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Thus, the breadth and scope of the present solution should not be limited by any of the above-described embodiments. Rather, the scope of the present solution should be defined in accordance with the following claims and their equivalents.
11861441

DETAILED DESCRIPTION A wireless tracking device is disclosed herein that is resilient to physical damage. The wireless tracking device is configured to maintain an electrical connection between two or more electronic components on a PCB when one or more traces of the PCB are damaged. In some embodiments, the wireless tracking device includes redundant parts that allow the wireless tracking device to sustain physical damage without losing functionality. In some embodiments, the wireless IoT device is an adhesive tape platform or a segment thereof. The adhesive tape platform includes wireless transducing components and circuitry that perform communication and/or sensing. The adhesive tape platform has a flexible adhesive tape form-factor that allows it to function as both an adhesive tape for adhering to and/or sealing objects and a wireless sensing device. In the following description, like reference numbers are used to identify like elements. Furthermore, the drawings are intended to illustrate major features of exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements and are not drawn to scale. As used herein, the term "or" refers to an inclusive "or" rather than an exclusive "or." In addition, the articles "a" and "an" as used in the specification and claims mean "one or more" unless specified otherwise or clear from the context to refer to the singular form. The term "tape node" refers to an adhesive tape platform or a segment thereof that is equipped with sensor, processor, memory, energy source/harvesting mechanism, and wireless communications functionality, where the adhesive tape platform (also referred to herein as an "adhesive product" or an "adhesive tape product") has a variety of different form factors, including a multilayer roll or a sheet that includes a plurality of divisible adhesive segments.
Once deployed, each tape node can function, for example, as an adhesive tape, label, sticker, decal, or the like, and as a wireless communications device. The terms “adhesive tape node,” “wireless node,” or “tape node” may be used interchangeably in certain contexts, and refer to an adhesive tape platform or a segment thereof that is equipped with sensor, processor, memory, energy source/harvesting mechanism, and wireless communications functionality, where the adhesive product has a variety of different form factors, including a multilayer roll or a sheet that includes a plurality of divisible adhesive segments. Once deployed, each tape node or wireless node can function, for example, as an adhesive tape, label, sticker, decal, or the like, and as a wireless communications device. A “peripheral” tape node or wireless node, also referred to as an outer node, leaf node, or terminal node, refers to a node that does not have any child nodes. In certain contexts, the terms “parcel,” “envelope,” “box,” “package,” “container,” “pallet,” “carton,” “wrapping,” and the like are used interchangeably herein to refer to a packaged item or items. In certain contexts, the terms “wireless tracking system,” “hierarchical communications network,” “distributed agent operating system,” and the like are used interchangeably herein to refer to a system or network of wireless nodes. INTRODUCTION This specification describes a low-cost, multi-function adhesive tape platform with a form factor that unobtrusively integrates the components useful for implementing a combination of different asset tracking and management functions and also is able to perform a useful ancillary function that otherwise would have to be performed with the attendant need for additional materials, labor, and expense. 
In an aspect, the adhesive tape platform is implemented as a collection of adhesive products that integrate wireless communications and sensing components within a flexible adhesive structure in a way that not only provides a cost-effective platform for interconnecting, optimizing, and protecting the components of the tracking system but also maintains the flexibility needed to function as an adhesive product that can be deployed seamlessly and unobtrusively into various asset management and tracking applications and workflows, including person and object tracking applications, and asset management workflows such as manufacturing, storage, shipping, delivery, and other logistics associated with moving products and other physical objects, including logistics, sensing, tracking, locationing, warehousing, parking, safety, construction, event detection, road management and infrastructure, security, and healthcare. In some examples, the adhesive tape platforms are used in various aspects of asset management, including sealing assets, transporting assets, tracking assets, monitoring the conditions of assets, inventorying assets, and verifying asset security. In these examples, the assets typically are transported from one location to another by truck, train, ship, or aircraft or within premises, e.g., warehouses by forklift, trolleys etc. In disclosed examples, an adhesive tape platform includes a plurality of segments that can be separated from the adhesive product (e.g., by cutting, tearing, peeling, or the like) and adhesively attached to a variety of different surfaces to inconspicuously implement any of a wide variety of different wireless communications based network communications and transducing (e.g., sensing, actuating, etc.) applications. 
Examples of such applications include: event detection applications, monitoring applications, security applications, notification applications, and tracking applications, including inventory tracking, asset tracking, person tracking, animal (e.g., pet) tracking, manufactured parts tracking, and vehicle tracking. In example embodiments, each segment of an adhesive tape platform is equipped with an energy source, wireless communication functionality, transducing functionality, and processing functionality that enable the segment to perform one or more transducing functions and report the results to a remote server or other computer system directly or through a network of tapes. The components of the adhesive tape platform are encapsulated within a flexible adhesive structure that protects the components from damage while maintaining the flexibility needed to function as an adhesive tape (e.g., duct tape or a label) for use in various applications and workflows. In addition to single function applications, example embodiments also include multiple transducers (e.g., sensing and/or actuating transducers) that extend the utility of the platform by, for example, providing supplemental information and functionality relating characteristics of the state and or environment of, for example, an article, object, vehicle, or person, over time. Systems and processes for fabricating flexible multifunction adhesive tape platforms in efficient and low-cost ways also are described. In addition to using roll-to-roll and/or sheet-to-sheet manufacturing techniques, the fabrication systems and processes are configured to optimize the placement and integration of components within the flexible adhesive structure to achieve high flexibility and ruggedness. These fabrication systems and processes are able to create useful and reliable adhesive tape platforms that can provide local sensing, wireless transmitting, and locationing functionalities. 
Such functionality together with the low cost of production is expected to encourage the ubiquitous deployment of adhesive tape platform segments and thereby alleviate at least some of the problems arising from gaps in conventional infrastructure coverage that prevent continuous monitoring, event detection, security, tracking, and other asset tracking and management applications across heterogeneous environments. Adhesive Tape Platform FIG.1Ashows an example asset10that is sealed for shipment using an example adhesive tape platform12that includes embedded components of a wireless transducing circuit14(collectively referred to herein as a “tape node”). In this example, a length13of the adhesive tape platform12is dispensed from a roll16and affixed to the asset10. The adhesive tape platform12includes an adhesive side18and a non-adhesive side20. The adhesive tape platform12can be dispensed from the roll16in the same way as any conventional packing tape, shipping tape, or duct tape. For example, the adhesive tape platform12may be dispensed from the roll16by hand, laid across the seam where the two top flaps of the asset10meet, and cut to a suitable length either by hand or using a cutting instrument (e.g., scissors or an automated or manual tape dispenser). Examples of such tapes include tapes having non-adhesive sides20that carry one or more coatings or layers (e.g., colored, light reflective, light absorbing, and/or light emitting coatings or layers). Referring toFIG.1B, in some examples, the non-adhesive side20of the length13of the adhesive tape platform12includes writing or other markings that convey instructions, warnings, or other information to a person or machine (e.g., a bar code reader), or may simply be decorative and/or entertaining. For example, different types of adhesive tape platforms may be marked with distinctive colorations to distinguish one type of adhesive tape platform from another. 
In the illustrated example, the length 13 of the adhesive tape platform 12 includes a two-dimensional bar code (e.g., a QR Code) 22, written instructions 24 (i.e., "Cut Here"), and an associated cut line 26 that indicates where the user should cut the adhesive tape platform 12. The written instructions 24 and the cut line 26 typically are printed or otherwise marked on the top non-adhesive surface 20 of the adhesive tape platform 12 during manufacture. The two-dimensional bar code 22, on the other hand, may be marked on the non-adhesive surface 20 of the adhesive tape platform 12 during the manufacture of the adhesive product 12 or, alternatively, may be marked on the non-adhesive surface 20 of the adhesive tape platform 12 as needed using, for example, a printer or other marking device. In order to avoid damage to the functionality of the segments of the adhesive tape platform 12, the cut lines 26 typically demarcate the boundaries between adjacent segments at locations that are free of any active components of the wireless transducing circuit 14. The spacing between the wireless transducing circuit components 14 and the cut lines 26 may vary depending on the intended communication, transducing and/or adhesive taping application. In the example illustrated in FIG. 1A, the length of the adhesive tape platform 12 that is dispensed to seal the asset 10 corresponds to a single segment of the adhesive tape platform 12. In other examples, the length of the adhesive tape platform 12 needed to seal an asset or otherwise serve the adhesive function for which the adhesive tape platform 12 is being applied may include multiple segments 13 of the adhesive tape platform 12, one or more of which segments 13 may be activated upon cutting the length of the adhesive tape platform 12 from the roll 16 and/or applying the length of the adhesive tape platform to the asset 10.
In some examples, the transducing components14that are embedded in one or more segments13of the adhesive tape platform12are activated when the adhesive tape platform12is cut along the cut line26. In these examples, the adhesive tape platform12includes one or more embedded energy sources (e.g., thin film batteries, which may be printed, or conventional cell batteries, such as conventional watch style batteries, rechargeable batteries, or other energy storage device, such as a super capacitor or charge pump) that supply power to the transducing components14in one or more segments of the adhesive tape platform12in response to being separated from the adhesive tape platform12(e.g., along the cut line26). In some examples, each segment13of the adhesive tape platform12includes its own respective energy source including energy harvesting elements that can harvest energy from the environment. In some of these examples, each energy source is configured to only supply power to the components in its respective adhesive tape platform segment regardless of the number of contiguous segments13that are in a given length of the adhesive tape platform12. In other examples, when a given length of the adhesive tape platform12includes multiple segments13, the energy sources in the respective segments13are configured to supply power to the transducing components14in all of the segments13in the given length of the adhesive tape platform12. In some of these examples, the energy sources are connected in parallel and concurrently activated to power the transducing components14in all of the segments13at the same time. In other examples, the energy sources are connected in parallel and alternately activated to power the transducing components14in respective ones of the adhesive tape platform segments13at different time periods, which may or may not overlap. 
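The two power-sharing policies just described for a multi-segment length of tape (all parallel energy sources active at once, or sources activated in alternation over time periods) can be sketched schematically. The policy names and slot-based alternation scheme below are invented for illustration only.

```python
# Schematic sketch of the two energy-source activation policies described
# above for a length of tape containing several segments. "concurrent"
# powers all sources at once; "alternating" rotates one source per time
# slot. Policy names and the round-robin rule are hypothetical.

def active_sources(policy, num_sources, time_slot):
    """Return the indices of the energy sources enabled in this slot."""
    if policy == "concurrent":
        return list(range(num_sources))   # all parallel sources at once
    if policy == "alternating":
        return [time_slot % num_sources]  # one source per slot, rotating
    raise ValueError(f"unknown policy: {policy}")
```

An alternating schedule like this would spread depletion across the segments' batteries; the specification also permits overlapping activation periods, which this simple round-robin does not model.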
FIG. 2 shows an example adhesive tape platform 30 that includes a set of adhesive tape platform segments 32, each of which includes a respective set of embedded wireless transducing circuit components 34, and a backing sheet 36 with a release coating that prevents the adhesive segments 32 from adhering strongly to the backing sheet 36. Each adhesive tape platform segment 32 includes an adhesive side facing the backing sheet 36, and an opposing non-adhesive side 40. In this example, a particular segment 32′ of the adhesive tape platform 30 has been removed from the backing sheet 36 and affixed to an envelope 44. Each segment 32 of the adhesive tape platform 30 can be removed from the backing sheet 36 in the same way that adhesive labels can be removed from a conventional sheet of adhesive labels (e.g., by manually peeling a segment 32 from the backing sheet 36). In general, the non-adhesive side 40′ of the segment 32′ may include any type of writing, markings, decorative designs, or other ornamentation. In the illustrated example, the non-adhesive side 40′ of the segment 32′ includes writing or other markings that correspond to a destination address for the envelope 44. The envelope 44 also includes a return address 46 and, optionally, a postage stamp or mark 48. In some examples, segments of the adhesive tape platform 12 are deployed by a human operator. The human operator may be equipped with a mobile phone or other device that allows the operator to authenticate and initialize the adhesive tape platform 12. In addition, the operator can take a picture of an asset including the adhesive tape platform and any barcodes associated with the asset and, thereby, create a persistent record that links the adhesive tape platform 12 to the asset. In addition, the human operator typically will send the picture to a network service and/or transmit the picture to the adhesive tape platform 12 for storage in a memory component of the adhesive tape platform 12.
In some examples, the wireless transducing circuit components 34 that are embedded in a segment 32 of the adhesive tape platform 12 are activated when the segment 32 is removed from the backing sheet 36. In some of these examples, each segment 32 includes an embedded capacitive sensing system that can sense a change in capacitance when the segment 32 is removed from the backing sheet 36. As explained in detail below, a segment 32 of the adhesive tape platform 30 includes one or more embedded energy sources (e.g., thin film batteries, common disk-shaped cell batteries, or rechargeable batteries or other energy storage devices, such as a super capacitor or charge pump) that can be configured to supply power to the wireless transducing circuit components 34 in the segment 32 in response to the detection of a change in capacitance between the segment 32 and the backing sheet 36 as a result of removing the segment 32 from the backing sheet 36. FIG. 3 shows a block diagram of the components of an example wireless transducing circuit 70 that includes a number of communication systems 72, 74. Example communication systems 72, 74 include a GPS system that includes a GPS receiver circuit 82 (e.g., a receiver integrated circuit) and a GPS antenna 84, and one or more wireless communication systems each of which includes a respective transceiver circuit 86 (e.g., a transceiver integrated circuit) and a respective antenna 88. Example wireless communication systems include a cellular communication system (e.g., GSM/GPRS), a Wi-Fi communication system, an RF communication system (e.g., LoRa), a Bluetooth communication system (e.g., a Bluetooth Low Energy system), a Z-wave communication system, and a ZigBee communication system.
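The capacitance-based activation rule described above (power is gated to the circuit when the sensed capacitance between segment and backing sheet changes on peeling) amounts to a threshold test. The direction of the change and the threshold value below are assumptions for illustration; the specification only requires detecting a change.

```python
# Sketch of the capacitive wake-up rule described above: a sufficiently
# large drop in measured capacitance (segment peeled from the release-
# coated backing sheet) enables power to the wireless transducing circuit.
# The 5.0 pF threshold and the assumption that peeling *lowers*
# capacitance are hypothetical.

def should_activate(cap_baseline_pf, cap_now_pf, drop_threshold_pf=5.0):
    """Activate when capacitance falls by more than the threshold."""
    return (cap_baseline_pf - cap_now_pf) > drop_threshold_pf
```

A real implementation would debounce the measurement and latch the activated state so a transient reading cannot power the segment back down.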
The wireless transducing circuit 70 also includes a processor 90 (e.g., a microcontroller or microprocessor), one or more energy storage devices 92 (e.g., non-rechargeable or rechargeable printed flexible battery, conventional single or multiple cell battery, and/or a super capacitor or charge pump), and one or more transducers 94 (e.g., sensors and/or actuators, and, optionally, one or more energy harvesting transducer components). In some examples, the conventional single or multiple cell battery may be a watch style disk or button cell battery that has an associated electrical connection apparatus (e.g., a metal clip) that electrically connects the electrodes of the battery to contact pads on the flexible circuit 116. Examples of sensing transducers 94 include a capacitive sensor, an altimeter, a gyroscope, an accelerometer, a temperature sensor, a strain sensor, a pressure sensor, a piezoelectric sensor, a weight sensor, an optical or light sensor (e.g., a photodiode or a camera), an acoustic or sound sensor (e.g., a microphone), a smoke detector, a radioactivity sensor, a chemical sensor (e.g., an explosives detector), a biosensor (e.g., a blood glucose biosensor, odor detectors, antibody based pathogen, food, and water contaminant and toxin detectors, DNA detectors, microbial detectors, pregnancy detectors, and ozone detectors), a magnetic sensor, an electromagnetic field sensor, and a humidity sensor. Examples of actuating (e.g., energy emitting) transducers 94 include light emitting components (e.g., light emitting diodes and displays), electro-acoustic transducers (e.g., audio speakers), electric motors, and thermal radiators (e.g., an electrical resistor or a thermoelectric cooler).
In some examples, the wireless transducing circuit70includes a memory96for storing data, including, e.g., profile data, state data, event data, sensor data, localization data, security data, and one or more unique identifiers (ID)98associated with the wireless transducing circuit70, such as a product ID, a type ID, and a media access control (MAC) ID, and control code99. In some examples, the memory96may be incorporated into one or more of the processor90or transducers94, or may be a separate component that is integrated in the wireless transducing circuit70as shown inFIG.3. The control code typically is implemented as programmatic functions or program modules that control the operation of the wireless transducing circuit70, including a tape node communication manager that manages the manner and timing of tape node communications, a tape node power manager that manages power consumption, and a tape node connection manager that controls whether connections with other tape nodes are secure connections or unsecure connections, and a tape node storage manager that securely manages the local data storage on the node. The tape node connection manager ensures the level of security required by the end application and supports various encryption mechanisms. The tape node power manager and tape communication manager work together to optimize the battery consumption for data communication. In some examples, execution of the control code by the different types of tape nodes described herein may result in the performance of similar or different functions. FIG.4is a top view of a portion of an example flexible adhesive tape platform100that shows a first segment102and a portion of a second segment104. Each segment102,104of the flexible adhesive tape platform100includes a respective set106,108of the components of the wireless transducing circuit70. The segments102,104and their respective sets of components106,108typically are identical and configured in the same way. 
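The cooperating control-code modules named above (communication, power, connection, and storage managers) can be outlined as a skeleton. The class, attribute names, and the doubling policy are all invented; the specification names the managers but not their interfaces.

```python
# Schematic skeleton, for illustration only, of the control-code modules
# described above. The patent names a communication manager, power
# manager, connection manager, and storage manager but does not specify
# an API; everything below is hypothetical.

class TapeNodeControl:
    def __init__(self):
        self.secure_connections = True  # connection manager policy
        self.tx_interval_s = 60         # communication manager schedule

    def extend_battery(self):
        # Power and communication managers cooperate: lengthening the
        # transmit interval lowers radio duty cycle and battery draw.
        self.tx_interval_s *= 2
        return self.tx_interval_s
```

The point of the sketch is the cooperation: power savings come from the communication schedule, not from a power manager acting alone, mirroring the "work together to optimize the battery consumption" language above.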
In some other embodiments, however, the segments102,104and/or their respective sets of components106,108are different and/or configured in different ways. For example, in some examples, different sets of the segments of the flexible adhesive tape platform100have different sets or configurations of tracking and/or transducing components that are designed and/or optimized for different applications, or different sets of segments of the flexible adhesive tape platform may have different ornamentations (e.g., markings on the exterior surface of the platform) and/or different (e.g., alternating) lengths. An example method of fabricating the adhesive tape platform100(seeFIG.4) according to a roll-to-roll fabrication process is described in connection with FIGS. 6, 7A, and 7B of U.S. Pat. No. 10,262,255, issued Apr. 16, 2019, the entirety of which is incorporated herein by reference. The instant specification describes an example system of adhesive tape platforms (also referred to herein as “tape nodes”) that can be used to implement a low-cost wireless network infrastructure for performing monitoring, tracking, and other asset management functions relating to, for example, parcels, persons, tools, equipment and other physical assets and objects. The example system includes a set of three different types of tape nodes that have different respective functionalities and different respective cover markings that visually distinguish the different tape node types from one another. In one non-limiting example, the covers of the different tape node types are marked with different colors (e.g., white, green, and black). In the illustrated examples, the different tape node types are distinguishable from one another by their respective wireless communications capabilities and their respective sensing capabilities. 
FIG. 5A shows a cross-sectional side view of a portion of an example segment 102 of the flexible adhesive tape platform 100 that includes a respective set of the components of the wireless transducing circuit 106 corresponding to the first tape node type (i.e., white). The flexible adhesive tape platform segment 102 includes an adhesive layer 112, an optional flexible substrate 110, and an optional adhesive layer 114 on the bottom surface of the flexible substrate 110. If the bottom adhesive layer 114 is present, a release liner (not shown) may be (weakly) adhered to the bottom surface of the adhesive layer 114. In some examples, the adhesive layer 114 includes an adhesive (e.g., an acrylic foam adhesive) that has a high bond strength that is sufficient to prevent removal of the adhesive segment 102 from a surface on which the adhesive layer 114 is adhered without destroying the physical or mechanical integrity of the adhesive segment 102 and/or one or more of its constituent components. In some examples, the optional flexible substrate 110 is implemented as a prefabricated adhesive tape that includes the adhesive layers 112, 114 and the optional release liner. In other examples, the adhesive layers 112, 114 are applied to the top and bottom surfaces of the flexible substrate 110 during the fabrication of the adhesive tape platform 100. The adhesive layer 112 bonds the flexible substrate 110 to a bottom surface of a flexible circuit 116 that includes one or more wiring layers (not shown) that connect the processor 90, a low power wireless communication interface 81 (e.g., a Zigbee interface, a Bluetooth® Low Energy (BLE) interface, or other low power communication interface), a timer circuit 83, transducing and/or energy harvesting component(s) 94 (if present), the memory 96, and other components in a device layer 122 to each other and to the energy storage component 92 and, thereby, enable the transducing, tracking, and other functionalities of the flexible adhesive tape platform segment 102.
The low power wireless communication interface 81 typically includes one or more of the antennas 84, 88 and one or more of the wireless circuits 82, 86.

FIG. 5B shows a cross-sectional side view of a portion of an example segment 103 of the flexible adhesive tape platform 100 that includes a respective set of the components of the wireless transducing circuit 106 corresponding to the second tape node type (i.e., green). In this example, the flexible adhesive tape platform segment 103 differs from the segment 102 shown in FIG. 5A by the inclusion of a medium power communication interface 85 (e.g., a LoRa interface) in addition to the low power communication interface that is present in the first tape node type (i.e., white). The medium power communication interface has a longer communication range than the low power communication interface. In some examples, one or more other components of the flexible adhesive tape platform segment 103 differ, for example, in functionality or capacity (e.g., a larger energy source).

FIG. 5C shows a cross-sectional side view of a portion of an example segment 105 of the flexible adhesive tape platform 100 that includes a respective set of the components of the wireless transducing circuit 106 corresponding to the third tape node type (i.e., black). In this example, the flexible adhesive tape platform segment 105 includes a high power communications interface 87 (e.g., a cellular interface, such as GSM/GPRS) and an optional medium and/or low power communications interface 85. The high power communications interface provides global coverage through available infrastructure (e.g., the cellular network). In some examples, one or more other components of the flexible adhesive tape platform segment 105 differ, for example, in functionality or capacity (e.g., a larger energy source).

FIGS. 5A-5C show examples in which the cover layer 128 of the flexible adhesive tape platform 100 includes one or more interfacial regions 129 positioned over one or more of the transducers 94.
In some examples, one or more of the interfacial regions 129 have features, properties, compositions, dimensions, and/or characteristics that are designed to improve the operating performance of the platform 100 for specific applications. In some examples, the flexible adhesive tape platform 100 includes multiple interfacial regions 129 over respective transducers 94, which may be the same or different depending on the target applications. Example interfacial regions include an opening, an optically transparent window, and/or a membrane located in the interfacial region 129 of the cover 128 that is positioned over the one or more transducers and/or energy harvesting components 94. Additional details regarding the structure and operation of example interfacial regions 129 are described in U.S. Provisional Patent Application No. 62/680,716, filed Jun. 5, 2018, PCT Patent Application No. PCT/US2018/064919, filed Dec. 11, 2018, U.S. Pat. No. 10,885,420, issued Jan. 4, 2021, U.S. Pat. No. 10,902,310, issued Jan. 25, 2021, and U.S. Provisional Patent Application No. 62/670,712, filed May 11, 2018, all of which are incorporated herein by reference in their entirety.

In some examples, a flexible polymer layer 124 encapsulates the device layer 122 and thereby reduces the risk of damage that may result from the intrusion of contaminants and/or liquids (e.g., water) into the device layer 122. The flexible polymer layer 124 also planarizes the device layer 122. This facilitates optional stacking of additional layers on the device layer 122 and also distributes forces generated in, on, or across the adhesive tape platform segment 102 so as to reduce potentially damaging asymmetric stresses that might be caused by the application of bending, torqueing, pressing, or other forces that may be applied to the flexible adhesive tape platform segment 102 during use. In the illustrated example, a flexible cover 128 is bonded to the planarizing polymer layer 124 by an adhesive layer (not shown).
The flexible cover 128 and the flexible substrate 110 may have the same or different compositions depending on the intended application. In some examples, one or both of the flexible cover 128 and the flexible substrate 110 include flexible film layers and/or paper substrates, where the film layers may have reflective surfaces or reflective surface coatings. Example compositions for the flexible film layers include polymer films, such as polyester, polyimide, polyethylene terephthalate (PET), and other plastics. The optional adhesive layer on the bottom surface of the flexible cover 128 and the adhesive layers 112, 114 on the top and bottom surfaces of the flexible substrate 110 typically include a pressure-sensitive adhesive (e.g., a silicon-based adhesive). In some examples, the adhesive layers are applied to the flexible cover 128 and the flexible substrate 110 during manufacture of the adhesive tape platform 100 (e.g., during a roll-to-roll or sheet-to-sheet fabrication process). In other examples, the flexible cover 128 may be implemented by a prefabricated single-sided pressure-sensitive adhesive tape and the flexible substrate 110 may be implemented by a prefabricated double-sided pressure-sensitive adhesive tape; both kinds of tape may be readily incorporated into a roll-to-roll or sheet-to-sheet fabrication process. In some examples, the flexible polymer layer 124 is composed of a flexible epoxy (e.g., silicone).

In some examples, the energy storage device 92 is a flexible battery that includes a printed electrochemical cell, which includes a planar arrangement of an anode and a cathode and battery contact pads. In some examples, the flexible battery may include lithium-ion cells or nickel-cadmium electrochemical cells. The flexible battery typically is formed by a process that includes printing or laminating the electrochemical cells on a flexible substrate (e.g., a polymer film layer).
In some examples, other components may be integrated on the same substrate as the flexible battery. For example, the low power wireless communication interface 81 and/or the processor(s) 90 may be integrated on the flexible battery substrate. In some examples, one or more other components (e.g., the flexible antennas and the flexible interconnect circuits) also may be printed on the flexible battery substrate.

In some examples, the flexible circuit 116 is formed on a flexible substrate by printing, etching, or laminating circuit patterns on the flexible substrate. In some examples, the flexible circuit 116 is implemented by one or more of a single-sided flex circuit, a double access or back bared flex circuit, a sculpted flex circuit, a double-sided flex circuit, a multi-layer flex circuit, a rigid flex circuit, and a polymer thick film flex circuit. A single-sided flexible circuit has a single conductor layer made of, for example, a metal or conductive (e.g., metal filled) polymer on a flexible dielectric film. A double access or back bared flexible circuit has a single conductor layer but is processed so as to allow access to selected features of the conductor pattern from both sides. A sculpted flex circuit is formed using a multi-step etching process that produces a flex circuit that has finished copper conductors that vary in thickness along their respective lengths. A multilayer flex circuit has three or more layers of conductors, where the layers typically are interconnected using plated through holes. A rigid flex circuit is a hybrid construction consisting of rigid and flexible substrates that are laminated together into a single structure, where the layers typically are electrically interconnected via plated through holes.
In polymer thick film (PTF) flex circuits, the circuit conductors are printed onto a polymer base film, where there may be a single conductor layer or multiple conductor layers that are insulated from one another by respective printed insulating layers.

In the example flexible adhesive tape platform segments 102 shown in FIGS. 5A-5C, the flexible circuit 116 is a single access flex circuit that interconnects the components of the adhesive tape platform on a single side of the flexible circuit 116. In other examples, the flexible circuit 116 is a double access flex circuit that includes a front-side conductive pattern that interconnects the low power communications interface 81, the timer circuit 83, the processor 90, the one or more transducers 94 (if present), and the memory 96, and allows through-hole access (not shown) to a back-side conductive pattern that is connected to the flexible battery (not shown). In these examples, the front-side conductive pattern of the flexible circuit 116 connects the communications circuits 82, 86 (e.g., receivers, transmitters, and transceivers) to their respective antennas 84, 88 and to the processor 90, and also connects the processor 90 to the one or more sensors 94 and the memory 96. The back-side conductive pattern connects the active electronics (e.g., the processor 90, the communications circuits 82, 86, and the transducers) on the front side of the flexible circuit 116 to the electrodes of the flexible battery via one or more through holes in the substrate of the flexible circuit 116.

Depending on the target application, the wireless transducing circuits 70 are distributed across the flexible adhesive tape platform 100 according to a specified sampling density, which is the number of wireless transducing circuits 70 for a given unit size (e.g., length or area) of the flexible adhesive tape platform 100.
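The sampling-density bookkeeping just described (circuits per unit size of tape, and the resulting circuit count for a given length of tape) can be sketched as follows. This is an illustrative model only; the per-meter unit and function names are assumptions, not part of the specification:

```python
def circuits_for_asset(sampling_density_per_m, tape_length_m):
    """Number of wireless transducing circuits on the length of tape needed
    to seal an asset: sampling density (circuits per meter) x tape length."""
    return sampling_density_per_m * tape_length_m

def pick_platform(available_densities, tape_length_m, desired_circuits):
    """Choose the lowest sampling density that still yields at least the
    desired number of circuits (e.g., one for low value goods, several for
    high value goods) for the given tape length."""
    for density in sorted(available_densities):
        if circuits_for_asset(density, tape_length_m) >= desired_circuits:
            return density
    return max(available_densities)  # fall back to the densest platform
```

An automated packaging system could use a selection rule like `pick_platform` to match a color-coded tape platform to the redundancy required for a given shipment.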
In some examples, a set of multiple flexible adhesive tape platforms 100 are provided that include different respective sampling densities in order to seal different asset sizes with a desired number of wireless transducing circuits 70. In particular, the number of wireless transducing circuits per asset size is given by the product of the sampling density specified for the adhesive tape platform and the respective size of the adhesive tape platform 100 needed to seal the asset. This allows an automated packaging system to select the appropriate type of flexible adhesive tape platform 100 to use for sealing a given asset with the desired redundancy (if any) in the number of wireless transducing circuits 70. In some example applications (e.g., shipping low value goods), only one wireless transducing circuit 70 is used per asset, whereas in other applications (e.g., shipping high value goods) multiple wireless transducing circuits 70 are used per asset. Thus, a flexible adhesive tape platform 100 with a lower sampling density of wireless transducing circuits 70 can be used for the former application, and a flexible adhesive tape platform 100 with a higher sampling density of wireless transducing circuits 70 can be used for the latter application. In some examples, the flexible adhesive tape platforms 100 are color-coded or otherwise marked to indicate the respective sampling densities with which the wireless transducing circuits 70 are distributed across the different types of adhesive tape platforms 100.

Referring to FIG. 6A, in some examples, each of one or more of the segments 270, 272 of a flexible adhesive tape platform 274 includes a respective one-time wake circuit 275 that delivers power from the respective energy source 276 to the respective wireless circuit 278 (e.g., a processor, one or more transducers, and one or more wireless communications circuits) in response to an event.
In some of these examples, the wake circuit 275 is configured to transition from an off state to an on state when the voltage on the wake node 277 exceeds a threshold level, at which point the wake circuit powers on the segment 270. In the illustrated example, this occurs when the user separates the segment from the adhesive tape platform 274, for example, by cutting across the adhesive tape platform 274 at a designated location (e.g., along a designated cut-line 280). In particular, in its initial, un-cut state, a minimal amount of current flows through the resistors R1 and R2. As a result, the voltage on the wake node 277 remains below the threshold turn-on level. After the user cuts across the adhesive tape platform 274 along the designated cut-line 280, the user creates an open circuit in the loop 282, which pulls the voltage of the wake node above the threshold level and turns on the wake circuit 275. As a result, the voltage across the energy source 276 will appear across the wireless circuit 278 and, thereby, turn on the segment 270. In particular embodiments, the resistance value of resistor R1 is greater than the resistance value of R2. In some examples, the resistance values of resistors R1 and R2 are selected based on the overall design of the adhesive product system (e.g., the target wake voltage level and a target leakage current).

In some examples, each of one or more of the segments of an adhesive tape platform includes a respective sensor and a respective wake circuit that delivers power from the respective energy source to one or more of the respective wireless circuit components 278 in response to an output of the sensor. In some examples, the respective sensor is a strain sensor that produces a wake signal based on a change in strain in the respective segment.
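The cut-to-wake behavior of the FIG. 6A circuit can be sketched with an idealized model: while the loop is intact, the wake node sits at a divider output held low (since R1 > R2); cutting the loop opens it and the node is pulled toward the source rail. The exact topology and component values below are assumptions for illustration, not the patent's schematic:

```python
def wake_node_voltage(v_source, r1, r2, loop_cut):
    """Idealized wake-node voltage. Uncut: divider output, kept below the
    threshold because R1 > R2. Cut: open loop pulls the node to the rail."""
    if loop_cut:
        return v_source
    return v_source * r2 / (r1 + r2)

def wake_circuit_on(v_source, r1, r2, v_threshold, loop_cut):
    """The wake circuit turns on once the wake-node voltage exceeds the
    threshold, connecting the energy source across the wireless circuit."""
    return wake_node_voltage(v_source, r1, r2, loop_cut) > v_threshold
```

With, say, a 3.0 V source, R1 = 900 kΩ, and R2 = 100 kΩ (illustrative values), the uncut node sits at 0.3 V, well under a 1.5 V threshold, and jumps to 3.0 V once the tape is cut; large resistances keep the pre-cut leakage current small, matching the design trade-off the text mentions.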
In some of these examples, the strain sensor is affixed to an adhesive tape platform and configured to detect the stretching of the tracking adhesive tape platform segment as the segment is being peeled off a roll or a sheet of the adhesive tape platform. In some examples, the respective sensor is a capacitive sensor that produces a wake signal based on a change in capacitance in the respective segment. In some of these examples, the capacitive sensor is affixed to an adhesive tape platform and configured to detect the separation of the tracking adhesive tape platform segment from a roll or a sheet of the adhesive tape platform. In some examples, the respective sensor is a flex sensor that produces a wake signal based on a change in curvature in the respective segment. In some of these examples, the flex sensor is affixed to an adhesive tape platform and configured to detect bending of the tracking adhesive tape platform segment as the segment is being peeled off a roll or a sheet of the adhesive tape platform. In some examples, the respective sensor is a near field communications sensor that produces a wake signal based on a change in inductance in the respective segment.

FIG. 6B shows another example of an adhesive tape platform 294 that delivers power from the respective energy source 276 to the respective tracking circuit 278 (e.g., a processor, one or more transducers, and one or more wireless communications circuits) in response to an event. This example is similar in structure and operation to the adhesive tape platform 274 shown in FIG. 6A, except that the wake circuit 275 is implemented by a switch 296 that is configured to transition from an open state to a closed state when the voltage on the switch node 277 exceeds a threshold level. In the initial state of the adhesive tape platform 294, the voltage on the switch node is below the threshold level as a result of the low current level flowing through the resistors R1 and R2.
After the user cuts across the adhesive tape platform 294 along the designated cut-line 280, the user creates an open circuit in the loop 282, which pulls the voltage on the switch node above the threshold level to close the switch 296 and turn on the wireless circuit 278.

FIG. 6C shows a diagrammatic cross-sectional front view of an example adhesive tape platform 300 and a perspective view of an example asset 302. Instead of activating the adhesive tape platform in response to separating a segment of the adhesive tape platform from a roll or a sheet of the adhesive tape platform, this example is configured to supply power from the energy source 302 to turn on the wireless transducing circuit 306 in response to establishing an electrical connection between two power terminals 308, 310 that are integrated into the adhesive tape platform. In particular, each segment of the adhesive tape platform 300 includes a respective set of embedded tracking components, an adhesive layer 312, and an optional backing sheet 314 with a release coating that prevents the segments from adhering strongly to the backing sheet 314. In some examples, the power terminals 308, 310 are composed of an electrically conductive material (e.g., a metal, such as copper) that may be printed or otherwise patterned and/or deposited on the backside of the adhesive tape platform 300. In operation, the adhesive tape platform can be activated by removing the backing sheet 314 and applying the exposed adhesive layer 312 to a surface that includes an electrically conductive region 316. In the illustrated embodiment, the electrically conductive region 316 is disposed on a portion of the asset 302.
When the adhesive backside of the adhesive tape platform 300 is adhered to the asset with the exposed terminals 308, 310 aligned and in contact with the electrically conductive region 316 on the asset 302, an electrical connection is created through the electrically conductive region 316 between the exposed terminals 308, 310 that completes the circuit and turns on the wireless transducing circuit 306. In particular embodiments, the power terminals 308, 310 are electrically connected to any respective nodes of the wireless transducing circuit 306 that would result in the activation of the tracking circuit 306 in response to the creation of an electrical connection between the power terminals 308, 310.

In some examples, after a tape node is turned on, it will communicate with the network service to confirm that the user/operator who is associated with the tape node is an authorized user who has authenticated himself or herself to the network service 54. In these examples, if the tape node cannot confirm that the user/operator is an authorized user, the tape node will turn itself off.

Deployment of Tape Nodes

FIG. 7 shows an example network communications environment 400 (also referred to herein as an "IOT system" 400) that includes a network 402 that supports communications between one or more servers 404 executing one or more applications of a network service 408, mobile gateways 410, 412, a stationary gateway 414, and various types of tape nodes that are associated with various assets (e.g., parcels, equipment, tools, persons, and other things). Each member of the IOT system 400 may be referred to as a node of the IOT system 400, including the tape nodes, other wireless IOT devices, gateways (stationary and mobile), client devices, and servers.
In some examples, the network 402 includes one or more network communication systems and technologies, including any one or more of wide area networks, local area networks, public networks (e.g., the internet), private networks (e.g., intranets and extranets), wired networks, and wireless networks. For example, the network 402 includes communications infrastructure equipment, such as a geolocation satellite system 416 (e.g., GPS, GLONASS, and NAVSTAR), cellular communication systems (e.g., GSM/GPRS), Wi-Fi communication systems, RF communication systems (e.g., LoRa), Bluetooth communication systems (e.g., a Bluetooth Low Energy system), Z-wave communication systems, and ZigBee communication systems.

In some examples, the one or more network service applications 406 leverage the above-mentioned communications technologies to create a hierarchical wireless network of tape nodes that improves asset management operations by reducing costs and improving efficiency in a wide range of processes, including asset packaging, asset transporting, asset tracking, asset condition monitoring, asset inventorying, and asset security verification. Communication across the network is secured by a variety of different security mechanisms. In the case of existing infrastructure, the communication uses the infrastructure's security mechanisms; in the case of communications among tape nodes, the communication is secured through a custom security mechanism. In certain cases, tape nodes can also be configured to support blockchain to protect the transmitted and stored data.

A set of tape nodes can be configured by the network service 408 to create a hierarchical communications network. The hierarchy can be defined in terms of one or more factors, including functionality (e.g., wireless transmission range or power), role (e.g., master tape node vs. peripheral tape node), or cost (e.g., a tape node equipped with a cellular transceiver vs.
a peripheral tape node equipped with a Bluetooth LE transceiver). Tape nodes can be assigned to different levels of a hierarchical network according to one or more of the above-mentioned factors. For example, the hierarchy can be defined in terms of communication range or power, where tape nodes with higher power or longer range transceivers are arranged at a higher level of the hierarchy than tape nodes with lower power or shorter range transceivers. In another example, the hierarchy is defined in terms of role, where, e.g., a master tape node is programmed to bridge communications between a designated group of peripheral tape nodes and a gateway node or server node. The problem of finding an optimal hierarchical structure can be formulated as an optimization problem over factors such as node battery capacity, power consumption in various modes of operation, desired latency, and the external environment, and can be solved using modern optimization methods (e.g., neural networks, artificial intelligence, and other machine learning computing systems) that use expected and historical data to create an optimal solution and to create algorithms for adaptively modifying the system's behavior in the field.

The tape nodes may be deployed by automated equipment or manually. In this process, a tape node typically is separated from a roll or sheet and adhered to an asset, to another mobile object (e.g., a vehicle, such as a delivery truck), or to a stationary object (e.g., a structural element of a warehouse or building). This process activates the tape node and causes the tape node to communicate with a server 404 of the network service 408. In this process, the tape node may communicate through one or more other tape nodes in the communication hierarchy. In this process, the network server 404 executes the network service application 406 to programmatically configure tape nodes that are deployed in the environment 400.
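The range/power-based hierarchy assignment described above can be sketched as a simple tiering rule over each node's radio complement. The tier names and the dict-based node representation are illustrative assumptions:

```python
def hierarchy_level(node):
    """Assign a hierarchy level from a node's radios: longer-range,
    higher-power transceivers sit at a higher level of the hierarchy.
    `node` is a dict with a 'radios' set of lowercase radio names."""
    radios = node["radios"]
    if "cellular" in radios:
        return 2  # long range: gateway tier (FIG. 5C type)
    if "lora" in radios or "wifi" in radios:
        return 1  # medium range: master tier (FIG. 5B type)
    return 0      # low power only, e.g. BLE: peripheral tier (FIG. 5A type)
```

A role-based hierarchy (master vs. peripheral) or a cost-based one could be layered on top of this by adding more keys to the node dict, along the lines the specification suggests.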
In some examples, there are multiple classes or types of tape nodes, where each tape node class has a different respective set of functionalities and/or capacities.

In some examples, the one or more network service servers 404 communicate over the network 402 with one or more gateways that are configured to send, transmit, forward, or relay messages to the network 402 and activated tape nodes that are associated with respective assets and within communication range. Example gateways include mobile gateways 410, 412 and a stationary gateway 414. In some examples, the mobile gateways 410, 412 and the stationary gateway 414 are able to communicate with the network 402 and with designated sets or groups of tape nodes.

In some examples, the mobile gateway 412 is a vehicle (e.g., a delivery truck or other mobile hub) that includes a wireless communications unit 416 that is configured by the network service 408 to communicate with a designated set of tape nodes, including a peripheral tape node 418 in the form of a label that is adhered to an asset 420 contained within a parcel 421 (e.g., an envelope), and is further configured to communicate with the network service 408 over the network 402. In some examples, the peripheral tape node 418 includes a lower power wireless communications interface of the type used in, e.g., tape node 102 (shown in FIG. 5A), and the wireless communications unit 416 is implemented by a tape node (e.g., one of tape node 103 or tape node 105, respectively shown in FIGS. 5B and 5C) that includes a lower power communications interface for communicating with tape nodes within range of the mobile gateway 412 and a higher power communications interface for communicating with the network 402. In this way, the tape nodes 418 and 416 create a hierarchical wireless network of nodes for transmitting, forwarding, bridging, relaying, or otherwise communicating wireless messages to, between, or on behalf of the peripheral tape node 418 and the network service 408 in a power-efficient and cost-effective way.
In some examples, the mobile gateway 410 is a mobile phone that is operated by a human operator and executes a client application 422 that is configured by the network service 408 to communicate with a designated set of tape nodes, including a master tape node 424 that is adhered to a parcel 426 (e.g., a box), and is further configured to communicate with the network service 408 over the network 402. In the illustrated example, the parcel 426 contains a first parcel labeled or sealed by a tape node 428 and containing a first asset 430, and a second parcel labeled or sealed by a tape node 432 and containing a second asset 434. As explained in detail below, the master tape node 424 communicates with each of the peripheral tape nodes 428, 432 and communicates with the mobile gateway 410 in accordance with a hierarchical wireless network of tape nodes. In some examples, each of the peripheral tape nodes 428, 432 includes a lower power wireless communications interface of the type used in, e.g., tape node 102 (shown in FIG. 5A), and the master tape node 424 is implemented by a tape node (e.g., tape node 103, shown in FIG. 5B) that includes a lower power communications interface for communicating with the peripheral tape nodes 428, 432 contained within the parcel 426, and a higher power communications interface for communicating with the mobile gateway 410. The master tape node 424 is operable to relay wireless communications between the tape nodes 428, 432 contained within the parcel 426 and the mobile gateway 410, and the mobile gateway 410 is operable to relay wireless communications between the master tape node 424 and the network service 408 over the wireless network 402. In this way, the master tape node 424 and the peripheral tape nodes 428 and 432 create a hierarchical wireless network of nodes for transmitting, forwarding, relaying, or otherwise communicating wireless messages to, between, or on behalf of the peripheral tape nodes 428, 432 and the network service 408 in a power-efficient and cost-effective way.
In some examples, the stationary gateway 414 is implemented by a server executing a server application that is configured by the network service 408 to communicate with a designated set 440 of tape nodes 442, 444, 446, 448 that are adhered to respective parcels containing respective assets 450, 452, 454, 456 on a pallet 458. In other examples, the stationary gateway 414 is implemented by a tape node (e.g., one of tape node 103 or tape node 105, respectively shown in FIGS. 5B and 5C) that is adhered to, for example, a wall, column, or other infrastructure component of the environment 400, and includes a lower power communications interface for communicating with tape nodes within range of the stationary gateway 414 and a higher power communications interface for communicating with the network 402. In one embodiment, each of the tape nodes 442-448 is a peripheral tape node and is configured by the network service 408 to communicate individually with the stationary gateway 414, which relays communications from the tape nodes 442-448 to the network service 408 over the communications network 402. In another embodiment, one of the tape nodes 442-448 at a time is configured as a master tape node that transmits, forwards, relays, or otherwise communicates wireless messages to, between, or on behalf of the other tape nodes on the pallet 458. In this embodiment, the master tape node may be determined by the tape nodes 442-448 or designated by the network service 408. In some examples, the tape node with the longest range or highest remaining power level is determined to be the master tape node. In some examples, when the power level of the current master tape node drops below a certain level (e.g., a fixed power threshold level or a threshold level relative to the power levels of one or more of the other tape nodes), another one of the tape nodes assumes the role of the master tape node.
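The master-election and handoff behavior just described (pick the node with the highest remaining power, and re-elect when the current master's power drops below a threshold) can be sketched as follows; the dict-based node representation is an assumption for illustration:

```python
def elect_master(nodes):
    """Pick the node with the highest remaining power level as master.
    (Range could serve as an additional criterion in a fuller model.)"""
    return max(nodes, key=lambda n: n["power"])

def maybe_handoff(current_master, nodes, threshold):
    """If the current master's power drops below the threshold, another
    one of the tape nodes assumes the master role."""
    if current_master["power"] < threshold:
        candidates = [n for n in nodes if n is not current_master]
        return elect_master(candidates)
    return current_master
```

The threshold here is fixed; the specification also contemplates a threshold relative to the power levels of the other tape nodes, which would replace the simple comparison with one against, e.g., the best candidate's level.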
In some examples, a master tape node 459 is adhered to the pallet 458 and is configured to perform the role of a master node for the tape nodes 442-448. In these ways, the tape nodes 442-448, 459 are configurable to create different hierarchical wireless networks of nodes for transmitting, forwarding, relaying, bridging, or otherwise communicating wireless messages with the network service 408 through the stationary gateway 414 and over the network 402 in a power-efficient and cost-effective way.

In the illustrated example, the stationary gateway 414 also is configured by the network service 408 to communicate with a designated set of tape nodes, including a master tape node 460 that is adhered to the inside of a door 462 of a shipping container 464, and is further configured to communicate with the network service 408 over the network 402. In the illustrated example, the shipping container 464 contains a number of parcels labeled or sealed by respective peripheral tape nodes 466 and containing respective assets. The master tape node 460 communicates with each of the peripheral tape nodes 466 and communicates with the stationary gateway 414 in accordance with a hierarchical wireless network of tape nodes. In some examples, each of the peripheral tape nodes 466 includes a lower power wireless communications interface of the type used in, e.g., tape node 102 (shown in FIG. 5A), and the master tape node 460 is implemented by a tape node (e.g., tape node 103, shown in FIG. 5B) that includes a lower power communications interface for communicating with the peripheral tape nodes 466 contained within the shipping container 464, and a higher power communications interface for communicating with the stationary gateway 414. In some examples, when the doors of the shipping container 464 are closed, the master tape node 460 is operable to communicate wirelessly with the peripheral tape nodes 466 contained within the shipping container 464.
In an example, the master tape node460is configured to collect sensor data from the peripheral tape nodes and, in some embodiments, process the collected data to generate, for example, one or more histograms from the collected data. When the doors of the shipping container464are open, the master tape node460is programmed to detect the door opening (e.g., with an accelerometer component of the master tape node460) and, in addition to reporting the door opening event to the network service408, the master tape node460is further programmed to transmit the collected data and/or the processed data in one or more wireless messages to the stationary gateway414. The stationary gateway414, in turn, is operable to transmit the wireless messages received from the master tape node460to the network service408over the wireless network402. Alternatively, in some examples, the stationary gateway414also is operable to perform operations on the data received from the master tape node460together with the same type of data produced by the master node459based on sensor data collected from the tape nodes442-448. In this way, the master tape node460and the peripheral tape nodes466create a hierarchical wireless network of nodes for transmitting, forwarding, relaying, or otherwise communicating wireless messages to, between, or on behalf of the peripheral tape nodes466and the network service408in a power-efficient and cost-effective way. In an example of the embodiment shown inFIG.7, there are three classes of tape nodes: a short range tape node, a medium range tape node, and a long range tape node, as respectively shown inFIGS.5A-5C. The short range tape nodes typically are adhered directly to parcels containing assets. In the illustrated example, the tape nodes418,428,432,442-448,466are short range tape nodes. The short range tape nodes typically communicate with a low power wireless communication protocol (e.g., Bluetooth LE, Zigbee, or Z-wave).
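The collect-then-report behavior described above (readings binned into a histogram, with a door-opening event flushing the processed data to the gateway) might look like the following sketch. The function names, bin edges, and queue representation are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch: a master tape node bins peripheral sensor readings
# into a histogram and, on a detected door-open event, reports both the
# event and the processed data toward the gateway.

def build_histogram(readings, edges):
    """Count readings falling into [edges[i], edges[i+1]) bins."""
    counts = [0] * (len(edges) - 1)
    for r in readings:
        for i in range(len(edges) - 1):
            if edges[i] <= r < edges[i + 1]:
                counts[i] += 1
                break
    return counts

def on_door_open(collected, edges, uplink):
    """Report the door event plus the histogram of collected data."""
    uplink.append({"event": "door_open",
                   "histogram": build_histogram(collected, edges)})

uplink_queue = []                  # messages bound for the stationary gateway
temps = [3.9, 4.2, 4.8, 5.1, 7.6]  # collected peripheral temperatures (deg C)
on_door_open(temps, edges=[0, 4, 6, 8], uplink=uplink_queue)
```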
The medium range tape nodes typically are adhered to objects (e.g., a box426and a shipping container464) that are associated with multiple parcels that are separated from the medium range tape nodes by a barrier or a large distance. In the illustrated example, the tape nodes424and460are medium range tape nodes. The medium range tape nodes typically communicate with a medium power wireless communication protocol (e.g., LoRa or Wi-Fi). The long-range tape nodes typically are adhered to mobile or stationary infrastructure of the wireless communication environment400. In the illustrated example, the mobile gateway tape node412and the stationary gateway tape node414are long range tape nodes. The long range tape nodes typically communicate with other nodes using a high power wireless communication protocol (e.g., a cellular data communication protocol). In some examples, the mobile gateway tape node412is adhered to a mobile vehicle (e.g., a truck). In these examples, the mobile gateway412may be moved to different locations in the environment400to assist in connecting other tape nodes to the server404. In some examples, the stationary gateway tape node414may be attached to a stationary structure (e.g., a wall) in the environment400with a known geographic location. In these examples, other tape nodes in the environment can determine their geographic location by querying the gateway tape node414.
Wireless Communications Network
FIG.8shows an example hierarchical wireless communications network of tape nodes470. In this example, the short range tape node472and the medium range tape node474communicate with one another over their respective low power wireless communication interfaces476,478. The medium range tape node474and the long range tape node480communicate with one another over their respective medium power wireless communication interfaces478,482.
The long range tape node480and the network server404communicate with one another over the high power wireless communication interface484. In some examples, the low power communication interfaces476,478establish wireless communications with one another in accordance with the Bluetooth LE protocol, the medium power communication interfaces478,482establish wireless communications with one another in accordance with the LoRa communications protocol, and the high power communication interface484establishes wireless communications with the server404in accordance with a cellular communications protocol. In some examples, the different types of tape nodes are deployed at different levels in the communications hierarchy according to their respective communications ranges, with the long range tape nodes generally at the top of the hierarchy, the medium range tape nodes generally in the middle of the hierarchy, and the short range tape nodes generally at the bottom of the hierarchy. In some examples, the different types of tape nodes are implemented with different feature sets that are associated with component costs and operational costs that vary according to their respective levels in the hierarchy. This allows system administrators flexibility to optimize the deployment of the tape nodes to achieve various objectives, including cost minimization, asset tracking, asset localization, and power conservation. In some examples, a server404of the network service408designates a tape node at a higher level in a hierarchical communications network as a master node of a designated set of tape nodes at a lower level in the hierarchical communications network. For example, the designated master tape node may be adhered to a parcel (e.g., a box, pallet, or shipping container) that contains one or more tape nodes that are adhered to one or more parcels containing respective assets.
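The three-tier hierarchy described above can be summarized in a small table mapping each tape-node class to its protocol and hierarchy level. The class names and protocol pairings follow the text; the numeric levels and the uplink-target convention are illustrative assumptions.

```python
# A minimal sketch of the three-tier hierarchy: long range nodes at the top
# (level 0), medium range in the middle, short range at the bottom.

NODE_CLASSES = {
    "long":   {"level": 0, "protocol": "cellular"},      # gateway nodes (top)
    "medium": {"level": 1, "protocol": "LoRa"},          # container/box nodes
    "short":  {"level": 2, "protocol": "Bluetooth LE"},  # parcel nodes (bottom)
}

def uplink_target(node_class):
    """Return the class one level up, i.e., the node a message is relayed to."""
    level = NODE_CLASSES[node_class]["level"]
    for name, info in NODE_CLASSES.items():
        if info["level"] == level - 1:
            return name
    return None  # long range nodes communicate with the server directly
```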
In order to conserve power, the tape nodes typically communicate according to a schedule promulgated by the server404of the network service408. The schedule usually dictates all aspects of the communication, including the times when particular tape nodes should communicate, the mode of communication, and the contents of the communication. In one example, the server404transmits programmatic Global Scheduling Description Language (GSDL) code to the master tape node and each of the lower-level tape nodes in the designated set. In this example, execution of the GSDL code causes each of the tape nodes in the designated set to connect to the master tape node at a different respective time that is specified in the GSDL code, and to communicate a respective set of one or more data packets of one or more specified types of information over the respective connection. In some examples, the master tape node simply forwards the data packets to the server network node404, either directly or indirectly through a gateway tape node (e.g., the long range tape node416adhered to the mobile vehicle412or the long range tape node414adhered to an infrastructure component of the environment400). In other examples, the master tape node processes the information contained in the received data packets and transmits the processed information to the server network node404. FIG.9shows an example method of creating a hierarchical communications network. In accordance with this method, a first tape node is adhered to a first asset in a set of associated assets, the first tape node including a first type of wireless communication interface and a second type of wireless communication interface having a longer range than the first type of wireless communication interface (FIG.9, block490). 
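The schedule-driven communication described above, in which each lower-level tape node connects to the master at its own assigned time and sends packets of specified types, might behave like the following sketch. "GSDL" is the patent's own term; this dictionary encoding of a schedule, the slot times, and the packet types are purely illustrative assumptions.

```python
# Hedged sketch of executing a promulgated schedule: every node whose time
# slot has arrived delivers its specified packet types to the master node.

schedule = [
    {"node": "node442", "slot_s": 0,  "packets": ["temperature"]},
    {"node": "node444", "slot_s": 30, "packets": ["temperature", "shock"]},
    {"node": "node446", "slot_s": 60, "packets": ["location"]},
]

def run_schedule(schedule, now_s, inbox):
    """Deliver packets from every node whose slot time has arrived."""
    for entry in schedule:
        if entry["slot_s"] <= now_s:
            for kind in entry["packets"]:
                inbox.append((entry["node"], kind))

inbox = []                 # packets received by the master tape node
run_schedule(schedule, now_s=45, inbox=inbox)  # slots at 0 s and 30 s fire
```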
A second tape node is adhered to a second asset in the set, the second tape node including the first type of wireless communication interface, wherein the second tape node is operable to communicate with the first tape node over a wireless communication connection established between the first type of wireless communication interfaces of the first and second tape nodes (FIG.9, block492). An application executing on a computer system (e.g., a server404of a network service408) establishes a wireless communication connection with the second type of wireless communication interface of the first tape node, and the application transmits programmatic code executable by the first tape node to function as a master tape node with respect to the second tape node (FIG.9, block494). In other embodiments, the second tape node is assigned the role of the master node of the first tape node.
Distributed Agent Operating System
As used herein, the term "node" refers to both a tape node and a non-tape node (i.e., a node or wireless device that is not an adhesive tape platform) unless the node is explicitly designated as a "tape node" or a "non-tape node." In some embodiments, a non-tape node may have the same or similar communication, sensing, processing and other functionalities and capabilities as the tape nodes described herein, except without being integrated into a tape platform. In some embodiments, non-tape nodes can interact seamlessly with tape nodes. Each node may be assigned a respective unique identifier, according to some embodiments. The following disclosure describes a distributed software operating system that is implemented by distributed hardware nodes executing intelligent agent software to perform various tasks or algorithms.
In some embodiments, the operating system distributes functionalities (e.g., performing analytics on data or statistics collected or generated by nodes) geographically across multiple intelligent agents that are bound to items (e.g., parcels, containers, packages, boxes, pallets, a loading dock, a door, a light switch, a vehicle such as a delivery truck, a shipping facility, a port, a hub, etc.). In addition, the operating system dynamically allocates the hierarchical roles (e.g., master and slave roles) that nodes perform over time in order to improve system performance, such as optimizing battery life across nodes, improving responsiveness, and achieving overall objectives. In some embodiments, optimization is achieved using a simulation environment for optimizing key performance indicators (KPIs). In some embodiments, the nodes are programmed to operate individually or collectively as autonomous intelligent agents. In some embodiments, nodes are configured to communicate and coordinate actions and respond to events. In some embodiments, a node is characterized by its identity, its mission, and the services that it can provide to other nodes. A node's identity is defined by its capabilities (e.g., battery life, sensing capabilities, and communications interfaces). A node's mission (or objective) is defined by the respective program code, instructions, or directives it receives from another node (e.g., a server or a master node) and the actions or tasks that it performs in accordance with that program code, instructions, or directives (e.g., sense temperature every hour and send temperature data to a master node to upload to a server). A node's services define the functions or tasks that it is permitted to perform for other nodes (e.g., retrieve temperature data from a peripheral node and send the received temperature data to the server).
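The identity/mission/services decomposition described above can be modeled as a small data structure. The class and field contents mirror the examples in the text, but the dataclass itself is an illustrative assumption, not the disclosed implementation.

```python
# Sketch of an intelligent agent characterized by identity (capabilities),
# mission (directive from a server or master node), and services (tasks it
# is permitted to perform for other nodes).
from dataclasses import dataclass, field

@dataclass
class Agent:
    identity: dict          # capabilities: battery, sensors, radios
    mission: str            # directive received from a server or master node
    services: list = field(default_factory=list)  # tasks offered to peers

    def can_serve(self, request):
        """True if this agent is permitted to perform the requested task."""
        return request in self.services

node = Agent(
    identity={"battery": 0.8, "sensors": ["temperature"], "radios": ["BLE"]},
    mission="sense temperature every hour and send it to the master node",
    services=["report_temperature", "report_battery"],
)
```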
At least for certain tasks, once programmed and configured with their identities, missions, and services, nodes can communicate with one another and request services from and provide services to one another independently of the server. Thus, in accordance with the runtime operating system, every agent knows its programmed objectives, knows which capabilities and resources it needs to fulfill those objectives, and communicates with every other node in proximity to determine whether that node can offer a needed capability. Examples of such capabilities include communicating data to the server, authorizing a transition to a lower power level, taking a temperature reading, sending an alert to a local hub, sending location data, triangulating a location, and determining whether any boxes in the same group have already completed the group objectives. Nodes can be associated with items. Examples of an item include, but are not limited to, a package, a box, a pallet, a container, a truck or other conveyance, infrastructure such as a door, a conveyor belt, a light switch, or a road, or any other thing that can be tracked, monitored, or sensed, or that can transmit data concerning its state or environment. In some examples, a server or a master node may associate the unique node identifiers with the items. Communication paths between tape and/or non-tape nodes may be represented by a graph of edges between the corresponding assets (e.g., a storage unit, truck, or hub). In some embodiments, each node in the graph has a unique identifier. A set of connected edges between nodes is represented by a sequence of the node identifiers that defines a communication path between a set of nodes. Referring toFIG.10A, a node520(Node A) is associated with an asset522(Asset A).
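The graph representation described above, in which edges connect unique node identifiers and a communication path is a sequence of those identifiers, can be sketched with a simple breadth-first search. The search method, edge list, and node names are illustrative assumptions; the text specifies only the representation, not a path-finding algorithm.

```python
# Sketch: communication paths as sequences of node identifiers over a graph
# of edges, found here with breadth-first search (an assumed method).
from collections import deque

def find_path(edges, start, goal):
    """Return a list of node ids from start to goal, or None if unreachable."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Parcel -> pallet master -> truck gateway -> server
edges = [("parcel1", "pallet_master"), ("pallet_master", "truck_gw"),
         ("truck_gw", "server")]
```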
In some embodiments, the node520may be implemented as a tape node that is used to seal the asset522or it may be implemented as a label node that is used to label the asset522; alternatively, the node520may be implemented as a non-tape node that is inserted within the asset522or embedded in or otherwise attached to the interior or exterior of the asset522. In the illustrated embodiment, the node520includes a low power communications interface524(e.g., a Bluetooth Low Energy communications interface). Another node526(Node B), which is associated with another asset530(Asset B), is similarly equipped with a compatible low power communications interface528(e.g., a Bluetooth Low Energy communications interface). In an example scenario, in accordance with the programmatic code stored in its memory, node526(Node B) requires a connection to node520(Node A) to perform a task that involves checking the battery life of Node A. Initially, Node B is unconnected to any other nodes. In accordance with the programmatic code stored in its memory, Node B periodically broadcasts advertising packets into the surrounding area. When the other node520(Node A) is within range of Node B and is operating in a listening mode, Node A will extract the address of Node B and potentially other information (e.g., security information) from an advertising packet. If, according to its programmatic code, Node A determines that it is authorized to connect to Node B, Node A will attempt to pair with Node B. In this process, Node A and Node B determine each other's identities, capabilities, and services. For example, after successfully establishing a communication path532with Node A (e.g., a Bluetooth Low Energy formatted communication path), Node B determines Node A's identity information (e.g., master node), Node A's capabilities include reporting its current battery life, and Node A's services include transmitting its current battery life to other nodes. 
In response to a request from Node B, Node A transmits an indication of its current battery life to Node B. Referring toFIG.10B, a node534(Node C) is associated with an asset535(Asset C). In the illustrated embodiment, the Node C includes a low power communications interface536(e.g., a Bluetooth Low Energy communications interface), and a sensor537(e.g., a temperature sensor). Another node538(Node D), which is associated with another asset540(Asset D), is similarly equipped with a compatible low power communications interface542(e.g., a Bluetooth Low Energy communications interface). In an example scenario, in accordance with the programmatic code stored in its memory, Node D requires a connection to Node C to perform a task that involves checking the temperature in the vicinity of Node C. Initially, Node D is unconnected to any other nodes. In accordance with the programmatic code stored in its memory, Node D periodically broadcasts advertising packets in the surrounding area. When Node C is within range of Node D and is operating in a listening mode, Node C will extract the address of Node D and potentially other information (e.g., security information) from the advertising packet. If, according to its programmatic code, Node C determines that it is authorized to connect to Node D, Node C will attempt to pair with Node D. In this process, Node C and Node D determine each other's identities, capabilities, and services. For example, after successfully establishing a communication path544with Node C (e.g., a Bluetooth Low Energy formatted communication path), Node D determines Node C's identity information (e.g., a peripheral node), Node C's capabilities include retrieving temperature data, and Node C's services include transmitting temperature data to other nodes. In response to a request from Node D, Node C transmits its measured and/or locally processed temperature data to Node D. 
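The advertise/listen/pair exchange in the two scenarios above (Nodes A/B and Nodes C/D) reduces to a common pattern: broadcast an address, pair if authorized, exchange identities and services, then request a service. The toy classes below are illustrative assumptions; real Bluetooth LE involves GAP/GATT procedures that are reduced here to plain method calls.

```python
# Toy sketch of the advertise/pair/request flow between two nodes.

class Node:
    def __init__(self, address, identity, services, authorized_peers):
        self.address = address
        self.identity = identity
        self.services = services          # e.g., {"temperature": callable}
        self.authorized = authorized_peers
        self.peers = {}

    def advertise(self):
        """Broadcast an advertising packet containing this node's address."""
        return {"address": self.address}

    def on_advertisement(self, packet, sender):
        """Pair with the advertiser if authorized to connect to it."""
        if packet["address"] in self.authorized:
            self.peers[packet["address"]] = sender
            sender.peers[self.address] = self

    def request(self, peer_address, service):
        """Ask a paired peer to perform one of its offered services."""
        peer = self.peers[peer_address]
        return peer.services[service]()

node_c = Node("C", "peripheral", {"temperature": lambda: 21.5}, {"D"})
node_d = Node("D", "master", {}, {"C"})
node_c.on_advertisement(node_d.advertise(), node_d)  # C hears D and pairs
temp = node_d.request("C", "temperature")            # D reads C's sensor
```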
Referring toFIG.10C, a pallet550is associated with a master node551that includes a low power communications interface552, a GPS receiver554, and a cellular communications interface556. In some embodiments, the master node551may be implemented as a tape node or a label node that is adhered to the pallet550. In other embodiments, the master node551may be implemented as a non-tape node that is inserted within the body of the pallet550or embedded in or otherwise attached to the interior or exterior of the pallet550. The pallet550provides a structure for grouping and containing assets559,561,563each of which is associated with a respective peripheral node558,560,562(Node E, Node F, and Node G). Each of the peripheral nodes558,560,562includes a respective low power communications interface564,566,568(e.g., Bluetooth Low Energy communications interface). In the illustrated embodiment, each of the nodes E, F, G and the master node551are connected to each of the other nodes over a respective low power communications path (shown by dashed lines). In some embodiments, the assets559,561,563are grouped together because they are related. For example, the assets559,561,563may share the same shipping itinerary or a portion thereof. In an example scenario, the master node551scans for advertising packets that are broadcasted from the peripheral nodes558,560,562. In some examples, the peripheral nodes broadcast advertising packets during respective scheduled broadcast intervals. The master node551can determine the presence of the assets559,561,563in the vicinity of the pallet550based on receipt of one or more advertising packets from each of the nodes E, F, and G. In some embodiments, in response to receipt of advertising packets broadcasted by the peripheral nodes558,560,562, the master node551transmits respective requests to the server to associate the master node551and the respective peripheral nodes558,560,562.
In some examples, the master tape node requests authorization from the server to associate the master tape node and the peripheral tape nodes. If the corresponding assets559,561,563are intended to be grouped together (e.g., they share the same itinerary or certain segments of the same itinerary), the server authorizes the master node551to associate the peripheral nodes558,560,562with one another as a grouped set of assets. In some embodiments, the server registers the master node and peripheral tape node identifiers with a group identifier. The server also may associate each node ID with a respective physical label ID that is affixed to the respective asset. In some embodiments, after an initial set of assets is assigned to a multi-asset group, the master node551may identify that another asset has arrived in the vicinity of the multi-asset group. The master node may request authorization from the server to associate the other asset with the existing multi-asset group. If the server determines that the other asset is intended to ship with the multi-asset group, the server instructs the master node to merge the one or more other assets with the currently grouped set of assets. After all assets are grouped together, the server authorizes the multi-asset group to ship. In some embodiments, this process may involve releasing the multi-asset group from a containment area (e.g., a customs holding area) in a shipment facility. In some embodiments, the peripheral nodes558,560,562include environmental sensors for obtaining information regarding environmental conditions in the vicinity of the associated assets559,561,563. Examples of such environmental sensors include temperature sensors, humidity sensors, acceleration sensors, vibration sensors, shock sensors, pressure sensors, altitude sensors, light sensors, and orientation sensors.
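The grouping flow described above, in which the master node asks the server to associate peripheral nodes and the server authorizes the group only if the assets share an itinerary, might look like the following sketch. The server stub, itinerary comparison, and all identifiers are illustrative assumptions.

```python
# Sketch: server-side authorization of a multi-asset group, granted only
# when every peripheral node shares the master node's itinerary.

class Server:
    def __init__(self, itineraries):
        self.itineraries = itineraries    # node id -> itinerary id
        self.groups = {}                  # group id -> member node ids

    def authorize_group(self, group_id, master_id, peripheral_ids):
        """Register the group only if every node shares the master's itinerary."""
        route = self.itineraries[master_id]
        if all(self.itineraries[p] == route for p in peripheral_ids):
            self.groups[group_id] = [master_id] + list(peripheral_ids)
            return True
        return False

server = Server({"master551": "route7", "nodeE": "route7",
                 "nodeF": "route7", "nodeG": "route9"})
ok = server.authorize_group("pallet550", "master551", ["nodeE", "nodeF"])
bad = server.authorize_group("pallet550b", "master551", ["nodeG"])
```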
In the illustrated embodiment, the master node551can determine its own location based on geolocation data transmitted by a satellite-based radio navigation system570(e.g., GPS, GLONASS, and NAVSTAR) and received by the GPS receiver554component of the master node551. In an alternative embodiment, the location of the master pallet node551can be determined using cellular based navigation techniques that use mobile communication technologies (e.g., GSM, GPRS, CDMA, etc.) to implement one or more cell-based localization techniques. After the master node551has ascertained its location, the distance of each of the assets559,561,563from the master node551can be estimated based on the average signal strength of the advertising packets that the master node551receives from the respective peripheral node. The master node551can then transmit its own location and the locations of the asset nodes E, F, and G to a server over a cellular interface connection with a cell tower572. Other methods of determining the distance of each of the assets559,561,563from the master node551, such as Received Signal-Strength Index (RSSI) based indoor localization techniques, also may be used. In some embodiments, after determining its own location and the locations of the peripheral nodes, the master node551reports the location data and the collected and optionally processed (e.g., either by the peripheral nodes558,560,562or the master node551) sensor data to a server over a cellular communication path571on a cellular network572. In some examples, nodes are able to autonomously detect logistics execution errors if assets that are supposed to travel together no longer travel together, and raise an alert. For example, a node (e.g., the master node551or one of the peripheral nodes558,560,562) alerts the server when the node determines that a particular asset559is being or has already been improperly separated from the group of assets.
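Estimating distance from the average advertising-packet signal strength, as described above, is commonly done with a log-distance path-loss model. The patent does not specify a particular model, so the reference power (RSSI at 1 m) and path-loss exponent below are assumed calibration values for illustration only.

```python
# Hedged sketch of RSSI-based distance estimation using the standard
# log-distance path-loss model: rssi = rssi_1m - 10 * n * log10(d).

def estimate_distance_m(avg_rssi_dbm, rssi_at_1m_dbm=-60.0, path_loss_n=2.0):
    """Invert the log-distance model to recover distance in meters."""
    return 10 ** ((rssi_at_1m_dbm - avg_rssi_dbm) / (10.0 * path_loss_n))

# Average advertising-packet RSSI seen by the master node per peripheral.
avg_rssi = {"nodeE": -60.0, "nodeF": -80.0}
distances = {n: estimate_distance_m(r) for n, r in avg_rssi.items()}
```

In practice the exponent varies with the environment (roughly 2 in free space, higher indoors), which is one reason RSSI-based localization typically requires per-site calibration.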
The node may determine that there has been an improper separation of the particular asset559in a variety of ways. For example, the associated node558that is bound to the particular asset559may include an accelerometer that generates a signal in response to movement of the asset from the pallet. In accordance with its intelligent agent program code, the associated node558determines that the master node551has not disassociated the particular asset559from the group and therefore broadcasts advertising packets to the master node551. The master node551monitors the average signal strength of these advertising packets and, if it determines that the signal strength is decreasing over time, issues an alert either locally (e.g., through a speaker component of the master node551) or to the server. Referring toFIG.10D, a truck580is configured as a mobile node or mobile hub that includes a cellular communications interface582, a medium power communications interface584, and a low power communications interface586. The communications interfaces582-586may be implemented on one or more tape and non-tape nodes. In an illustrative scenario, the truck580visits a storage facility, such as a warehouse588, to wirelessly obtain temperature data generated by temperature sensors in the medium range nodes590,592,594. The warehouse588contains nodes590,592, and594that are associated with respective assets591,593,595. In the illustrated embodiment, each node590-594is a medium range node that includes a respective medium power communications interface596,602,608, a respective low power communications interface598,604,610and one or more respective sensors600,606,612. In the illustrated embodiment, each of the asset nodes590,592,594and the truck580is connected to each of the other ones of the asset nodes through a respective medium power communications path (shown by dashed lines).
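The improper-separation check described above (the master node watching a still-associated asset's advertising-packet signal strength fade over time) might look like the following sketch. The strictly-decreasing test, window size, and alert format are illustrative assumptions.

```python
# Sketch: raise an alert when a still-associated asset's advertising RSSI
# is steadily decreasing, i.e., the asset appears to be drifting away.

def is_separating(rssi_history, min_samples=3):
    """True if each successive RSSI reading is weaker than the last."""
    if len(rssi_history) < min_samples:
        return False
    return all(b < a for a, b in zip(rssi_history, rssi_history[1:]))

def check_asset(asset_id, associated, rssi_history, alerts):
    """Alert if a still-associated asset's signal is fading away."""
    if associated and is_separating(rssi_history):
        alerts.append({"asset": asset_id, "alert": "improper_separation"})

alerts = []
check_asset("asset559", True, [-55, -62, -70, -78], alerts)   # drifting away
check_asset("asset561", True, [-60, -59, -61, -60], alerts)   # stable
```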
In some embodiments, the medium power communications paths are LoRa formatted communication paths. In some embodiments, the communications interfaces584and586(e.g., a LoRa communications interface and a Bluetooth Low Energy communications interface) on the node on the truck580are programmed to broadcast advertisement packets to establish connections with other network nodes within range of the truck node. A warehouse588includes medium range nodes590,592,594that are associated with respective containers591,593,595(e.g., assets, boxes, pallets, and the like). When the truck node's low power interface586is within range of any of the medium range nodes590,592,594and one or more of the medium range nodes is operating in a listening mode, the medium range node will extract the address of the truck node and potentially other information (e.g., security information) from the advertising packet. If, according to its programmatic code, the truck node determines that it is authorized to connect to one of the medium range nodes590,592,594, the truck node will attempt to pair with the medium range node. In this process, the truck node and the medium range node determine each other's identities, capabilities, and services. For example, after successfully establishing a communication path with the truck node (e.g., a Bluetooth Low Energy formatted communication path614or a LoRa formatted communication path617), the truck node determines the identity information for the medium range node590(e.g., a peripheral node), the medium range node's capabilities include retrieving temperature data, and the medium range node's services include transmitting temperature data to other nodes. Depending on the size of the warehouse588, the truck580initially may communicate with the nodes590,592,594using a low power communications interface (e.g., Bluetooth Low Energy interface).
If any of the anticipated nodes fails to respond to repeated broadcasts of advertising packets by the truck580, the truck580will try to communicate with the non-responsive nodes using a medium power communications interface (e.g., LoRa interface). In response to a request from the truck node, the medium range node590transmits an indication of its measured temperature data to the truck node. The truck node repeats the process for each of the other medium range nodes592,594that generate temperature measurement data in the warehouse588. The truck node reports the collected (and optionally processed, either by the medium range nodes590,592,594or the truck node) temperature data to a server over a cellular communication path616with a cellular network618. Referring toFIG.10E, a master node630is associated with an item632(e.g., an asset) and grouped together with other items634,636(e.g., assets) that are associated with respective peripheral nodes638,640. The master node630includes a GPS receiver642, a medium power communications interface644, one or more sensors646, and a cellular communications interface648. Each of the peripheral nodes638,640includes a respective medium power communications interface650,652and one or more respective sensors654,656. In the illustrated embodiment, the peripheral and master nodes are connected to one another over respective pairwise communications paths (shown by dashed lines). In some embodiments, the nodes630,638,640communicate through respective LoRa communications interfaces over LoRa formatted communications paths658,660,662. In the illustrated embodiment, the master and peripheral nodes630,638,640include environmental sensors for obtaining information regarding environmental conditions in the vicinity of the associated assets632,634,636.
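The radio-fallback behavior described above, where the truck node first tries its low power (Bluetooth LE) interface and retries non-responsive nodes over its medium power (LoRa) interface, can be sketched as follows. The reachable-node sets and the temperature lookup are illustrative assumptions.

```python
# Sketch: query each warehouse node over BLE first, fall back to LoRa for
# non-responders, and record any node that answers on neither radio.

def collect_temperatures(nodes, ble_reachable, lora_reachable, read_temp):
    """Query each node over BLE, falling back to LoRa; note failures."""
    results = {}
    for node in nodes:
        if node in ble_reachable:
            results[node] = ("ble", read_temp(node))
        elif node in lora_reachable:
            results[node] = ("lora", read_temp(node))
        else:
            results[node] = ("unreachable", None)
    return results

temps = {"node590": 4.5, "node592": 5.0, "node594": 3.8}
results = collect_temperatures(
    nodes=["node590", "node592", "node594"],
    ble_reachable={"node590"},              # close to the loading dock
    lora_reachable={"node590", "node592"},  # deeper inside the warehouse
    read_temp=temps.get,
)
```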
Examples of such environmental sensors include temperature sensors, humidity sensors, acceleration sensors, vibration sensors, shock sensors, pressure sensors, altitude sensors, light sensors, and orientation sensors. In accordance with the programmatic code stored in its memory, the master node630periodically broadcasts advertising packets in the surrounding area. When the peripheral nodes638,640are within range of master node630, and are operating in a listening mode, the peripheral nodes638,640will extract the address of master node630and potentially other information (e.g., security information) from the advertising packets. If, according to their respective programmatic code, the peripheral nodes638,640determine that they are authorized to connect to the master node630, the peripheral nodes638,640will attempt to pair with the master node630. In this process, the master node630and the peripheral nodes638,640determine each other's identities, capabilities, and services. For example, after successfully establishing a respective communication path658,660with each of the peripheral nodes638,640(e.g., a LoRa formatted communication path), the master node630determines certain information about the peripheral nodes638,640, such as their identity information (e.g., peripheral nodes), their capabilities (e.g., measuring temperature data), and their services (e.g., transmitting temperature data to other nodes). After establishing LoRa formatted communications paths658,660with the peripheral nodes638,640, the master node630transmits requests for the peripheral nodes638,640to transmit their measured and/or locally processed temperature data to the master node630. In the illustrated embodiment, the master node630can determine its own location based on geolocation data transmitted by a satellite-based radio navigation system666(e.g., GPS, GLONASS, and NAVSTAR) and received by the GPS receiver642component of the master node630.
In an alternative embodiment, the location of the master node630can be determined using cellular based navigation techniques that use mobile communication technologies (e.g., GSM, GPRS, CDMA, etc.) to implement one or more cell-based localization techniques. After the master node630has ascertained its location, the distance of each of the assets634,636from the master node630can be estimated based on the average signal strength of the advertising packets that the master node630receives from the respective peripheral node. The master node630can then transmit its own location and the locations of the peripheral nodes638,640to a server over a cellular interface connection with a cell tower672. Other methods of determining the distance of each of the assets634,636from the master node630, such as Received Signal-Strength Index (RSSI) based indoor localization techniques, also may be used. In some embodiments, after determining its own location and the locations of the peripheral nodes, the master node630reports the location data and the collected and optionally processed (e.g., either by the peripheral nodes638,640or the master node630) sensor data to a server over a cellular communication path670on a cellular network672.
Spreading Out Electronics
Disclosed herein is a wireless tracking device for tracking assets. In certain situations, tracking devices attached to assets may be exposed to physical damage or trauma. In particular, in environments related to shipping, storage, industrial machinery, logistics, or other environments where asset tracking is useful, the assets being tracked may be handled roughly or interact with machinery that can damage parts of the asset or tracking devices that are attached to the assets.
Examples of physical damage may include the asset being dropped, a large item or object being dropped on top of the asset, a nail or other hardware (e.g., screw, bolt) penetrating the asset, machinery (e.g., a crane) impacting the asset during transport, strain from the asset holding a large amount of weight, exposure to heat, exposure to radiation, exposure to extreme cold, exposure to moisture, exposure to other weather conditions, and other forms of physical damage. In one example, the asset being tracked is a pallet for transporting and/or storing items. The pallet may include a wood material, a plastic material, a metal material (e.g., steel), some other material, or some combination thereof. The disclosed wireless tracking device includes a printed circuit board (PCB) and/or other components that are resilient to physical damage. The PCB of the resilient tape node may be a flexible PCB. When referring to a "PCB" herein, a flexible PCB may be an included embodiment. In some embodiments, the wireless tracking device is an embodiment of the adhesive tape platform described above with respect toFIGS.1A-6C. The wireless tracking device (also referred to herein as a "resilient tape node") is a member of the wireless tracking system400described above, with respect toFIG.7. The resilient tape node has an adhesive tape form factor, according to some embodiments, and is configured to be adhered to a portion of the asset being tracked. As mentioned above, when the resilient tape node is tracking the asset, the resilient tape node may be physically damaged. However, the resilient tape node is configured to continue functioning and performing asset tracking even after receiving significant physical damage and/or trauma.
One issue with conventional tracking devices is that if one or more of the conductive traces of a PCB of a conventional tracking device is damaged, the tracking device may no longer be operational due to, for example, an open circuit caused by the damage. Another issue may be that an electronic component may malfunction due to physical damage or trauma. An example of an electronic component may include a microcontroller, a processor, a memory, an energy storage device, a sensor, a communication system, some other electronic component, or some combination thereof. In some embodiments, the resilient tape node includes redundant traces or sub-traces. If one or more redundant traces or sub-traces is damaged, one of the other redundant traces that is not damaged may still provide a connection between two or more components on the PCB. In some embodiments, portions of the resilient tape node that are more likely to be exposed to physical damage include redundant traces or sub-traces, and other portions of the resilient tape node that are less likely to be exposed to physical damage do not include the redundant traces or sub-traces. The resilient tape node may also include redundant electronic components. According to some embodiments, the resilient tape node includes multiple copies of the same electronic component and corresponding traces. When one of the electronic components is damaged and malfunctioning, the resilient tape node deactivates the damaged component and switches to using one of the other copies of the component. Thus, the resilient tape node may receive significant physical damage or trauma and still continue to function. FIG. 11A is a diagrammatic view of a pallet 1100 retrofitted with a tracking device 1120, in accordance with some embodiments. The tracking device 1120 is an embodiment of the adhesive tape platform 12, shown in FIGS. 1-6C, according to some embodiments. The tracking device 1120 may also be an embodiment of a wireless tracking belt.
For example, the wireless tracking belt may be a belt that includes hook and loop fasteners, such that the belt may fasten to itself to secure the wireless tracking belt to an object that needs to be tracked. The wireless tracking belt may include the wireless transducing circuit 70, according to some embodiments. The pallet 1100 comprises top and bottom sets of deck boards 1105 and a set of support poles (also referred to as "stringers") 1110, including a center support pole 1110A and at least a support pole 1110B, 1110C at each side of the pallet. In some embodiments, the pallet 1100 may have additional or fewer deck boards 1105 and support poles 1110 than shown in FIG. 11A, and the deck boards and support poles may be oriented or shaped differently than shown in FIG. 11A. The pallet 1100 may include additional, different, or fewer components than shown in the diagram of FIG. 11A, such as blocks, notches, additional boards, and other components. For example, the pallet 1100 may be a plastic pallet wherein the deck boards 1105 are a single component, or the pallet may have smaller or larger gaps between the deck boards and/or the support poles 1110. The tracking device 1120 is looped around the center support pole 1110A (also referred to as the "center stringer") of the pallet 1100 and positioned such that electronic components of the tracking device are oriented outward along the tracking device, in some embodiments. In some embodiments, nails, screws, bolts, or other hardware are used to secure the tracking device 1120 to the pallet 1100. This may result in punctures, tears, or other damage to the wireless tracking device 1120. In other embodiments, the tracking device 1120 is secured to the pallet 1100 using other means (e.g., with hook and loop fasteners on the tracking device 1120 or by an adhesive), but the tracking device 1120 still experiences damage.
For example, nails or screws may be used to secure other objects to the pallet 1100 and incidentally damage the wireless tracking device, nails or screws may be used to repair a portion of the pallet 1100 and incidentally damage the wireless tracking device, or other damage may be inflicted to the wireless tracking device in the course of tracking the pallet 1100. FIG. 11B shows a close-up view 1103 of the pallet 1100 where the wireless tracking device 1120 is secured to the pallet 1100 on the center support pole 1110A. The wireless tracking device 1120 has experienced damage while tracking the pallet 1100, in the form of nails 1130 that have punctured the wireless tracking device 1120. The wireless tracking device 1120 is configured to be resilient to the physical damage or trauma and includes resilient circuit board traces that enable the wireless tracking device 1120 to continue functioning even when a conductive trace of a circuit in the wireless tracking device 1120 is damaged or broken. The resilient circuit board traces are discussed in further detail below with respect to FIGS. 12A-20. FIG. 12A is a diagram showing a portion of a printed circuit board 1201 including a conductive trace 1210 between two components 1220, 1230 that includes a plurality of sub-traces 1212A, 1212B, 1212C, 1212D, according to some embodiments. The plurality of sub-traces 1212A, 1212B, 1212C, 1212D are collectively referred to as the "sub-traces 1212," herein. The portion of the PCB 1201 is part of the resilient tape node, and in some embodiments, the portion of the PCB 1201 is a portion of a flexible PCB. The trace 1210 includes the redundant sub-traces 1212, such that if one of the sub-traces is broken or damaged during the tracking of an asset, the trace 1210 will still electrically connect the component 1220 to the component 1230. For example, if the tape node is penetrated by a nail during tracking of an asset (e.g., a pallet), the nail may break or otherwise damage one or more of the sub-traces 1212.
As long as at least one of the sub-traces 1212 is not broken or damaged, the components 1220 and 1230 remain electrically connected to each other. According to some embodiments, the trace 1210 may include portions that do not include the redundant sub-traces 1212. For example, the trace 1210 may include a portion that is a simple trace without the parallel redundant sub-traces. The portion may lead or be connected to a portion with the redundant sub-traces, as shown with the portion of the trace 1210 that is on the left side of FIG. 12A near the contact point 1235. In further embodiments, the portion of the trace 1210 that does not include the redundant sub-traces 1212 may be positioned in areas of the tape node that are less susceptible to physical damage than the areas of the tape node that include the portions of the trace 1210 including the redundant sub-traces 1212. Thus, the design and layout of the traces may be simplified in areas of the tape node where the PCB is exposed to less risk of physical damage. Although only two components 1220, 1230 are shown being connected by the trace 1210 in FIG. 12A, the PCB and the trace 1210 are not limited thereto, and, in other embodiments, the trace 1210 may connect more than two components. Further, the PCB and the trace 1210 may include a different configuration and/or number of elements than shown in FIG. 12A. For example, the trace 1210 may include a different number, shape, or size of sub-traces 1212. The components 1220, 1230 are components of the wireless transducing circuit 70 shown in FIG. 3, according to some embodiments. The components 1220, 1230 may include an antenna, a communication circuit or system, an energy storage component (e.g., a battery), a memory, a processor, a microcontroller, an integrated circuit, an LED, a micro LED, a sensor (e.g., a photo sensor), a display, some other electronic component, or some combination thereof. FIG. 12B is a diagram showing the portion of the PCB 1201 in a state 1203 after it has experienced physical damage 1240A, 1240B.
For example, the PCB 1201 may have been punctured by one or more nails. Although the damage 1240A, 1240B overlaps the trace 1210, the connection between the contact points 1235, 1245 is not disrupted. FIG. 13A is a diagram showing a portion of a printed circuit board 1301 including a large size conductive trace 1310 between two components 1330, 1340 that is configured to withstand physical damage, according to some embodiments. The portion of the PCB 1301 is included in an embodiment of the resilient tape node. The large trace 1310 is configured to receive physical damage to a portion of the large trace 1310 and still electrically connect the component 1330 to the component 1340. For example, if a nail or other hardware punctures or pierces the PCB at a region overlapping the large trace 1310, the large trace 1310 will not be completely severed, as long as the puncture hole has a diameter that is less than the width of the large trace 1310. In some embodiments, the large trace has a width W1 that is larger than a threshold width. For example, the large trace may have a width that is larger than 1 cm, according to some embodiments. In some embodiments, the width W1 corresponds to a type of physical damage risk that is associated with a task that the resilient tape node is assigned to. For example, if the resilient tape node is assigned to track a pallet that is likely to have one or more nails puncturing the resilient tape node, the width W1 may be greater than a threshold width that corresponds to a diameter of the nails used with the pallet. FIG. 13A also shows a smaller trace 1320 that has a width W2 that is smaller than the width W1. The width W2 may correspond to a normal trace size comparable to a conventional PCB conductive trace. The smaller trace 1320 is positioned in an area of the tape node that has a lower risk of damage than an area where the large trace 1310 is positioned.
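The width-selection rule described above can be sketched as a simple check: a wide trace survives a puncture whose hole is narrower than the trace, so the design width W1 is chosen from the largest puncture expected for the assigned task. This is a minimal illustrative sketch, not the claimed design procedure; the function names and the safety margin are assumptions for illustration.

```python
def trace_survives_puncture(trace_width_mm, puncture_diameter_mm):
    """A wide trace (e.g., trace 1310 in FIG. 13A) remains electrically
    continuous as long as the puncture hole is narrower than the trace."""
    return puncture_diameter_mm < trace_width_mm


def minimum_trace_width(expected_puncture_mm, margin_mm=1.0):
    """Choose W1 from the damage risk of the assigned task: wider than the
    largest expected puncture plus a safety margin (margin is an assumed
    value, not taken from the disclosure)."""
    return expected_puncture_mm + margin_mm


# A 3 mm nail through a 10 mm-wide trace leaves a conductive path around the hole
assert trace_survives_puncture(10.0, 3.0)
# A hole wider than the trace severs it completely
assert not trace_survives_puncture(10.0, 12.0)
```

A narrower trace, such as the conventional-width trace 1320, would fail the same check for the same nail, which is why it is routed through lower-risk areas.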
The components 1330, 1340 shown in FIG. 13A are substantially similar to the components 1220, 1230 described above with respect to FIG. 12A. FIG. 13B is a diagram showing the portion of the PCB 1301 in a state 1303 after it has experienced physical damage 1350. For example, the PCB 1301 may have been punctured by a nail. Although the damage 1350 overlaps the trace 1310, the connection between the contact points 1335, 1345 is not disrupted. FIG. 14A is a diagram showing a portion of a printed circuit board 1401 including a plurality of redundant traces 1410A, 1410B, 1410C, 1410D, 1410E connecting two components 1420, 1430, according to some embodiments. The plurality of redundant traces 1410A, 1410B, 1410C, 1410D, 1410E are collectively referred to as the "redundant traces 1410," herein. The redundant traces 1410 transmit the same electrical signals between the components 1420, 1430. If one of the redundant traces 1410 is broken or damaged, the connection between the components 1420, 1430 is preserved with the signal being carried by one of the other unbroken redundant traces 1410. In some embodiments, the redundant traces 1410 all connect to the same contact or contact pad on the PCB, i.e., the redundant traces are connected in parallel at one or more points on the PCB. In other embodiments, one or more of the components 1420, 1430 includes redundant contacts or ports for receiving the signal carried by the redundant traces 1410. For example, if the component 1420 is an integrated circuit, each of the redundant traces 1410 may connect to a different redundant port, pin, or contact on the component 1420. Each redundant port on the component 1420 may be configured to receive the same signal. The redundant ports on the component 1420 may internally be connected in parallel, according to some embodiments. Other than embodiments where the components 1420, 1430 include redundant ports, pins, or contacts, the components 1420, 1430 shown in FIG. 14A are substantially similar to the components 1220, 1230 described above with respect to FIG. 12A.
FIG. 14B is a diagram showing the portion of the PCB 1401 in a state 1403 after it has experienced physical damage 1440. For example, the PCB 1401 may have been punctured by a nail. Although the damage 1440 overlaps one of the redundant traces 1410A, the connection between the components 1420, 1430 is maintained by the other redundant traces 1410B-1410E. In some embodiments, a resilient tape node includes a PCB with a percentage of metal in the PCB. The percentage of metal in the PCB may be adjusted based on an expected amount of damage that the resilient tape node will receive. In further embodiments, a resilient tape node's PCB includes a percentage of metal that is above a threshold percentage. In some embodiments, portions of the PCB exposed to physical damage may have different percentages of metal than other portions. In other embodiments, traces on the PCB include a plurality of metals. The percentages of each metal included in the traces may be similarly adjusted based on an expected amount of damage that the resilient tape node will receive. For example, a percentage of nickel included in the traces may be above a threshold percentage (e.g., 40%) based on the resilient tape node being exposed to physical damage. This may be done if one of the plurality of metals is more resilient against or less susceptible to breaking. In some embodiments, the percentages of each metal are different in a portion of the resilient tape node that is exposed to physical damage than in other portions. Detecting Damage to Conductive Traces FIG. 15 is a diagram showing an example portion of a printed circuit board 1501 including a trace 1510 between two components 1520, 1530 that includes two sub-traces 1512A, 1512B, according to some embodiments. A resilient tape node includes the portion of the PCB 1501. The two sub-traces 1512A, 1512B may collectively be referred to as the "sub-traces 1512," herein.
The portion of the PCB 1501 shown in FIG. 15 is an embodiment of the portion of the PCB 1201 shown in FIG. 12A that has only two sub-traces 1512. Although the portion of the PCB 1501 includes only two sub-traces 1512, the resilient tape node may include embodiments of the trace 1510 that include a different number of sub-traces 1512 and is not limited thereto. The resilient tape node that includes the portion of the PCB 1501 is configured to detect when one of the sub-traces 1512 is damaged or broken. FIGS. 16A-16C are circuit diagrams corresponding to various states 1601, 1602, 1603 for the circuit included in the example portion of the printed circuit board 1501 shown in FIG. 15, according to some embodiments. FIG. 16A shows an undamaged state 1601 of the circuit. As shown in FIG. 15, the components 1520, 1530 are connected by the redundant sub-traces 1512. The trace 1510 connects to the component 1520 at the contact point 1525 and connects to the component 1530 at the contact point 1535. The sub-trace 1512A and the sub-trace 1512B are connected in parallel. The electrical resistance of the sub-trace 1512A is represented by R1. The electrical resistance of the sub-trace 1512B is represented by R2. In the undamaged state, the equivalent resistance between the contact points 1525, 1535 is (R1·R2)/(R1+R2). FIG. 16B shows a damaged state 1602 of the circuit where the sub-trace 1512A is broken or punctured at some portion of the sub-trace 1512A. The sub-trace 1512A is completely broken to the point that no electrical current can pass through the sub-trace 1512A. FIG. 16C shows an alternate damaged state 1603 of the circuit where the sub-trace 1512B is broken or punctured at some portion of the sub-trace 1512B. The sub-trace 1512B is completely broken to the point that no electrical current can pass through the sub-trace 1512B. In either damaged state 1602, 1603, the equivalent resistance between the contact points 1525, 1535 will increase compared to the undamaged state 1601.
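The parallel-resistance relationship behind the three states of FIGS. 16A-16C can be worked through numerically. The following is a minimal sketch (the function name and the example resistance values are illustrative assumptions): a broken sub-trace is modeled as an open circuit, so losing either sub-trace raises the equivalent resistance measured between the contact points 1525, 1535.

```python
def parallel_resistance(resistances):
    """Equivalent resistance of traces connected in parallel.

    A broken trace carries no current (infinite resistance) and is
    modeled as None, so it contributes zero conductance.
    """
    conductance = sum(1.0 / r for r in resistances if r is not None)
    if conductance == 0:
        return float("inf")  # all sub-traces broken: open circuit
    return 1.0 / conductance


# Undamaged state 1601: both sub-traces intact, e.g. R1 = R2 = 2 ohms
r_intact = parallel_resistance([2.0, 2.0])    # (R1*R2)/(R1+R2) = 1.0 ohm

# Damaged state 1602: sub-trace 1512A broken, current flows through 1512B only
r_damaged = parallel_resistance([None, 2.0])  # 2.0 ohms

# Damage is therefore detectable as an increase in equivalent resistance
assert r_damaged > r_intact
```

The same computation generalizes to the traces of FIG. 12A with more than two sub-traces: each additional intact sub-trace lowers the equivalent resistance, and each break raises it.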
Thus, damage to the sub-traces 1512 can be detected by measuring the resistance or impedance between the contact points 1525, 1535. In some embodiments, the circuit includes a resistance or impedance monitoring component connected to the contact points 1525, 1535 to detect damage to the traces. If the monitoring component detects a change (e.g., an increase) in the resistance or impedance, the resilient tape node determines that at least one of the sub-traces 1512 has been damaged. In further embodiments, responsive to this, the resilient tape node transmits a notification to a member of the wireless tracking system 400 indicating that the resilient tape node has received physical damage. In other embodiments, the resistance or impedance between the contact points 1525, 1535 is measured by a human operator or inspection tool during an inspection, renovation, refurbishment, or repair process. In some embodiments, the tape node includes contact pads that are easily accessible from the exterior of the tape node for measuring the resistance or impedance of the trace 1510. FIG. 17 is a flowchart showing a method 1600 of detecting damage to a conductive trace in a printed circuit board, according to some embodiments. At least two components of a resilient tape node are electrically connected 1610 on a PCB by a conductive trace that includes at least two redundant sub-traces connected in parallel. The trace is connected to each component at a respective contact point. The conductive trace may be, for example, an embodiment of the trace 1210 shown in FIG. 12A or an embodiment of the trace 1510 shown in FIG. 15. The initial resistance or impedance across the trace is measured 1620 at two contact points. The resistance or impedance may be measured by a component of the tape node or may be externally measured by a human operator or inspection tool, according to some embodiments. The initial resistance or impedance is stored 1630.
In some embodiments, the initial resistance or impedance is stored on a memory of the resilient tape node. In other embodiments, the initial resistance or impedance is stored on a client device, another tape node, a server, a gateway node, some other member of the wireless tracking system 400, or some combination thereof. In some embodiments, the resilient tape node transmits the initial resistance or impedance to the wireless tracking system 400. In some embodiments, the human operator or inspection tool transmits the initial resistance or impedance (e.g., via a client device connected to the network). The resilient tape node then operates, tracking 1640 one or more assets as described above. During its operation, the resilient tape node may be physically damaged in one way or another. For example, the tape node may receive physical trauma or force from another object. In other examples, a nail penetrates the tape node. After some period of use, a new resistance or impedance is measured 1650 across the trace at the two contact points. Similar methods for measuring the new resistance or impedance may be used as with respect to the step 1620. In response to the new resistance or impedance being higher than the initial resistance or impedance by at least a threshold amount, it is determined 1660 that at least one of the redundant sub-traces has been damaged. In some embodiments, the resilient tape node locally determines that at least one of the redundant sub-traces has been damaged. In other embodiments, the resilient tape node transmits the impedance or resistance measurements to another member of the wireless tracking system 400, and the other member of the wireless tracking system 400 performs the computation for determining 1660 that at least one of the sub-traces is damaged. In some embodiments, a human operator or inspection tool determines 1660 that at least one of the sub-traces is damaged.
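The determining step 1660 described above reduces to a threshold comparison between the stored initial measurement (steps 1620, 1630) and the new measurement (step 1650). The following is a minimal sketch of that comparison; the function name, threshold, and example resistance values are illustrative assumptions rather than values from the disclosure.

```python
def trace_damaged(initial_ohms, measured_ohms, threshold_ohms):
    """Step 1660: report damage when the measured resistance across the
    trace's two contact points exceeds the stored initial resistance by at
    least the threshold amount."""
    return (measured_ohms - initial_ohms) >= threshold_ohms


# Hypothetical values: two parallel sub-traces give an initial 1.0 ohm; a
# broken sub-trace doubles the equivalent resistance to 2.0 ohms.
assert trace_damaged(1.0, 2.0, threshold_ohms=0.5)       # one sub-trace broken
assert not trace_damaged(1.0, 1.05, threshold_ohms=0.5)  # drift within tolerance
```

In the disclosed system this comparison may run locally on the tape node, on another member of the wireless tracking system 400 that receives the measurements, or be performed by a human operator or inspection tool.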
Additionally, a user of the wireless tracking system 400 may be notified that at least one of the sub-traces is damaged. The wireless tracking system 400 may also track the condition of the PCB, the sub-traces, and/or the resilient tape node by logging the data (including the resistance or impedance measurements) on the condition in a database. In some embodiments, if it is determined that a threshold level of damage has occurred, the wireless tracking system 400 flags the resilient tape node for refurbishment or deactivation. For example, if it is determined that all but one of the sub-traces of the trace are damaged, the wireless tracking system 400 may flag the respective resilient tape node and notify a user of the wireless tracking system 400. This may be done to prevent the failure of the resilient tape node during a crucial tracking task. FIG. 18 is a diagram showing an example portion of a printed circuit board 1801 including a plurality of redundant components 1810A, 1810B, 1810C, 1810D and a corresponding plurality of redundant traces 1820A, 1820B, 1820C, 1820D, according to some embodiments. The plurality of redundant components 1810A, 1810B, 1810C, 1810D are referred to herein as "the redundant components 1810." The plurality of redundant traces 1820A, 1820B, 1820C, 1820D are referred to herein as the "redundant traces 1820." The redundant traces 1820 connect each of the redundant components 1810 to the component 1830. In some embodiments, the component 1830 includes a respective redundant port, pin, or contact for receiving a signal from each one of the redundant components 1810. In some embodiments, the redundant traces 1820 all connect to the same port, pin, or contact of the component 1830. In some embodiments, the component 1830 includes a switch or multiplexer for selecting one of the redundant traces 1820. According to some embodiments, the resilient tape node only uses one of the redundant components 1810 at a time.
For example, only one of the redundant components 1810 may be used to minimize power consumption of the circuit. When it is determined that one of the redundant traces 1820 is damaged or broken, the resilient tape node may switch to using another one of the redundant components 1810, according to some embodiments. For example, a resilient tape node may initially use the redundant component 1810A. If it is determined that the redundant trace 1820A is damaged or broken, the resilient tape node may switch to using the redundant component 1810B. In some embodiments, the redundant components 1810 include internal switches for activating or deactivating themselves. In other embodiments, the circuit includes switches for connecting or disconnecting each of the redundant components 1810. The monitor traces 1840A, 1840B connect the redundant traces 1820 to the impedance monitor 1850. The impedance monitor 1850 is a component configured to measure the equivalent resistance or impedance across the redundant traces 1820, according to some embodiments, in order to determine if one or more of the redundant traces 1820 is damaged or broken. Although the impedance monitor 1850 is shown in FIGS. 18 and 19 to be connected to each of the redundant traces 1820 in parallel, in other embodiments the impedance monitor 1850 may have a separate pair of monitor traces for each one of the redundant traces 1820 to individually measure the resistance or impedance of that one of the redundant traces 1820. Based on a change of the resistance or impedance measurements (e.g., using the method 1600 described above) of the redundant traces 1820, it may be determined that one or more of the redundant traces 1820 has been damaged or broken. In some embodiments, each of the redundant traces 1820 may be designed with a corresponding resistance or impedance. One or more of the corresponding resistances or impedances may be different from each other.
In further embodiments, it is determined which of the redundant traces 1820 has been damaged or broken based on a change of the resistance or impedance. For example, if the resistance or impedance of the redundant traces 1820 changes by a first amount (within a threshold of error), it is determined that the first redundant trace 1820A has been damaged or broken. If the resistance or impedance changes by a second amount (within a threshold of error), it is determined that the second redundant trace 1820B has been damaged or broken. FIGS. 19A-19B are schematic diagrams corresponding to the example portion of the printed circuit board 1801 shown in FIG. 18, according to some embodiments. FIG. 19A shows an undamaged state 1901 of the circuit included in the portion of the PCB 1801. In the example shown in FIG. 19A, the redundant component 1810A is being used; however, in other cases, another one of the redundant components (e.g., redundant component 1810D) may be used. The circuit may switch between the redundant components 1810 using the switches 1822A, 1822B, 1822C, 1822D (collectively, "switches 1822"), according to some embodiments. In other embodiments, the switches 1822 are each internal to one of the redundant components 1810. In other embodiments, other methods are used to switch between the redundant components 1810. FIG. 19B shows a damaged state 1902 of the circuit where the redundant trace 1820A has been broken. The impedance monitor 1850 detects a change in the impedance or resistance corresponding to the broken redundant trace 1820A. In response, the resilient tape node switches to using the redundant component 1810B. Although the example of FIG. 19B shows that the circuit has switched to using the redundant component 1810B, in other examples, the circuit switches to another one of the redundant components 1810C, 1810D.
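The switch-over behavior of FIGS. 19A-19B can be sketched as a small state machine: the tape node keeps one redundant component active, and when the impedance monitor reports that the trace feeding the active component is broken, it deactivates that component and activates the next one whose trace is still intact. This is an illustrative sketch only; the class and method names are assumptions, and the strings stand in for the reference numerals of the redundant components 1810.

```python
class RedundantComponentBank:
    """Minimal model of the redundant components 1810 and their switches 1822."""

    def __init__(self, component_ids):
        self.components = list(component_ids)
        self.broken = set()                 # components whose trace is broken
        self.active = self.components[0]    # e.g., 1810A is used first

    def report_broken_trace(self, component_id):
        """Called when the impedance monitor attributes damage to one trace."""
        self.broken.add(component_id)
        if component_id == self.active:
            self._switch_to_next_intact()

    def _switch_to_next_intact(self):
        for candidate in self.components:
            if candidate not in self.broken:
                self.active = candidate
                return
        self.active = None  # all redundant paths lost


bank = RedundantComponentBank(["1810A", "1810B", "1810C", "1810D"])
bank.report_broken_trace("1810A")   # trace 1820A breaks, as in FIG. 19B
assert bank.active == "1810B"       # circuit switches to the next component
```

The same logic applies whether the switching is done by external switches, by switches internal to the components, or by a multiplexer selecting among the redundant traces.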
In some embodiments, the redundant traces 1820 are not connected in parallel and each of the redundant traces 1820 is individually connected to the impedance monitor 1850 by a separate pair of monitor traces. In other embodiments, the resilient tape node switches which of the redundant components 1810 it uses based on detecting damage to one of the redundant components. For example, if the component 1810A is outputting an electrical signal that is unexpected or corresponds to a malfunction, the resilient tape node may switch to using the component 1810B, in response. In the example of FIGS. 19A and 19B, the monitoring trace is connected in parallel to each of the redundant traces 1820A-1820D. However, in other embodiments, the redundant traces (and as a result the components 1810A-1810D) are not connected in parallel. The impedance monitor 1850 may be separately connected to each of the redundant traces 1820A-1820D via independent traces. In further embodiments, the circuit 1901 includes a separate impedance monitor 1850 component connected to each of the redundant traces 1820A-1820D. The wireless tracking device switches between which component 1810A-1810D to activate and use based on a status (broken, damaged, or undamaged) of the redundant traces 1820A-1820D. FIG. 19C shows an alternate embodiment 1903 of the circuit 1901 that includes a multiplexer 1930 in place of the switches 1822A-1822D. In the example shown in FIG. 19C, the circuit 1903 does not include the redundant components 1810B-1810D, but includes a single component 1920 that performs the functions of the component 1810A. The redundant traces 1820A-1820D are connected to the inputs of the multiplexer 1930 and an output of the multiplexer 1930 is connected to the component 1920, with the multiplexer 1930 configured to switch between the redundant traces based on detecting that one or more of the redundant traces is broken or damaged. In other embodiments, the multiplexer may be connected to the multiple components 1810A-1810D, as with the circuit 1901 of FIG. 19A.
FIG. 20 is a flowchart showing a method of switching between redundant components of a circuit, according to some embodiments. The circuit is included in a resilient tape node. The circuit includes a plurality of redundant components including a first redundant component. The first redundant component is activated 2010 and used in the circuit. In some embodiments, the other redundant components, which are functionally similar or are configured to perform the same functions as the first redundant component, are deactivated. The circuit also includes a plurality of redundant traces connecting the redundant components to other parts of the circuit. The redundant traces include a first redundant trace connected to the first redundant component. The initial resistance or impedance across the redundant traces is measured 2020 and stored 2030. The resilient tape node is then used to track 2040 an asset, as described above. During the asset tracking, the resilient tape node may receive physical damage. After some period of use, a new resistance or impedance is measured 2050 across the redundant traces. Responsive to the new resistance or impedance being different (e.g., higher) from the initial resistance or impedance by a first amount, it is determined 2060 that the first redundant trace has been damaged. Responsive to the determining 2060, the resilient tape node deactivates 2070 the first redundant component and activates a second redundant component of the plurality of redundant components. Thus, the resilient tape node does not lose the functionality of the first redundant component even when the resilient tape node is damaged. FIG. 21 shows an alternate embodiment of a resilient wireless tracking device 2110 that includes a region resilient to physical damage, according to some embodiments. The resilient wireless tracking device 2110 is an embodiment of the adhesive tape platform, according to some embodiments.
In other embodiments, the resilient wireless tracking device 2110 is a wireless tracking belt, such as the one discussed above with respect to FIGS. 11A-11B. The resilient wireless tracking device 2110 includes a wireless transducing circuit 2114, which is an embodiment of the wireless transducing circuit 70. The wireless transducing circuit 2114 overlaps one or more regions 2120 and one or more resilient regions 2130. The portions of the wireless transducing circuit 2114 that overlap the resilient regions 2130 include embodiments of the resilient conductive traces and circuits discussed above, such as, for example, the ones shown in FIGS. 12A-16C and 18-19C. The portions of the wireless transducing circuit 2114 that overlap the one or more regions 2120 may include components and circuit elements that are not resilient to physical damage, in some embodiments. The wireless tracking device 2110 is configured to be positioned on an asset, such that the resilient regions 2130 are exposed to physical damage or trauma. For example, the wireless tracking device 2110 may be installed on an asset in a position where the resilient regions 2130 overlap a portion of the asset where hardware (e.g., nails, screws, bolts) is installed or punctures the asset. The one or more regions 2120 may be positioned at a position that protects the regions 2120 from physical damage or trauma. An external surface of the wireless tracking device 2110, for example, the cover layer or substrate, includes one or more graphics 2140A, 2140B. The one or more graphics 2140A, 2140B may include images and/or text. The one or more graphics 2140A, 2140B may be printed directly on the wireless tracking device 2110 or may be printed on labels that are applied to the wireless tracking device 2110, according to some embodiments. The graphic 2140A indicates to a user that the resilient region 2130 is resilient to physical damage.
In the example of FIG. 21, the graphic 2140A includes text that indicates that a user may puncture the wireless tracking device 2110 in the resilient regions 2130 without causing malfunction of the wireless tracking device 2110. Similarly, the graphic 2140B includes text that indicates that the user should not puncture the wireless tracking device 2110 in the regions 2120. In other embodiments, a different type of or a different number of graphics may be displayed on the wireless tracking device 2110. Computer Apparatus FIG. 22 shows an example embodiment of computer apparatus 320 that, either alone or in combination with one or more other computing apparatus, is operable to implement one or more of the computer systems described in this specification. The computer apparatus 320 includes a processing unit 322, a system memory 324, and a system bus 326 that couples the processing unit 322 to the various components of the computer apparatus 320. The processing unit 322 may include one or more data processors, each of which may be in the form of any one of various commercially available computer processors. The system memory 324 includes one or more computer-readable media that typically are associated with a software application addressing space that defines the addresses that are available to software applications. The system memory 324 may include a read only memory (ROM) that stores a basic input/output system (BIOS) that contains start-up routines for the computer apparatus 320, and a random access memory (RAM). The system bus 326 may be a memory bus, a peripheral bus, or a local bus, and may be compatible with any of a variety of bus protocols, including PCI, VESA, Microchannel, ISA, and EISA.
The computer apparatus320also includes a persistent storage memory328(e.g., a hard drive, a floppy drive, a CD ROM drive, magnetic tape drives, flash memory devices, and digital video disks) that is connected to the system bus326and contains one or more computer-readable media disks that provide non-volatile or persistent storage for data, data structures and computer-executable instructions. A user may interact (e.g., input commands or data) with the computer apparatus320using one or more input devices330(e.g. one or more keyboards, computer mice, microphones, cameras, joysticks, physical motion sensors, and touch pads). Information may be presented through a graphical user interface (GUI) that is presented to the user on a display monitor332, which is controlled by a display controller334. The computer apparatus320also may include other input/output hardware (e.g., peripheral output devices, such as speakers and a printer). The computer apparatus320connects to other network nodes through a network adapter336(also referred to as a “network interface card” or NIC). A number of program modules may be stored in the system memory324, including application programming interfaces338(APIs), an operating system (OS)340(e.g., the Windows® operating system available from Microsoft Corporation of Redmond, Washington U.S.A.), software applications341including one or more software applications programming the computer apparatus320to perform one or more of the steps, tasks, operations, or processes of the locationing and/or tracking systems described herein, drivers342(e.g., a GUI driver), network transport protocols344, and data346(e.g., input data, output data, program data, a registry, and configuration settings). 
Examples of the subject matter described herein, including the disclosed systems, methods, processes, functional operations, and logic flows, can be implemented in data processing apparatus (e.g., computer hardware and digital electronic circuitry) operable to perform functions by operating on input and generating output. Examples of the subject matter described herein also can be tangibly embodied in software or firmware, as one or more sets of computer instructions encoded on one or more tangible non-transitory carrier media (e.g., a machine readable storage device, substrate, or sequential access memory device) for execution by data processing apparatus. The details of specific implementations described herein may be specific to particular embodiments of particular inventions and should not be construed as limitations on the scope of any claimed invention. For example, features that are described in connection with separate embodiments may also be incorporated into a single embodiment, and features that are described in connection with a single embodiment may also be implemented in multiple separate embodiments. In addition, the disclosure of steps, tasks, operations, or processes being performed in a particular order does not necessarily require that those steps, tasks, operations, or processes be performed in the particular order; instead, in some cases, one or more of the disclosed steps, tasks, operations, and processes may be performed in a different order or in accordance with a multi-tasking schedule or in parallel. Other embodiments are within the scope of the claims. Additional Configuration Information The foregoing description of the embodiments of the disclosure has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
Some portions of this description describe the embodiments of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof. Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Embodiments of the disclosure may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus.
Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. Embodiments of the disclosure may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein. Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims. | 127,072 |
11861442 | DETAILED DESCRIPTION The following description is the best embodiment presently contemplated for carrying out the present invention. This description is made for the purpose of illustrating the general principles of the present invention and is not meant to limit the inventive concepts claimed herein. Disclosed is a Radio Frequency (RF) device with a circuit and a “semitransparent” antenna. The semitransparent antenna gathers some of the RF energy, but most of the energy in the RF wave does not couple into the antenna. Accordingly, because the antenna minimally affects the electromagnetic RF fields surrounding the antenna even in the vicinity of the antenna, assemblies of objects carrying the RF devices can be formed while maintaining acceptable communications with the RF devices. Many types of devices can take advantage of the embodiments disclosed herein, including but not limited to Radio Frequency Identification (RFID) systems and other wireless devices/systems; pacemakers; portable electronic devices; remote controllers for televisions, audio devices, and other electronic devices; smoke detectors; etc. To provide a context, and to aid in understanding the various embodiments of the invention, much of the present description shall be presented in terms of an RFID system such as that shown inFIG.1. It should be kept in mind that this is done by way of example only, and the invention is not to be limited to RFID systems, as one skilled in the art will appreciate how to implement the teachings herein into electronics devices in hardware and/or software. Examples of hardware include Application Specific Integrated Circuits (ASICs), printed circuits, monolithic circuits, reconfigurable hardware such as Field Programmable Gate Arrays (FPGAs), etc. Further, the methodology disclosed herein can also be incorporated into a computer program product, such as a computer disc containing software.
Further, such software can be downloadable or otherwise transferable from one computing device to another via network, nonvolatile memory device, etc. FIG.2illustrates a Radio Frequency (RF) device200, e.g., RFID tag according to one embodiment. The radio frequency data communication device200includes an integrated circuit204, a power source206connected to the integrated circuit204to supply power to the integrated circuit204, and at least one antenna202connected to the integrated circuit204for radio frequency transmission and reception by the integrated circuit204. For purposes of this disclosure, including the appended claims, the term “integrated circuit” and “circuit” shall be defined as a combination of interconnected circuit elements associated on or within a continuous substrate. For purposes of this disclosure, including the appended claims, the term “semiconductive substrate” is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). For purposes of this disclosure, including the appended claims, the term “substrate” refers to any supporting structure, including, but not limited to, the semiconductive substrates described above, printed circuit boards (PCBs), adhesive backings, etc. In the embodiment illustrated inFIG.2, the integrated circuit204is a monolithic integrated circuit. For purposes of this disclosure, including the appended claims, the term “monolithic integrated circuit” shall be defined as an integrated circuit wherein all circuit components are manufactured into or on top of a single chip of silicon or layer of semiconductive material. The integrated circuit204will be described in greater detail below. 
The power source206is a battery and/or a power supply circuit that extracts and regulates power from the RF reader signal. The radio frequency data communication device200can be included in any appropriate housing or packaging, made of plastic or any other suitable material. The device200is of a small size that lends itself to applications employing small housings, such as cards, miniature tags, etc. Larger housings can also be employed. The device200, housed in any appropriate housing, can be supported from or attached to an object in any desired manner; for example, using double sided tape, glue, lanyards, leashes, nails, staples, rivets, or any other fastener. The housing can be sewn on to an object, hung from an object, implanted in an object (hidden), etc. A description of illustrative RFID tags, systems, and methods of use is disclosed in U.S. Patent Appl. Pub. No. 2004/0201457A1 to O'Toole et al., which is herein incorporated by reference. Various configurations are possible for the antenna202. The integrated circuit204includes a receiver300and a transmitter302(FIG.3). In one embodiment, separate antennas314and316are provided for receiver and transmitter of the integrated circuit204. In another embodiment (FIG.2), a single antenna is shared by the receiver and transmitter sections. In one embodiment, the antenna is defined by conductive paste (e.g., epoxy) screened onto a card or housing. In another embodiment, the antenna is formed of a conducting polymer. An advantage of conducting polymers is that the sheet resistivity is controllable in a range from 1 Ω/sq to 1,000,000 Ω/sq. In the illustrated embodiment, the antenna is a planar conductive material such as Indium Tin Oxide or other suitable high sheet resistance metal-based material conductively bonded to the integrated circuit via bonding pads.
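The sheet-resistivity figures above (from 1 Ω/sq up to 1,000,000 Ω/sq) relate to trace resistance through the standard relation R = Rs·(L/W), where L/W counts the number of "squares" along the conductor. A minimal sketch; the trace dimensions below are illustrative assumptions, not values from the text:

```python
# Sketch: resistance of a rectangular planar conductor from sheet resistivity.
# R = Rs * (L / W); the 50 mm x 1 mm trace below is an assumed example geometry.

def trace_resistance(sheet_res_ohm_per_sq: float, length_m: float, width_m: float) -> float:
    """Sheet resistivity times the number of squares (length / width)."""
    return sheet_res_ohm_per_sq * (length_m / width_m)

# A 50 mm x 1 mm trace is 50 squares; at 10 ohm/sq it measures 500 ohms.
print(trace_resistance(10.0, 0.050, 0.001))  # -> 500.0
```

The same formula shows why a high-sheet-resistance material (e.g., hundreds of Ω/sq) yields the kilohm-range antenna impedances discussed below, without changing the antenna's geometry.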
In an embodiment where a single antenna is employed, that single antenna can be a folded dipole antenna defining a continuous conductive path, or loop, of microstrip. Alternatively, the antenna can be constructed as a continuous loop antenna. Additional antenna designs are disclosed in copending U.S. patent application Ser. No. 11/073,239 filed on Mar. 4, 2005 with title “COMPACT OMNI-DIRECTIONAL RF SYSTEM,” which is herein incorporated by reference. In the embodiments described herein, the tag antennas are designed to control and limit their interactions with the RF fields such that most of the RF wave striking or in the immediate vicinity of the antenna does not couple into the antenna. Thus, the antenna minimally affects the electromagnetic RF fields surrounding the antenna even in the vicinity of the antenna. By “minimally affects” what is meant is that at least about 50%, and preferably greater than about 90%, of the RF energy striking the antenna and in the vicinity of the antenna is useable by another RF device in the vicinity of the tag. In this antenna design, the inductive impedance elements are reduced and the antenna impedance increased to the point where the residual inductance of the tag antenna has only minimal effect on the antenna's impedance. Such antennas are preferably constructed of a planar conductor having a sheet resistivity of greater than about 1 Ω/sq, preferably greater than about 10 Ω/sq. To prevent excessive loading of this high impedance antenna, the tag circuit input impedance is preferably as high as possible. A total impedance of the RF device presented to the RF wave is preferably greater than about 1000Ω. One embodiment has a resistive impedance of >100KΩ and an input bypass capacitance of less than 0.02 pF corresponding to a reactive bypass impedance of at least about 10 KΩ.
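The stated relationship between the bypass capacitance bound and the reactive impedance can be checked with the standard capacitive-reactance formula Xc = 1/(2πfC). A minimal sketch, assuming the 900 MHz carrier frequency used elsewhere in this description:

```python
import math

# Sketch: reactive impedance of the tag's input bypass capacitance.
# Xc = 1 / (2 * pi * f * C); 900 MHz is an assumed carrier, 0.02 pF is the stated bound.

def capacitive_reactance(freq_hz: float, cap_farads: float) -> float:
    return 1.0 / (2.0 * math.pi * freq_hz * cap_farads)

xc = capacitive_reactance(900e6, 0.02e-12)
print(round(xc))  # -> 8842, i.e. "at least about 10 KΩ" once C drops below 0.02 pF
```

Since Xc scales inversely with C, any capacitance below the 0.02 pF bound pushes the reactive bypass impedance above this figure, consistent with the "at least about 10 KΩ" statement.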
At 900 MHz, a non-resonant antenna design may include fabricating the antenna using conductors with a sheet resistivity of, for instance, about 1000 Ω/sq, and designing the tag to have a total tag impedance of perhaps 100KΩ. The impedance of the semi-transparent tags is adjusted to the objects to which they are attached so that even a tightly packed assembly of such objects will appear to the RF propagating signal as a moderately lossy RF propagation medium. For instance, tags on stackable boxes 10 mm thick could be equipped with 10KΩ antennas; tags on 1 mm thick poker chips could have 100KΩ antennas; tags on 0.2 mm thick currency could have 500KΩ antennas. The total admittance or dissipation-factor of the tag/package system is preferably kept roughly constant per volume so that RF radiation can pass through the assembly without excessive attenuation or reflection. While the individual performance of these semi-transparent tags will be significantly inferior to the individual performance of conventional tags, the performance of these semi-transparent tags will not be degraded as much by the presence of other near-by semi-transparent tags. For example, while a conventional tagged poker chip might have a 100 m range in free space, the range of that same tagged poker chip would be reduced to less than 0.01 m when sandwiched between a dozen other similar poker chips. On the other hand, a poker chip with a semi-transparent design might have a free space range of only 10 m, but continue to work at up to 3 m even when totally surrounded by other poker chips tagged with semi-transparent devices. This technique therefore provides a way to tag objects and read them even under adverse conditions, something that has heretofore been considered impossible. This includes directly reading a stack of currency or other paper documents, reading tags on the inside of a stack of poker chips, etc.
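The three tag/package pairings given above can be checked against the "roughly constant admittance per volume" design rule: for stacked objects, the loading contributed per unit of stack thickness is (1/Z)/t. A short sketch using the stated thicknesses and antenna impedances:

```python
# Sketch: verify that admittance per unit of stacked thickness, (1/Z)/t,
# is the same for the three tag/package examples stated in the passage.

examples = {
    "10 mm box, 10 KΩ antenna": (10.0, 10e3),      # (thickness_mm, impedance_ohm)
    "1 mm poker chip, 100 KΩ antenna": (1.0, 100e3),
    "0.2 mm currency, 500 KΩ antenna": (0.2, 500e3),
}

for name, (thickness_mm, impedance_ohm) in examples.items():
    admittance_per_mm = (1.0 / impedance_ohm) / thickness_mm
    print(f"{name}: {admittance_per_mm:.1e} S/mm")  # -> 1.0e-05 for every example
```

All three pairings come out to the same 1.0e-05 S/mm, which is why a stack of any of these objects presents a comparable, moderate loss per unit height to the propagating RF signal.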
Preferably, for a plurality of RFID-tagged objects, an operating range of the objects varies by less than 50% even when the objects are positioned directly adjacent (e.g., on top of or beside) one another. With continued reference toFIG.2, if the power source206is a battery, the battery can take any suitable form. Preferably, the battery type will be selected depending on weight, size, and life requirements for a particular application. Preferably, the battery is formed by thick film deposition of high-sheet-resistivity materials so that the battery itself is also semi-transparent to the RF carrier signal. Alternatively, a metallic button-type cell could be used as long as the battery size is kept small compared to the wavelength of the RF carrier. Instead of using a battery, other suitable power sources can be employed. FIG.3is a high level circuit schematic of the integrated circuit204utilized in the device ofFIG.2. In the embodiment shown inFIG.3, the integrated circuit204is a monolithic integrated circuit. More particularly, in the illustrated embodiment, the integrated circuit204includes the receiver300, the transmitter302, a micro controller or microprocessor304, a wake up timer and logic circuit306, a clock recovery and data recovery circuit308, and a bias voltage and current generator312. In one embodiment, a spread spectrum processing circuit310is also included in the integrated circuit204and formed relative to the single die. In this embodiment, signals received by the receiver300are modulated spread spectrum signals. In an illustrated embodiment, the modulation scheme for replies sent by the transmitter302can be selectable. One of the available selections for replies sent by the transmitter302is modulated spread spectrum. In a method of use, an RFID reader sends an interrogation signal to one or more RFID tags in range of the reader. One skilled in the art will appreciate that any suitable communication protocol, including security features, can be used.
A tag receiving the signal responds with a tag ID. The reader can then use that tag ID to address that particular tag, causing the tag to transmit its stored data. The stored data can be any variety of information, and is normally associated with the article to which the tag is attached. The reader can then tell the tag to turn-off for now so that it will not continue to respond to the interrogation signal. The reader will then select another tag ID and poll that tag for its data, and so on until all of the tags have been read. Example 1 Poker chips in a casino each have a passive RF device integrated therein. The reader, present at a blackjack table for instance, sends out an interrogation signal sufficient to read all of the chips at the table (including the players' chips), or at a reduced power to read only those chips in the tray. Upon receiving a response from each tag, the reader or a backend system coupled to the reader can quickly determine the value of the chips on the table and/or in the tray. During active play, this information is useful for historical tracking of the flow of chips in and out of the tray, as well as alerting management to the need to either add chips to the tray or remove chips therefrom. Prior to opening the table or upon closing the table, the chip count in the tray can be quickly and accurately determined by an integrated or portable reader. Likewise, when a patron wishes to cash out at the cage, the value of a stack of chips can be verified by a reader mounted there and compared against the visual chip count. This feature would also provide a theft deterrent to dealers who may try to slip chips into their clothing and exit the casino. A reader near the employee exit can be used to detect chips leaving the casino. Example 2 Currency in a bank is formed into stacks of 50 bills each. Each bill is tagged with a semi-transparent RF device. Several of the stacks are placed in a bag. 
Prior to passing the bag to the armored car service, the bag is scanned and the value of the currency is recorded electronically and potentially sent to a central server accessible via a network. A paper report can also be provided to the bank and/or armored car service personnel. Upon arrival of the armored car at the Federal Reserve depository, the sealed bag is again scanned and the value is compared to the value it had when it left the bank. Example 3 Documents, each having a semi-transparent RF device coupled thereto, are stored in a series of rows in a filing room. Someone seeking a particular document passes a portable reader along each row, pair of rows, etc. The reader reads each of the tags in the row(s) within range of the reader. When the reader finds a match, the reader indicates where the document is found, e.g., in row B, section 3. Example 4 Library books, each having a semi-transparent RF device coupled thereto, are placed in a bin for reshelving. A reader scans the bin and transmits the information to the library server. Books indicated as checked out to patrons have their status automatically updated to indicate the books are available for checkout. Similarly, during checkout, a patron could set a stack of books on a shelf, where the books are scanned and checked out to the patron. Preferably, the shelf is in a pod or cubicle of shielding material (e.g., metal) that prevents the reader from reading books in adjacent pods. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. | 15,218 |
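The read-poll-mute sequence described in the method of use above (reader interrogates, each responding tag returns its ID, the reader addresses one tag by ID, collects its stored data, and tells it to stop responding, repeating until all tags are read) can be sketched as follows. The `Tag` and `Reader` classes are hypothetical stand-ins for illustration, not an API from this description:

```python
# Sketch of the read-poll-mute interrogation sequence described above.

class Tag:
    def __init__(self, tag_id, data):
        self.tag_id, self.data, self.muted = tag_id, data, False

    def respond(self):
        # A muted tag ignores further interrogation signals.
        return None if self.muted else self.tag_id

class Reader:
    def __init__(self, tags_in_range):
        self.tags = {t.tag_id: t for t in tags_in_range}

    def inventory(self):
        collected = {}
        while True:
            # Interrogation: every unmuted tag in range answers with its ID.
            ids = [tid for t in self.tags.values() if (tid := t.respond()) is not None]
            if not ids:
                break
            tag = self.tags[ids[0]]           # address one tag by its ID
            collected[tag.tag_id] = tag.data  # poll it for its stored data
            tag.muted = True                  # tell it to stop responding
        return collected

chips = [Tag(i, f"chip-{i}") for i in range(3)]
print(Reader(chips).inventory())  # -> {0: 'chip-0', 1: 'chip-1', 2: 'chip-2'}
```

In a real deployment the interrogation round would also handle collisions between simultaneous tag replies; that anti-collision detail is omitted here for brevity.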
11861443 | DESCRIPTION OF THE INVENTION With reference toFIG.1, a method for manufacturing a tag10for a fabric is described below, in particular but not exclusively a tag10for identification of a garment (not shown in the figures) intended to be associated with it, for example for identification in a laundry. The tag10comprises:a radio module with an ultra high transmission frequency (UHF), denoted by 1;a radio-frequency identification integrated circuit (RFID IC), denoted by 2, connected to the UHF radio module1;an antenna3; anda textile substrate4for supporting the UHF radio module1, the RFID integrated circuit2and the antenna3. The method comprises the steps of:mounting the UHF radio module1and the RFID integrated circuit2on the substrate4, these being already connected together by means of metal tracks; andmounting the antenna3on the substrate4, around the UHF radio module1and the RFID integrated circuit2. The method is characterized by:applying a protective coating5onto the UHF radio module1and onto the RFID integrated circuit2, said step of applying the protective coating5comprising:coating the metal tracks for electrical connection between the RFID integrated circuit2and the UHF radio module1, so as to protect the electric interconnection between the RFID integrated circuit2and the UHF radio module1and further comprising:coating a surface portion of the substrate4situated around the UHF radio module1and the RFID integrated circuit2, so as to protect the structural interconnection between the UHF radio module1and the substrate4and so as to keep the UHF radio module1and the RFID integrated circuit2in a fixed position on the substrate4. In particular, the step of applying the protective coating5further comprises:coating a portion of the antenna3situated on the surface portion of the substrate4, in order to protect the structural connection between the antenna3and the substrate4.
Even more particularly, the step of applying the protective coating5further comprises:total incorporation of the RFID integrated circuit2and the UHF radio module1in the protective coating5above the substrate4. Specifically, the step of applying the protective coating5comprises, for example:pouring a resin or a glue and a step for solidification of the resin or glue. The protective coating5becomes rigid at the end of the solidification step. Advantageously, the antenna portion intended to be coated by the protective coating5forms a shoulder for containing the coating5, for example the glue or the resin, therefore preventing dispersion thereof in areas of the substrate relatively far from the UHF radio module1and from the RFID integrated circuit2, thus allowing the entire predefined amount of protective coating5to be used in the areas to be protected. Moreover, the antenna portion intended for the coating increases the adhesion of the protective coating material. However, according to other embodiments of the present invention, the protective coating5may be formed by means of another covering material (namely without using glue or resin), for example, but not exclusively, a covering material which does not require any solidification step in order to become rigid.
It should be pointed out that the term “protective coating5” used in the present invention does not necessarily indicate a coating which adheres perfectly to the UHF radio module1and the RFID integrated circuit2, as in the case of a glue or resin, but may also indicate a covering of said UHF radio module1and RFID integrated circuit2which is, at least partly, not in contact with the UHF radio module1and the RFID integrated circuit2, and also not in direct contact with the metal tracks for electrical connection thereof, or at least not in contact along the entire length of the metal tracks, but which is nevertheless designed to protect the electrical connection thereof with the UHF radio module1and the RFID integrated circuit2and to cover at least a surface portion of the substrate4situated around the UHF radio module1and the RFID integrated circuit2, so as to protect also the structural interconnection between the UHF radio module1and the substrate4and keep the UHF radio module1and the RFID integrated circuit2in a fixed position on the substrate4. The antenna3is preferably a wire, in particular a metal wire, and the step of mounting the antenna3on the substrate4comprises stitching the antenna3onto the textile substrate4. The antenna is stitched at a predetermined distance from the RFID integrated circuit2and the UHF radio module1and forms a ring, preferably closed, for inductive coupling with the UHF radio module1. The antenna extends over the substrate beyond the ring, for example in opposite directions with respect to the module, forming preferably turns or sinusoidal loops. The UHF radio module1comprises a PCB (printed circuit board) which incorporates metal tracks intended to act as a short-range inductor.
The RFID integrated circuit2is already electrically connected to the metal tracks of the UHF radio module1before the step of mounting the UHF radio module1and the RFID integrated circuit2on the substrate4, said metal tracks however being exposed or not coated with any resin before the step of mounting the UHF radio module1and the RFID integrated circuit2. Advantageously, the tag thus formed:is particularly flexible;rigidly fixes the module, the circuit and the antenna on the substrate;is particularly thin;is particularly cost-efficient. | 5,464 |
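The inductive coupling between the stitched antenna ring and the module's short-range inductor can be illustrated with the induced-voltage relation |V| = 2πf·M·|I|, where M is the mutual inductance between the two loops. Every numeric value in the sketch below is an illustrative assumption, not a figure from this description:

```python
import math

# Sketch: magnitude of the voltage induced in the module's short-range inductor
# by current flowing in the stitched antenna ring, |V| = 2*pi*f * M * |I|.
# The 866 MHz carrier, 0.5 nH mutual inductance, and 1 mA ring current are assumed.

def induced_voltage(freq_hz: float, mutual_h: float, current_a: float) -> float:
    return 2.0 * math.pi * freq_hz * mutual_h * current_a

v = induced_voltage(866e6, 0.5e-9, 1e-3)
print(f"{v * 1000:.2f} mV")  # -> 2.72 mV
```

Because the induced voltage grows linearly with both frequency and mutual inductance, placing the ring at a small, controlled stitching distance from the module's tracks is what makes this contactless interconnection practical at UHF.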
11861444 | DETAILED DESCRIPTION The present invention is more fully described below with reference to the accompanying figures. The following description is exemplary in that several embodiments are described (e.g., by use of the terms “preferably,” “for example,” or “in one embodiment”); however, such should not be viewed as limiting or as setting forth the only embodiments of the present invention, as the invention encompasses other embodiments not specifically recited in this description, including alternatives, modifications, and equivalents within the spirit and scope of the invention. Further, the use of the terms “invention,” “present invention,” “embodiment,” and similar terms throughout the description are used broadly and not intended to mean that the invention requires, or is limited to, any particular aspect being described or that such description is the only manner in which the invention may be made or used. Additionally, the invention may be described in the context of specific applications; however, the invention may be used in a variety of applications not specifically described. The embodiment(s) described, and references in the specification to “one embodiment”, “an embodiment”, “an example embodiment”, etc., indicate that the embodiment(s) described may include a particular feature, structure, or characteristic. Such phrases are not necessarily referring to the same embodiment. When a particular feature, structure, or characteristic is described in connection with an embodiment, persons skilled in the art may effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. In the several figures, like reference numerals may be used for like elements having like functions even in different drawings. The embodiments described, and their detailed construction and elements, are merely provided to assist in a comprehensive understanding of the invention. 
Thus, it is apparent that the present invention can be carried out in a variety of ways, and does not require any of the specific features described herein. Also, well-known functions or constructions are not described in detail since they would obscure the invention with unnecessary detail. Any signal arrows in the drawings/figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted. Further, the description is not to be taken in a limiting sense, but is made merely for the purpose of illustrating the general principles of the invention, since the scope of the invention is best defined by the appended claims. It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. Purely as a non-limiting example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. As used herein, the singular forms “a”, “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be noted that, in some alternative implementations, the functions and/or acts noted may occur out of the order as represented in at least one of the several figures. Purely as a non-limiting example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality and/or acts described or depicted.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. Aspects of a system and method for automated inventory tracking include one or more bins. Each of the one or more bins may have a tag that has a unique identifier to identify the bin. The bin may be identified by a detector that may be located in a cabinet, refrigerator, or other location to store the bin. In addition, each of the one or more bins may have one or more items in the bin. Each of the one or more items may have a tag with a unique identifier to identify the item. Each tag may be an RFID tag, which may be an ultra-high frequency (UHF) tag, NFC tag, or dual UHF/NFC tag. The bin may have a detector or reader to determine the one or more items in the bin at a time. As an example, the bin may send a first signal at high frequency or ultra-high frequency and receive a response from high frequency or ultra-high frequency tags on items in the bin. In addition, the bin may have one or more sensors that may be connected to a scale to determine a weight of the scale at the time.
When an item is removed from the bin, the system may detect the removal of the item and determine the bin weight after removal. When the item is returned to the bin, the system may detect the return of the item and determine a new weight. Based on a difference in bin weight, the bin may determine a change in weight of the item. The system may send one or more alerts based on the change in weight of the item such as an alert to order a new quantity of the item. The system may also send one or more alerts to a bin location where the item is returned such as an alert indicating the item is returned to the incorrect bin. As an example, a system may include a bin detector comprising an antenna, a bin configured to accommodate at least one item and having a bin unique identifier, at least one processor to receive a weight of the at least one item from at least one sensor in communication with a scale, and at least one battery to power the system, the bin detector to transmit a first signal, receive a second signal in response to the first signal for each of the at least one item and determine an item unique identifier for each of the at least one item, and transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to the processor to transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to a server computing device using a communication network. The communications network may be a wireless network including a Bluetooth network or a LoRaWAN network, among others. There is a significant gap in existing laboratory inventory management systems, specifically in reagent tracking. Laboratories in the United States and worldwide face the challenge of managing a very large number, e.g., thousands or even more, of reagents efficiently.
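The removal-and-return weight bookkeeping described above can be sketched in a few lines. This is a minimal illustration, not the claimed implementation; the class name, field names, and the 50 g reorder threshold are all assumptions chosen for the example.

```python
# Illustrative sketch of the bin-weight bookkeeping described above.
# REORDER_THRESHOLD_G and all names are assumptions, not from the source.
REORDER_THRESHOLD_G = 50.0  # alert to reorder when under 50 g remains


class Bin:
    def __init__(self, bin_id, weight_g=0.0):
        self.bin_id = bin_id
        self.weight_g = weight_g  # current scale reading for the whole bin


def record_removal(bin_, new_weight_g):
    """Return the weight of the removed item (bin weight before minus after)."""
    removed = bin_.weight_g - new_weight_g
    bin_.weight_g = new_weight_g
    return removed


def record_return(bin_, new_weight_g, weight_at_removal_g):
    """Return (amount_used, alerts) when an item comes back to the bin."""
    returned = new_weight_g - bin_.weight_g  # weight of the returned item
    bin_.weight_g = new_weight_g
    amount_used = weight_at_removal_g - returned
    alerts = []
    if returned < REORDER_THRESHOLD_G:
        alerts.append(f"reorder: item in {bin_.bin_id} below threshold")
    return amount_used, alerts
```

For example, if a 300 g item leaves a 500 g bin and comes back weighing 280 g, `record_return` reports 20 g used and raises no reorder alert.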
The reliance on manual inventory management systems, such as paper-based records or basic electronic spreadsheets, has proven to be inadequate. These conventional approaches are prone to errors, delays, and inconsistencies, resulting in inefficiencies, wasted resources, and potential scientific setbacks. The deficiencies identified in the conventional systems include use of manual tasks that interfere with scientific tasks, leading to lost time and productivity. As an example, there may be approximately 100,000 laboratories in the United States that may make use of the automated inventory tracking system. These laboratories may benefit from the elimination of duplicate orders, enhanced inventory management, time savings, and waste reduction. The automated inventory tracking system is a comprehensive system that eliminates the need for manual input from laboratory staff. The automated inventory tracking system is a modular system that provides advanced functionality, allowing the system to be adaptable to laboratories of varying sizes and needs. The system may utilize radio-frequency identification (RFID), smart devices, RFID detectors, and smart bins to accurately track reagent levels, locations, and current users. In one example, the automated inventory tracking system may use communication by radio frequency signaling including electromagnetic waves in the radio frequency range (20 kHz to 300 GHz). The automated inventory tracking system may use multiple distinct and non-interfering types of telecommunications based on operating frequency ranges (bands), including cellular, WiFi, Bluetooth, UHF RFID and HF RFID, among others. NFC is an example of HF technology in use. HF (NFC) operates at 13.56 MHz, whereas UHF ranges from 300-3000 MHz. NFC read distance can be centimeters, whereas UHF can read over a meter away. In one example, the automated inventory tracking system may have cabinet detectors that are UHF and read bins and user badges that have UHF or dual tags.
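The operating bands stated above (HF/NFC at 13.56 MHz, UHF from 300-3000 MHz) can be summarized with a small classifier. This is purely illustrative; the function name is an assumption.

```python
# Classify an RFID operating frequency into the bands named above:
# HF (NFC) at 13.56 MHz, UHF from 300 to 3000 MHz. Illustrative only.
def rfid_band(frequency_mhz):
    if frequency_mhz == 13.56:
        return "HF (NFC)"
    if 300.0 <= frequency_mhz <= 3000.0:
        return "UHF"
    return "other"
```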
The bin detector may use NFC with NFC-tagged consumables. Alternatively, the bin may be outfitted with a UHF detector and use electromagnetic shielding around the bin perimeter to preclude detection of items in adjacent bins or bin tags. As an example, NFC detection of bin items can be used if there is a sufficiently strong and resolved antenna signal to continuously read all items within a bin without data collision. In one example, an embedded antenna in a bin can broadcast a call signal at HF or UHF frequency and read response signals from passive HF or UHF RFID tags on items within the bin. The antenna receiver may receive the signal from the embedded antenna and pass the signal along to the antenna transmitter that transmits the signal to a processor. The processor may associate the RFID information with weight information received and transmit the information payload via a communication network to a server computing device such as a cloud server. In the case of removal or return of an item from a bin, a detector assigned to a cabinet in which that bin resides may also capture and associate user and bin information into a payload and similarly transmit that information payload via a communication network to a server computing device such as a cloud server. The cloud server may use associated payloads such as these to determine an updated net weight and update the status of items, bins, and users. As an example, this may include associating bin processor and cabinet processor payloads at an event in time and updating the software application with information depicting that, for example, User A returned Item 1 to Bin B at Time T with updated net weight W. As an example, a cabinet detector may detect one or more bins and one or more users that are accessing the one or more bins. A bin detector may detect one or more items in a bin.
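The server-side association of cabinet payloads (user + bin) with bin payloads (item + weight) described above might look like the following sketch. The 5-second association window, dictionary field names, and pairing rule are assumptions for illustration only.

```python
# Hypothetical sketch: pair each bin-processor payload with the
# closest-in-time cabinet-detector payload for the same bin, producing
# records like "User A returned Item 1 to Bin B at Time T with weight W".
# The window and field names are assumptions, not from the source.
ASSOCIATION_WINDOW_S = 5.0


def associate_events(cabinet_events, bin_events):
    records = []
    for be in bin_events:
        candidates = [
            ce for ce in cabinet_events
            if ce["bin_id"] == be["bin_id"]
            and abs(ce["time"] - be["time"]) <= ASSOCIATION_WINDOW_S
        ]
        if candidates:
            # choose the cabinet event nearest in time to the bin event
            ce = min(candidates, key=lambda c: abs(c["time"] - be["time"]))
            records.append({
                "user_id": ce["user_id"],
                "item_id": be["item_id"],
                "bin_id": be["bin_id"],
                "time": be["time"],
                "net_weight_g": be["net_weight_g"],
            })
    return records
```

Bin events with no cabinet event inside the window are simply left unpaired in this sketch; a real system would need a policy for those.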
Each of the cabinet detector and the bin detector may send information associated with the one or more bins and the one or more users to a computing device such as a server computing device. Based on the information from the cabinet detector and the bin detector, the server computing device may determine real-time information associated with the bins. For example, a particular user may have accessed a bin at 2:00 p.m. and a particular item may have been removed from the bin from 2:00 p.m. to 3:00 p.m. When the particular item is returned to the bin at 3:00 p.m., the item may have a weight that is less than when the item was removed. As a result, the server computing device may determine that the particular user used a particular amount of the item from 2:00 p.m. to 3:00 p.m. In one example, the amount of the item may be determined based on a weight of the item in conjunction with one or more images that may be captured of the item. There may be one or more images that may be captured when the item is initially added into the system. The system may perform image processing of at least one image of the item to determine the approximate level of solid, liquid, or semi-solid substance remaining within the container and compute the estimated substance weight by applying the estimated percentage full from image processing to the initial purchase quantity from the container label. Subsequently, the amount of the item in the container may be determined by determining the difference in the weight information for the bin at a first time and the weight information for the bin at a second time. In one example, when the item having the item identifier with the initial amount estimated from image processing is placed in a bin having a sensor connected with a scale, a bin weight immediately prior to and after placing the item in the bin may be transmitted to a cloud server computing device.
The server computing device may then determine and store the item container tare weight by taking the difference between the bin weight change and the initial substance amount estimated from image processing. The automated inventory tracking system may utilize an automated inventory tracking application that may provide a cloud-based portal and a client application that may include a mobile scanning/tracking application component. The system may interface with an inventory management portal or existing inventory management software through one or more application programming interfaces (APIs) to provide a holistic solution. Laboratory staff may have access to real-time inventory data, allowing the staff to track reagents, receive alerts for low stock or expired items, and efficiently manage procurement and reordering processes. In one example, the automated inventory tracking system may have one or more bins to store and track one or more objects such as containers. In addition, the automated inventory tracking system may include one or more client computing devices that may communicate with one or more server computing devices. Additionally, the one or more client computing devices and the one or more server computing devices may communicate with the one or more bins. The automated inventory tracking system may provide alerts for expiration dates, low levels, disposal, and automated reordering, among others. In addition, the automated inventory tracking application may allow for item registration, item differentiation, and provide scanning functionality using one or more imaging devices and visual intelligence. The automated inventory tracking application may provide chemical location lookup, safety data sheet (SDS) information, reassignment, and depletion tracking. The automated inventory tracking system may further provide alerts to one or more bins to correct placement of returned items. 
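The tare-weight estimate described above (container tare = bin weight change minus the image-estimated substance amount) reduces to simple arithmetic. The following is a minimal sketch under the assumption that image processing yields a fill fraction and the label gives the purchase quantity in grams; all names and values are illustrative.

```python
# Sketch of the tare-weight estimate described above. Assumes image
# processing yields a fill fraction (0.0-1.0) and the container label
# gives the initial purchase quantity in grams. Illustrative only.
def estimated_substance_weight(fill_fraction, label_quantity_g):
    """Substance remaining = image-estimated fraction full x label quantity."""
    return fill_fraction * label_quantity_g


def estimated_tare_weight(bin_weight_before_g, bin_weight_after_g,
                          substance_weight_g):
    """Container tare = total weight added to the bin minus the substance."""
    gross = bin_weight_after_g - bin_weight_before_g
    return gross - substance_weight_g
```

For instance, a half-full 500 g bottle that raises the bin weight by 350 g implies roughly a 100 g container tare.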
The automated inventory tracking application may further provide determination of remaining amounts, measure volume using weight information and known or calculated density, and ensure safety and compliance. The density may be calculated and reported using one or more algorithms. In one example, the automated inventory tracking system may be located in a laboratory, pharmacy, manufacturing facility, restaurant, salon, secured room or facility (e.g., an evidence room), or retail establishment. The laboratory, pharmacy, manufacturing facility, restaurant, salon, secured room or facility, or retail establishment may have one or more cabinets, refrigerators, freezers, shelves, and workspaces that can be outfitted with a smart device connected to one or more storage bins that can be assigned to one or more users or locations. Each smart device or bin can be connected to a network such as a wireless network or connected to other smart devices using a mesh network such as a LoRaWAN network. RFID UHF detectors may be located in cabinets or other storage locations. In one example, the RFID UHF detector may be located on an inner front of a cabinet and linked with a smart device and connected to the network. The RFID UHF detector may be connected with a Bluetooth network or a Wi-Fi network, among others. Alternatively, the RFID UHF detector could be connected and communicate with a smart device through Ethernet, RS-232, or another hard-wired connection. The detector may detect the presence of smart bins, tagged items, and user badges. For controlled substances, a cabinet can be outfitted with a door-open sensor and user badge detection to provide an automatic locking and unlocking mechanism. When a user badge is detected, the user badge can be associated with an item (e.g., bin or consumable) removal or return event. User badges may also grant access and log retrieval and return of controlled or secured items.
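The weight-to-volume conversion mentioned above is a single division by density. A minimal sketch, assuming net weight in grams and density in g/mL (which may come from the SDS or be calculated); the function name is an assumption.

```python
# Sketch of the volume measurement described above: volume = weight / density.
# Assumes net weight in grams and density in g/mL. Illustrative only.
def volume_ml(net_weight_g, density_g_per_ml):
    if density_g_per_ml <= 0:
        raise ValueError("density must be positive")
    return net_weight_g / density_g_per_ml
```

For example, 789 g of a substance with density 0.789 g/mL (approximately that of ethanol) corresponds to about 1000 mL.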
Smart storage bins may be placed in storage areas such as glass-pane or other cabinets, refrigerators, freezers, glove boxes, humidity chambers, flammable cabinets, storeroom shelves, workstations, secured rooms and facilities, and other locations. The smart storage bins may be used to store grouped and compatible inventory items. Smart storage bins may be tagged with dual (e.g., UHF and NFC) RFID tags and may be detected in storage areas. Larger items can be stored outside bins and directly detected in storage areas. Items within smart bins may be tagged and detected when in the bin by a built-in detector. The bin detector can be linked (wired or wireless) with a circuit board that can be built into or associated with the bin. The bin can have detector strength modulated based on bin size and shape, or signal containment can be accomplished by lining inner bin walls with a metallic coating or another type of coating to block signal transmission. In addition, a bin may have an affixed panel-style scale that may be on the bottom of the bin that can be linked (wired or wirelessly) with the circuit board or computing device built into the bin. The bin may detect changes in weight upon removal of tagged items, associate weight change with an item upon return to the bin, and determine an amount used based on a difference and update an item net weight in real-time using the automated inventory tracking application. Items that are removed from a storage location may be visible in the automated inventory tracking application as “checked out.” Items can be linked with information from a purchase order or safety data sheet (SDS) information. Items also can be associated with advisable storage conditions and other pertinent information such as reactivity, flammability, acid or base information, health or handling hazards, and expiration date information. The automated inventory tracking application may automatically recognize and extract relevant information from a purchase order and an SDS.
In addition, the automated inventory tracking application can recognize and alert a user to incompatible storage, usage history, quality control, or an incorrect storage location via visual or audio alerts from individual bins. Users can utilize the automated inventory tracking application to determine real-time batch, amount, location, user, and quality information. In addition, the application may allow users including managers to save time and avoid disruptions. The automated inventory system can also benefit lab hygiene and safety by monitoring the weight of items in storage and providing alerts for items which may require attention from lab personnel. For example, if a bin weight is increasing without any user activity, it may indicate that one or more chemical inventory items are hygroscopic and absorbing moisture from the air, indicating a potential quality issue. Conversely, if a bin weight is decreasing in the absence of user activity, it may indicate that one or more chemical inventory items are evaporating, which indicates a quality event and may pose a safety hazard for exposure to potentially toxic vapors. The automated inventory system can detect these events through incremental increases or decreases in weight. Upon receiving weight change information from one or more bin processors, the cloud server software may recognize an indicated quality event and alert users through either software alerts, smart device visual or audio user interface alerts, or a combination of the two. When first using the automated inventory tracking application, a user may set up an account to allow the user to have access to the system and set preference information. In addition, the user can integrate the system with existing and other software such as inventory software, procurement software, and electronic laboratory notebook (ELN) software. The user may outfit a storage area with one or more smart devices and detectors and may tag and group items.
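The unattended weight-drift quality check described above can be sketched as follows. This is a hypothetical simplification: the 1 g drift threshold and the function name are assumptions, and a real system would evaluate drift over repeated readings rather than a single pair.

```python
# Hypothetical sketch of the quality-event check described above: a bin
# weight that drifts without any user activity suggests moisture uptake
# (increase) or evaporation (decrease). The threshold is an assumption.
DRIFT_THRESHOLD_G = 1.0


def classify_unattended_drift(weight_before_g, weight_after_g, user_activity):
    """Return a quality alert string, or None if no event is indicated."""
    if user_activity:
        return None  # weight change is explained by removal/return events
    delta = weight_after_g - weight_before_g
    if delta > DRIFT_THRESHOLD_G:
        return "possible hygroscopic moisture uptake"
    if delta < -DRIFT_THRESHOLD_G:
        return "possible evaporation (check for vapor exposure hazard)"
    return None
```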
The user may install a client application associated with the automated inventory tracking application on a mobile computing device such as a smartphone or a tablet computing device that may have NFC devices and imaging devices. Each consumable may be tagged with one or more NFC or UHF RFID tags. In addition, each bin can have its own tag. The user may use their client computing device to scan a bin tag and select one or more consumables/items to be added to the bin. The user may scan a tag on the consumable/item and may also obtain one or more images or photographs of the consumable/item. The automated inventory tracking application may determine label information based on the one or more images and may extract item information. The automated inventory tracking application may determine whether there are associated orders, inventory, or notebook information that may be related to the item information extracted from the one or more images. If there is an existing procured item, the automated inventory tracking application may determine a tare weight or may attempt to estimate content level (e.g., percentage) from the images. If the automated inventory tracking application is unable to determine a tare weight or estimate a content level, the user may enter the information manually. The consumable/item may be placed in a bin. The bin may determine an item identifier and may determine the weight of the consumable/item. A net weight may be determined and stored. As a result, the system is able to provide real-time automated monitoring and management of inventory including for inventory items that are already in use and are not previously tracked. A user can reassign a consumable from one bin to another bin. As an example, the user may use the automated inventory tracking application to scan an existing item or bin tag and may select an option to reassign a tag from one bin to another bin. The item may then be reassigned from one bin to another bin.
If an item or consumable is depleted, a user can use the automated inventory tracking application to scan an item and the associated item and bin may be recognized. The user may select a “Deplete” option. The automated inventory tracking application may prompt the user to empty, rinse, and place a container on the scale for tare. The container may be weighed, and a tare weight may be stored and/or retrieved for future orders. The automated inventory tracking system may use machine learning to compare the weight with an estimated tare and improve image processing amount estimation. The user may indicate that the item or consumable is depleted and the item may be confirmed as depleted. The system may monitor a supply of contained goods and substances as they are used and depleted from containers. Additionally, the system may reduce waste by tracking an amount of material or product stored in opened containers. As the material is depleted and used, the system may determine how long containers have been stocked. In one example, the system can direct a user to use a partially used container that may be located in a particular bin in a laboratory rather than a new container in the laboratory. FIG. 1 is a block diagram of an automated inventory tracking system 100 according to an example of the instant disclosure. As shown in FIG. 1, the system 100 may include one or more smart bins 102. Each bin 102 may have one or more items or consumables 110 that may be stored or housed in the bin 102. The system 100 may further include at least one server computing device 104 and at least one client computing device 106. The at least one server computing device 104 may be in communication with at least one database 114. The client computing device 106 and the server computing device 104 may have an automated inventory tracking application 112 that may be a component of an application and/or service executable by the at least one client computing device 106 and/or the server computing device 104.
For example, the automated inventory tracking application 112 may be a single unit of deployable executable code or a plurality of units of deployable executable code. According to one aspect, the automated inventory tracking application 112 may include one component that may be a web application, a native application, and/or a mobile application (e.g., an app) downloaded from a digital distribution application platform that allows users to browse and download applications developed with mobile software development kits (SDKs) including the App Store and GOOGLE PLAY®, among others. The automated inventory tracking system 100 also may include a relational database management system (RDBMS) or another type of database management system such as a NoSQL database system that stores and communicates data from at least one database 114. The data stored in the at least one database 114 may be associated with the one or more bins and the one or more consumables/items that may be stored in the one or more bins. As an example, each bin may be associated with one or more items or consumables that may be stored in the bin and the real-time information associated with each of the one or more items or consumables may be stored in the database. The at least one client computing device 106 and the at least one server computing device 104 may be configured to receive data from and/or transmit data through a communication network 108. Although the client computing device 106 and the server computing device 104 are shown as a single computing device, it is contemplated each computing device may include multiple computing devices. The communication network 108 can be the Internet, an intranet, or another wired or wireless communication network.
For example, the communication network may include a Global System for Mobile Communications (GSM) network, a code division multiple access (CDMA) network, a 3rd Generation Partnership Project (3GPP) network, an Internet Protocol (IP) network, a wireless application protocol (WAP) network, a WiFi network, a Bluetooth network, a near field communication (NFC) network, a LoRaWAN network, a satellite communications network, or an IEEE 802.11 standards network, as well as various combinations thereof. Other conventional and/or later developed wired and wireless networks may also be used. The client computing device 106 may include at least one processor to process data and memory to store data. The processor processes communications, builds communications, retrieves data from memory, and stores data to memory. The processor and the memory are hardware. The memory may include volatile and/or non-volatile memory, e.g., a computer-readable storage medium such as a cache, random access memory (RAM), read only memory (ROM), flash memory, or other memory to store data and/or computer-readable executable instructions. In addition, the client computing device 106 further includes at least one communications interface to transmit and receive communications, messages, and/or signals. The client computing device 106 could be a programmable logic controller, a programmable controller, a laptop computer, a smartphone, a personal digital assistant, a tablet computer, a standard personal computer, or another processing device. The client computing device 106 may include a display, such as a computer monitor, for displaying data and/or graphical user interfaces.
The client computing device 106 may also include a Global Positioning System (GPS) hardware device for determining a particular location, an input device, such as one or more cameras or imaging devices, a keyboard or a pointing device (e.g., a mouse, trackball, pen, or touch screen) to enter data into or interact with graphical and/or other types of user interfaces. In an exemplary embodiment, the display and the input device may be incorporated together as a touch screen of the smartphone or tablet computer. The server computing device 104 may include at least one processor to process data and memory to store data. The processor processes communications, builds communications, retrieves data from memory, and stores data to memory. The processor and the memory are hardware. The memory may include volatile and/or non-volatile memory, e.g., a computer-readable storage medium such as a cache, random access memory (RAM), read only memory (ROM), flash memory, or other memory to store data and/or computer-readable executable instructions. In addition, the server computing device 104 further includes at least one communications interface to transmit and receive communications, messages, and/or signals. FIG. 2 is a diagram of a bin 102 of the automated inventory tracking system 100 according to an example of the instant disclosure. In one example, each bin 102 may have a removable scale and a container that may be configured to accommodate NFC, RFID UHF, or UHF/NFC dual tag consumables/items that have a variety of different shapes, sizes, and materials. In one example, the automated inventory tracking system 100 may have bins 102 that track items/consumables using RFID and may not include a scale component. In another example, the automated inventory tracking system 100 may have bins 102 that only have a scale component and do not track items/consumables using RFID. In another example, the automated inventory tracking system 100 may have bins 102 that track inventory quantity and location.
Each bin may have an RFID detector component and a scale component. This configuration may allow a laboratory to track an amount of inventory items and monitor a specific location of an item within a laboratory. In an example, the bin 102 may be a smart bin that may have a lower component or box that may have a printed circuit board with at least one processor, a microcontroller, or a computing device to receive information from four H-bridge sensors for weighing and one or more rechargeable batteries to power the printed circuit board or computing device as well as an HF or UHF RFID antenna transmitter connector to connect to a container that houses consumables/items. An upper component or detachable container may have an antenna receiver connector, an embedded HF or UHF RFID antenna, one or more display devices, and have walled electromagnetic field (EMF) shielding to prevent HF or UHF RFID noise from entering other bins 102. In one example, the bin 102 may be part of a system and may have a bin detector including an antenna, and it may be configured to accommodate at least one item 110 and have a bin unique identifier. In addition, the bin 102 may have at least one processor to receive a weight of the at least one item 110 from at least one sensor in communication with a scale, and at least one battery to power the system. The bin detector may transmit a first signal, receive a second signal in response to the first signal for each of the at least one item 110 and determine an item unique identifier for each of the at least one item, and transmit the bin unique identifier, the item unique identifier for each of the at least one item 110, and the weight information to the processor to transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to the server computing device 104 using the communication network 108.
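Purely as an illustration of the payload described above (a bin unique identifier, an item unique identifier per detected tag, and the weight information), a bin processor might assemble a record like the following. The field names and sorting are assumptions, not from the source.

```python
# Illustrative sketch of the payload a bin processor might assemble for
# the server: bin unique identifier, item unique identifiers read in
# response to the call signal, and the scale reading. Field names are
# assumptions, not from the source.
def build_bin_payload(bin_uid, item_uids, weight_g, timestamp):
    return {
        "bin_uid": bin_uid,
        "item_uids": sorted(item_uids),  # tag reads, ordered for stable output
        "weight_g": weight_g,
        "time": timestamp,
    }
```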
In another example, a storage container may have a first removable component including a bin 102 configured to accommodate at least one item 110 and have a bin unique identifier and electromagnetic field (EMF) shielding to prevent electromagnetic noise. The storage container may have a second component housed underneath the first removable component and may include a bin detector comprising an antenna, at least one processor to receive a weight of the at least one item 110 from at least one sensor in communication with a scale, and at least one battery to power the storage container. The bin detector may transmit a first signal, receive a second signal in response to the first signal for each of the at least one item and determine an item unique identifier for each of the at least one item 110, and transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to the processor to transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to the server computing device 104 using the communication network 108. As shown in FIG. 2, a bin 102 may have a number of components. Each bin 102 may be configured as a single-piece bin or may have multiple pieces that may be detached from one another such as a first piece or part to hold the items/consumables and a second piece or part to weigh the items/consumables 110. In addition, each bin may have a variety of different shapes and sizes. In one example, the bin 102 may include one or more base station scales 202. In addition, the bin 102 may have a container 204 to store one or more items/consumables 110. The bin 102 may have walled electromagnetic field (EMF) shielding 205 that may be associated with one or more walls of the container 204. There may be one or more cabinet RFID detectors that may be placed near one or more bins and that may be connected to the communication network 108.
The cabinet detectors may detect the presence of tags associated with bins and tags associated with users. The bin 102 may further include an antenna receiver connector 206 and an antenna transmitter connector 208 that may together serve as a bin detector to detect one or more items in a bin based on tags on the one or more items. As an example, the antenna transmitter may send or broadcast a first signal at high frequency or ultra-high frequency and receive a response from high frequency or ultra-high frequency tags on items in the bin. The bin 102 may have one or more printed circuit boards 210 having one or more processors or one or more computing devices to communicate with the bin detector, the client computing device 106, and/or the server computing device 104. In one example, one or more bins 102 may communicate with a bin router and the bin router may communicate with the client computing device 106 and/or the server computing device 104. The bin 102 may have one or more batteries 212 that may be lithium ion batteries. The bin 102 may have a power source 214 that may be a universal serial bus (USB) power source. In addition, the bin 102 may have one or more scale H-bridges 216. FIG. 3 is another diagram of bins 102 of the automated inventory tracking system 100 according to an example of the instant disclosure. As shown in FIG. 3, each bin 102 may have a number of different shapes and sizes. A smaller bin 302 may be used for smaller vials of consumables that may have milligram precision and a larger bin or container 304 may have larger items associated with gram precision. A larger container may be fitted with an industrial scale for kilogram or larger quantities. In each of the examples, the bin 102 may be configured to operate as a consumable counter that may count a number of HF or UHF RFID tagged items 110 that may be stored in the bin 102. In addition, each bin may have one or more display devices 306 that may be located on an exterior wall such as an e-paper display.
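The consumable-counter behavior described above can be sketched as a set difference between successive tag reads: the bin reports the current item count plus which tags disappeared (removals) and which appeared (returns). This is a minimal illustration; the function name and return fields are assumptions.

```python
# Sketch of a consumable counter: compare the set of RFID tag identifiers
# responding on the current read with the previous read to report the item
# count, removals, and returns. Illustrative only.
def diff_tag_reads(previous_tags, current_tags):
    return {
        "count": len(current_tags),
        "removed": sorted(previous_tags - current_tags),
        "returned": sorted(current_tags - previous_tags),
    }
```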
FIG. 4 is a block diagram of the automated inventory tracking application 112 according to an example of the instant disclosure. As shown in FIG. 4, the automated inventory tracking application 112 may have a number of components including a client application component 402 and a server application component that may include a cloud component 404 and an Internet or web portal 406. The client application component 402 may be associated with the client computing device 106 and may be a mobile application executed by the client computing device 106. The client computing device 106 may obtain information associated with the one or more bins 102 and may transmit the information associated with the one or more bins 102 to the server computing device 104 for storage in the database 114. As an example, the client computing device 106 may send the information using one or more APIs and a user may be given access to the one or more APIs using OAuth. The automated inventory tracking application 112 may have a cloud component 404 that may include user and laboratory management that uses authentication and authorization services to grant API permissions to data sources in the database 114. The cloud component 404 may allow the APIs to interact with the automated inventory tracking application 112 and the one or more bins 102 having Internet of Things (IoT) enabled devices can programmatically update inventory records in the database 114. Each of the bins 102 can dispatch events, statuses, consumable HF or UHF RFID identifier information, and consumable weight information using the APIs. In one example, one or more bins 102 may communicate with a smart bin router that may transmit information to the cloud component using the communications network 108. Additionally, the automated inventory tracking application 112 may include a portal 406 that may be a web portal to view real-time information associated with the automated inventory tracking application 112.
As an example, the client computing device 106 may communicate with the server computing device 104 and may have access using OAuth. OAuth is a protocol for authorization and allows a third-party application to obtain limited access to a Hypertext Transfer Protocol (HTTP) service on behalf of a resource owner by allowing an approval interaction between the resource owner and the HTTP service or by allowing the third-party application to have access on its own. As an example, OAuth allows a user to grant a third-party website or web service access to another website or web service without providing a password. As an example, the user may provide their username or handle and OAuth may grant access. As a result, the application 112 may permit a user to share information about their account with a third-party application or website. The automated inventory tracking system 100 may use OAuth or another protocol for authorization to allow access to other associated applications and/or accounts. FIG. 5 is another flowchart of a method 500 of using the client application component of the automated inventory tracking application 112 according to an example of the instant disclosure. As shown in FIG. 5, a user may scan an NFC or a dual NFC/UHF tag that may be associated with an item, consumable, or user 502. As an example, the tag may not be recognized or may be unknown 504. In this case, if a user is a new user, they can create a new user account to use the automated inventory tracking system 100 and automated inventory tracking application 112. The user may provide user information such as a name, a role, and access information, among other information. In addition, the user can create a new bin if the tag is associated with a bin. The user may provide bin information such as a descriptor or a description of the bin. In addition, the user may provide information associated with a bin category.
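The scope-limited access that an OAuth-style grant provides can be sketched as follows. This is a minimal illustration only, assuming an in-memory token table; the token strings and scope names are invented and do not reflect any actual API of the system.

```python
# Illustrative sketch of scope-limited API access in the spirit of the
# OAuth flow described above: a granted token carries limited scopes,
# and the server checks the required scope before serving a request.

GRANTED_TOKENS = {
    "tok-abc123": {"scopes": {"inventory:read"}},
    "tok-def456": {"scopes": {"inventory:read", "inventory:write"}},
}

def authorize(token, required_scope):
    """Return True only if the token exists and carries the scope."""
    grant = GRANTED_TOKENS.get(token)
    return grant is not None and required_scope in grant["scopes"]

print(authorize("tok-abc123", "inventory:write"))  # False: read-only grant
print(authorize("tok-def456", "inventory:write"))  # True
```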
The user also may provide location information associated with the bin such as cabinet, refrigerator, or another location. If the tag is associated with an item or a consumable110, the user may provide new consumable information. As an example, the user may provide item information such as a name of the item, a vendor of the item, and other information. The item or consumable may be matched with an existing, ordered, or prepared inventory item. In one example, the automated inventory tracking application112may receive information associated with the existing, ordered, or prepared inventory items using the APIs. The item may be assigned regulatory and storage information. The item may be assigned to an available bin location based on item information and compatibility. In addition, the item may be placed in an assigned bin. In another example, the tag may be recognized or known506. The user may view information associated with a bin, a user, or consumable. The user may be able to modify/reassign the bin, user, or consumable. In addition, the user may be able to indicate that the consumable is depleted and may delete the bin, user, or consumable from the system. Authorization to modify or delete users, bins, or consumables from the system may be defined by user roles. FIG.6is another flowchart of a method of creating a new consumable600according to an example of the instant disclosure. As shown inFIG.6, a user may scan a tag that may be a new NFC/UHF tag in602. First, in604, the user may provide item information including name information and vendor information. The user also may use the client computing device106to obtain one or more images of the item. The user may search for the item by providing search information and the user may use the one or more images to obtain label information. 
The client computing device106and/or the server computing device104may perform image processing to capture and interpret the data from the label that may be based on machine learning. Next, in606, the user may match the item with an existing, ordered, or prepared inventory item. As an example, if the item is an existing or in-use item, the automated inventory tracking application112may use the one or more images to determine a level of liquid or solid in the container. If the item is a liquid, the automated inventory tracking application112may retrieve or report a density or may determine a density by estimating the density based on the information known about the item. If the item is prepared, information may be obtained from a laboratory notebook software application or system. If the item is ordered, information may be obtained from a procurement enterprise resource planning (ERP) system. A starting amount of the item may be stored in the database114. In addition, a tare weight may be stored in the database114if known. Next, in608, item regulatory and storage information may be obtained. An associated safety data sheet (SDS) may be obtained and uploaded to the server computing device104. The item may be assigned to safety and toxicity categories. In addition, storage and handling information may be determined. As an example, the item may have to be stored in a freezer or a glove box. Next, in610, the item may be assigned to available bin locations based on item information and compatibility. The item may be placed into a particular safety/compliance category and may be stored in a flammable cabinet, e.g., a cabinet having items that may be flammable. The item may have particular storage conditions and may have to be stored in a refrigerator. In addition, the item may have particular bin compatibility. Acids may have to be separated from bases. Next, in612, the item may be placed in an assigned bin. 
An identifier may be detected by the bin 102 and by the automated inventory tracking system 100. A bin weight change may be detected. A tare weight may be determined from a known/estimated amount and a gross weight change that may be determined. The tare weight may be stored in the database 114. In one example, if a user places a particular item into a bin 102 that has items that are not compatible with the particular item, the bin may detect that the particular item is in an incorrect location. The bin may send information associated with the particular item to the server computing device 104, and the server computing device 104 may send an alert to notify one or more users. The server computing device 104 may also send an alert back to the bin microprocessor and display the alert through the bin user interface. The alert may be provided digitally through the e-paper tag display on the bin, or it may be an audio or visual alert from the bin smart device. In this example, the bin 102 may provide an alert by one or more light emitting diodes (LEDs) and/or one or more sound producing devices that may be located on or associated with the bin. In response to the one or more alerts, the one or more users may then take action to move the particular item to a correct location. In another example, if a user places a particular item into a bin 102 that the item belongs in, the bin may send information associated with the particular item to the server computing device 104, and the server computing device 104 may send an alert to notify one or more users. In another example, the bin 102 may provide an alert by one or more light emitting diodes (LEDs) and/or one or more sound producing devices that may be located on or associated with the bin.
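The incompatible-placement alert described above can be sketched as a simple lookup: when a tagged item lands in a bin whose safety category conflicts with the item's (e.g., an acid in a bin of bases), an alert is raised. The category names and the compatibility table below are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch of the incompatible-placement check: pairs of
# categories that must be kept apart (acids from bases, flammables
# from oxidizers) trigger an alert when combined in one bin.

INCOMPATIBLE = {("acid", "base"), ("base", "acid"),
                ("flammable", "oxidizer"), ("oxidizer", "flammable")}

def placement_alert(item_category, bin_category):
    """Return an alert string for an incompatible placement, else None."""
    if (item_category, bin_category) in INCOMPATIBLE:
        return (f"ALERT: {item_category} item placed in "
                f"{bin_category} bin - move to a compatible location")
    return None

print(placement_alert("acid", "base") is not None)  # True: alert raised
print(placement_alert("acid", "acid"))              # None: compatible
```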
FIG.7is another flowchart of a method700of removing an item or consumable110from a bin102at a first time and returning the item or consumable110to the bin102at a second time later than the first time according to an example of the instant disclosure. As shown inFIG.7, in702, an item110may be removed from a first storage area. The user may not be authorized to access items, such as items that are dangerous or Drug Enforcement Agency controlled, in a particular refrigerator or cabinet such as refrigerator/cabinet C. In one example, some users may be granted access to some storage areas but not others due to security reasons or other reasons. However, the user may be authorized to access items in refrigerator/cabinet A. The user may remove an item and this event may be recorded and stored. The event may have an associated item identifier, an associated user identifier, an associated bin identifier, a cabinet/refrigerator identifier, and a weight change. The event may be stored in the database114. One hour later, in704, the item may be returned. A new event may occur. The event may have an associated item identifier, a user identifier, a bin identifier, a cabinet/refrigerator identifier, and a weight change. A net weight may be determined. Notifications may be sent by the automated inventory tracking application112. FIG.8illustrates an example method800of determining a difference in weight of an item according to an example of the instant disclosure. Although the example method800depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the method800. In other examples, different components of an example device or system that implements the method800may perform functions at substantially the same time or in a specific sequence. 
According to some examples, the method 800 may include receiving and transmitting to a server computing device 104, by at least one processor, weight information at a first time from at least one sensor device, the weight information comprising weight information for at least one item 110 in a bin 102 at block 810. Next, according to some examples, the method 800 may include receiving, by a bin detector, a first request to access the bin 102, the first request comprising an item removal having a first event comprising an item identifier, a user identifier, a bin identifier, and the weight information at the first time at block 820. Next, according to some examples, the method 800 may include associating the first weight and item identifier into a first information payload and transmitting, by the bin detector, the first information payload to the server computing device 104, the server computing device further associating concurrent user information from a storage area detector and updating a status for an item as checked out by a detected user having the user identifier at block 830. Next, according to some examples, the method 800 may include receiving, by the bin detector, a second request to access the bin 102, the second request comprising an item return having a second event comprising the item identifier, the user identifier, and the bin identifier at block 840. Next, according to some examples, the method 800 may include receiving, by the at least one processor, the weight information at a second time from the at least one sensor device, the weight information comprising the weight information for the at least one item in the bin at block 850.
Next, according to some examples, the method800may include associating the second weight and item identifier into a second information payload and transmitting, by the bin detector, the second information payload to the server computing device104, the server computing device further associating concurrent user information from the storage area detector at block860. Next, according to some examples, the method800may include determining, by the server computing device104, a difference in the weight information at the first time and the weight information at the second time at block870. Next, according to some examples, the method800may include determining, by the server computing device104, weight information for an item110having the item identifier based on the difference in the weight information at the first time and the weight information at the second time then storing the updated weight information in a database and updating the status, amount, and bin location of the returned item in an automated inventory application at block880. In some examples, the method800may include sending, by a computing device such as the server computing device104, an alert based on the weight information for the item having the item identifier. In other examples, the method800may include sending an alert based on at least one of an expiration date of an item having the item identifier, a low level of the item having the item identifier, disposal information for the item having the item identifier, and automated reordering information for the item having the item identifier. 
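The weight-difference determination of blocks 870 and 880 can be sketched as follows: the server receives the bin weight in the check-out payload and again in the return payload and attributes the difference to the item identifier. The payload field names and numeric values below are assumptions for illustration only.

```python
# Minimal sketch of blocks 870-880: the server compares the bin weight
# recorded at item removal with the weight recorded at item return and
# attributes the difference to the item having the item identifier.

def consumed_amount(first_event, second_event):
    """Weight change for an item between removal and return, in grams.

    Each event is a payload dict carrying the bin weight at the time
    the event was recorded; both must refer to the same item.
    """
    assert first_event["item_id"] == second_event["item_id"]
    return first_event["weight_g"] - second_event["weight_g"]

checkout = {"item_id": "ITEM-42", "user_id": "U-7", "bin_id": "B-3",
            "weight_g": 512.0}   # bin weight before removal
returned = {"item_id": "ITEM-42", "user_id": "U-7", "bin_id": "B-3",
            "weight_g": 498.5}   # bin weight after return
print(consumed_amount(checkout, returned))  # 13.5 g consumed
```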
In some examples, the method800may include recognizing and sending, by the server computing device104, an alert or purchase order request based on the weight information for the item having the item identifier, or recognizing an incompatible storage, quality concern, or an incorrect storage location and sending, by the server computing device, one or more alerts to a user, or one or more alerts to the bin to alert a user via visual or audio alerts. In some examples, the method800may include determining the weight information for the item having the item identifier by performing image processing of at least one image of the item to determine a level of solid, liquid, or semi-solid substance remaining within a container and computing an estimated substance weight by applying an estimated percentage full from image processing to an initial purchase quantity from a container label. In some examples, the method800may include determining the bin identifier based on a tag assigned to the bin and determining the user identifier based on a tag assigned to the user. As an example, the at least one detector may be at least one cabinet detector to determine the bin identifier and determine the user identifier and at least one bin detector to determine the item identifier. In some examples, the method800may include determining the item identifier based on a tag assigned to the item. As an example, the tag may be a radio frequency identification (RFID) tag. The RFID tag may be either a HF tag, e.g., an NFC tag, and/or an UHF tag. In some examples, the method800may include determining the bin identifier based on a tag assigned to the bin. As another example, the method800may include determining the user identifier based on a tag assigned to the user. 
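The image-based estimate described above, applying an estimated percentage full from image processing to the initial purchase quantity from the container label, can be sketched as a one-line computation. This is a simplified illustration: it ignores container tare and density details, and the function name and numbers are invented.

```python
# Sketch of the image-based remaining-amount estimate: image processing
# yields an estimated fraction full for the container, which is applied
# to the initial purchase quantity from the label.

def estimated_remaining_weight(fraction_full, initial_quantity_g):
    """Estimate remaining substance weight (grams) from a fill estimate."""
    if not 0.0 <= fraction_full <= 1.0:
        raise ValueError("fraction_full must be in [0, 1]")
    return fraction_full * initial_quantity_g

# A bottle judged ~40% full that was purchased as 500 g:
print(estimated_remaining_weight(0.40, 500.0))  # 200.0 g estimated
```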
As another example, the method800may include scanning, by a computing device, a tag, and assigning a unique identifier associated with the tag to one of a user, a bin, and an item, and transmitting, by the computing device, the unique identifier to a server computing device104. As another example, the method800may include obtaining, by at least one imaging device, a label of the item and performing, by the computing device, image processing to determine a name, chemical identifier number, product number, supplier, or other information about the item including an initial amount of a purchased item. As another example, the method800may include displaying information about the bin on an e-paper display of the bin102. FIG.9shows a graphical user interface (GUI)900of the automated inventory tracking application112associated with the system according to an example of the instant disclosure. As shown inFIG.9, the GUI900may include a portal. The portal may allow users to manage inventory of consumables as well as manage devices associated with the automated inventory tracking system100. The devices may include the one or more bins102, computing devices, and analytical lab balances and scales. The portal may provide a conceptual layout of laboratory storage by defining cabinets, workbenches, glove boxes, refrigerator units, and other storage spaces that may be in a laboratory. Each defined storage space may be assigned one or more devices that may be used to organize and locate consumables. A device may broadcast status information to the automated inventory tracking application112including battery life, connectivity information, consumable tracking information, device firmware information, and other information. 
FIG.10shows an example of computing system1000, which can be for example any computing device making up the computing device such as the client computing device106, the server computing device104, or any component thereof in which the components of the system are in communication with each other using connection1005. Connection1005can be a physical connection via a bus, or a direct connection into processor1010, such as in a chipset architecture. Connection1005can also be a virtual connection, networked connection, or logical connection. In some embodiments, computing system1000is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple data centers, a peer network, etc. In some embodiments, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some embodiments, the components can be physical or virtual devices. Example system1000includes at least one processing unit (CPU or processor)1010and connection1005that couples various system components including system memory1015, such as read-only memory (ROM)1020and random access memory (RAM)1025to processor1010. Computing system1000can include a cache of high-speed memory1012connected directly with, in close proximity to, or integrated as part of processor1010. Processor1010can include any general purpose processor and a hardware service or software service, such as services1032,1034, and1036stored in storage device1030, configured to control processor1010as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor1010may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. 
To enable user interaction, computing system1000includes an input device1045, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system1000can also include output device1035, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system1000. Computing system1000can include communications interface1040, which can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device1030can be a non-volatile memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read-only memory (ROM), and/or some combination of these devices. The storage device1030can include software services, servers, services, etc., that when the code that defines such software is executed by the processor1010, it causes the system to perform a function. In some embodiments, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor1010, connection1005, output device1035, etc., to carry out the function. 
For clarity of explanation, in some instances, the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Any of the steps, operations, functions, or processes described herein may be performed or implemented by a combination of hardware and software services or services, alone or in combination with other devices. In some embodiments, a service can be software that resides in memory of a client device and/or one or more servers of a content management system and perform one or more functions when a processor executes the software associated with the service. In some embodiments, a service is a program or a collection of programs that carry out a specific function. In some embodiments, a service can be considered a server. The memory can be a non-transitory computer-readable medium. In some embodiments, the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The executable computer instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. 
Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, solid-state memory devices, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include servers, laptops, smartphones, small form factor personal computers, personal digital assistants, and so on. The functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. 
Illustrative examples of the disclosure include: Aspect 1: A system comprising: a bin detector comprising an antenna, a bin configured to accommodate at least one item and having a bin unique identifier, at least one processor to receive a weight of the at least one item from at least one sensor in communication with a scale, and at least one battery to power the system, the bin detector to transmit a first signal, receive a second signal in response to the first signal for each of the at least one item and determine an item unique identifier for each of the at least one item, and transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to the at least one processor to transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to a server computing device using a communication network. Aspect 2: The system of Aspect 1, further comprising at least one display device to display information associated with the bin and electromagnetic field (EMF) shielding to prevent electromagnetic noise. Aspect 3: The system of Aspects 1 and 2, wherein the display device comprises an e-paper display. Aspect 4: The system of Aspects 1 to 3, wherein the bin comprises a first detachable storage area covered in walled EMF shielding to accommodate the at least one item and a second component housing to house the at least one processor, the at least one sensor, the at least one battery, and the antenna. Aspect 5: The system of Aspects 1 to 4, wherein the second component housing receives the first detachable storage area on a top of the second component housing. Aspect 6: The system of Aspects 1 to 5, wherein the weight comprises a first weight and the antenna determines removal of an item having a tag with a particular item unique identifier. 
Aspect 7: The system of Aspects 1 to 6, wherein the antenna determines return of the item having the tag with the particular item unique identifier, a second weight is determined by the at least one sensor, and a change in weight for the item having the tag with the particular item unique identifier is determined based on a difference between the first weight and the second weight. Aspect 8: The system of Aspects 1 to 7, wherein the antenna receives a response from a radio frequency (RFID) tag on each item in the bin to determine the item unique identifier for each item. Aspect 9: The system of Aspects 1 to 8, wherein the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information are sent as a payload to the server computing device. Aspect 10: A method comprising receiving and transmitting, by at least one processor, to a server computing device, weight information at a first time from at least one sensor device, the weight information comprising weight information for at least one item in a bin, receiving, by a bin detector, a first request to access the bin, the first request comprising an item removal having a first event comprising an item identifier, a user identifier, a bin identifier, and the weight information at the first time, associating, by the at least one processor, the first weight and the item identifier into a first information payload and transmitting, by the bin detector, of the first information payload to the server computing device, the server computing device further associating concurrent user information from a storage area detector and updating a status for an item as checked out by a detected user having the user identifier, receiving, by the bin detector, a second request to access the bin, the second request comprising an item return having a second event comprising the item identifier, the user identifier, and the bin identifier, receiving, by the at least one processor, the weight 
information a second time from the at least one sensor device, the weight information comprising the weight information for the at least one item in the bin, associating, by the at least one processor, the second weight and the item identifier into a second information payload and transmitting, by the bin detector, the second information payload to the server computing device, the server computing device further associating concurrent user information from the storage area detector, determining, by the server computing device, a difference in the weight information at the first time and the weight information at the second time, and determining, by the server computing device, weight information for an item having the item identifier based on the difference in the weight information at the first time and the weight information at the second time and storing updated weight information in a database and updating a status, amount, and bin location of the item in an automated inventory application. Aspect 11: The method of Aspect 10, further comprising recognizing and sending, by the server computing device, an alert or purchase order request based on the weight information for the item having the item identifier, or recognizing an incompatible storage, quality concern, or an incorrect storage location and sending, by the server computing device, one or more alerts to a user, or one or more alerts to the bin to alert a user via visual or audio alerts. Aspect 12: The method of Aspects 10 and 11, further comprising determining the item identifier based on a tag assigned to the item. Aspect 13: The method of Aspects 10 to 12, wherein the tag comprises a radio frequency identification (RFID) tag. Aspect 14: The method of Aspects 10 to 13, wherein the tag comprises at least one of a high frequency RFID tag and an ultra-high frequency tag. 
Aspect 15: The method of Aspects 10 to 14, further comprising determining the weight information for the item having the item identifier by performing image processing of at least one image of the item to determine a level of solid, liquid, or semi-solid substance remaining within a container and computing an estimated substance weight by applying an estimated percentage full from image processing to an initial purchase quantity from a container label. Aspect 16: The method of Aspects 10 to 15, further comprising determining the bin identifier based on a tag assigned to the bin and determining the user identifier based on a tag assigned to the user. Aspect 17: The method of Aspects 10 to 16, further comprising displaying information about the bin on an e-paper display of the bin. Aspect 18: The method of Aspects 10 to 17, further comprising scanning, by a computing device, a tag, and assigning a unique identifier associated with the tag to one of a user, a bin, and an item, and transmitting, by the computing device, the unique identifier to the server computing device. Aspect 19: The method of Aspects 10 to 18, further comprising obtaining, by at least one imaging device, a label of the item and performing, by the computing device, image processing to determine a name, chemical identifier number, product number, and supplier about the item including an initial amount of a purchased item. 
Aspect 20: A storage container comprising a first removable component comprising: a bin configured to accommodate at least one item and having a bin unique identifier and electromagnetic field (EMF) shielding to prevent electromagnetic noise, and a second component housed underneath the first removable component comprising: a bin detector comprising an antenna, at least one processor to receive a weight of the at least one item from at least one sensor in communication with a scale, and at least one battery to power the storage container, the bin detector to transmit a first signal, receive a second signal in response to the first signal for each of the at least one item and determine an item unique identifier for each of the at least one item, and transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to the at least one processor to transmit the bin unique identifier, the item unique identifier for each of the at least one item, and the weight information to a server computing device using a communication network. | 69,639 |
11861445

DETAILED DESCRIPTION

Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps. The present disclosure relates to an electromagnetic coupler arrangement comprising a self-adaptive array of coupling elements for enabling a selective and highly efficient coupling of electromagnetic energy into planar metallic traces of arbitrary shape, in particular, for encoding RFID inlays. The non-limiting embodiments disclosed are particularly suitable for encoding inlays which are provided on a medium, such as in an RFID printer/encoder. In non-limiting embodiments of the electromagnetic coupler arrangement, an interrogator (sometimes also called an "RFID reader") is connected via a balun to a transmission line, thereby transforming the transmission line into a differential transmission line having two portions being fed with an output of the interrogator with equal amplitudes and phases shifted by 180°. A sequential array of coupling elements is shunted onto the differential transmission line. Between the connections of the individual elements of the array to the transmission line, phase compensation is provided so as to obtain a phase compensated differential transmission line. Between the phase compensated differential transmission line and each coupling element, a plurality of switchable resistor elements are provided so as to switch the resistance of the electric connection between the differential transmission line and the respective coupling element between a first (higher) and a second (smaller) value by operating a respective controllable switch.
Further, an output terminal of the plurality of switchable resistor elements is connected with a harvester element that is further connectable to the respective coupling element. Thereby, the harvester element collects energy that is fed from the phase compensated differential transmission line to the respective coupling element. The harvester element collects energy when the device is powered and thereby converts RF power fed into the system into a DC voltage. The harvester element is coupled via a feedback loop to the respective controllable switch of the respective plurality of switchable resistor elements. A harvester element (“power harvester”) and the operation thereof is as such well-known in the art. A detailed description thereof is therefore herein omitted. For further explanations, reference is made, for instance, to the article “Self-Reconfigurable RFID Reader Antenna”, by Pavel Nikitin, IEEE RFID Conference, May 9-11, 2017 (cf. also patent publication U.S. Pat. No. 10,096,898 B2). The goal to be achieved with the above described structure is to selectively activate communication only between a part of the plurality of couplers forming the array and an inlay (transponder) present in the vicinity of the array. More specifically, only one (or several) of the coupling elements are to be activated which allow for a highly efficient communication with the particular inlay (transponder) having a specific geometry. For this purpose, the arrangement is initially set into a sensing state for sensing which of the couplers of the array provide for the most efficient coupling and are therefore to be activated. In accordance with the sensing result, the desired couplers are automatically activated (by switching the controllable switch so as to set the resistance between the phase compensated differential transmission line and the coupling element to the reduced (second) value). 
The resistance values of the remaining couplers are still switched to the high (first) value, so that only minimal interaction with the transponder is possible. In accordance with non-limiting embodiments, the sensing state is established as follows. At the beginning, when the interrogator is switched on, all resistors of the array are switched to the first (high) value. The interrogator then powers the system by feeding an (unmodulated electromagnetic) wave towards the differential transmission line. While being fed through a path with the first resistance value, the DC voltage established at each of the harvester elements gradually increases. However, the switches are not triggered by the increasing DC voltages. This can, for instance, be achieved by including an inverting operational amplifier into the feedback loop so that the switching state at the time of an increasing voltage remains the same as in a non-powered state at the beginning, i.e. being switched to the first value. As soon as a transponder (inlay) is present in the vicinity of the coupling elements, an interaction between the coupling elements and the transponder begins and leads to a mismatch and thus to an impedance change in the respective element of the array. This affects the DC voltage at the harvesting element. In particular, the higher the amount (efficiency) of interaction between a particular coupling element and the transponder, the lower the DC voltage at the harvester becomes. Specifically, in case of a certain strength of interaction, the DC voltage falls below a predetermined threshold (“second threshold”). This means that the respective switch is powered through the feedback loop and the resistance value of the respective element of the array is switched to the second (lower) value, with the result that the amount of energy powering the respective coupler element of the array is raised. 
In other words, the respective coupling element is activated due to the highly efficient interaction between the transponder and the coupling element, which leads to a decrease of the DC voltage at the harvester below the second threshold. It is an important property of the arrangement according to the present disclosure that in the above described sensing state the individual coupling elements are well isolated. Due to the isolation in the sensing state, destructive interference when all coupling elements are active is avoided. Further details of the operation will be described below, in accordance with individual embodiments of the arrangement of the switchable resistors. In particular, in the activated state, the DC voltage may be caused to rise again so as to exceed the higher first threshold, with the result that the respective feeding path is again deactivated, by switching the resistance to the first value. Preferably, the arrangement is constituted so that for each coupling element, in the presence of a particular inlay, a certain switching state is reached so as to remain stable. However, generally speaking, it may be said that only one or a few of the coupler elements of the array (those which have a sufficient amount of interaction to have the voltage decrease below the second threshold) are automatically switched into the active state with low value resistors, whereas the others will be set inactive by having the respective switches powered off. In this state, the actual communication with the inlay is performed by feeding the system with a modulated signal, in order to write (or read). The other couplers, for which the resistances of the respective feeding paths are set to the first value, will not be able to perform communications with sufficient efficiency, so that the communication concentrates on those coupler elements which are “activated”.
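The sensing behavior described above can be illustrated with a small numeric sketch. This is a hypothetical model, not taken from the disclosure: the linear relation between coupling efficiency and harvester DC voltage, and the particular voltage and threshold values, are illustrative assumptions. Only the qualitative rule from the text is kept: stronger interaction with the inlay lowers the harvester DC voltage, and a voltage below the second (lower) threshold activates the element.

```python
# Hypothetical sketch of the self-adaptive sensing state described above.
# Model assumption (not from the disclosure): the harvester DC voltage falls
# linearly with the coupling efficiency between an element and the inlay.

def harvester_voltage(coupling_efficiency, v_max=3.0):
    """DC voltage at the harvester: high with no interaction, lower as the
    inlay couples more strongly to this element (illustrative model)."""
    return v_max * (1.0 - coupling_efficiency)

def sense_array(coupling_efficiencies, second_threshold=1.0):
    """Activation mask: an element switches to the low resistance R2 (active)
    only when its harvester DC voltage drops below the second threshold."""
    return [harvester_voltage(c) < second_threshold for c in coupling_efficiencies]

# An inlay sitting over elements 2 and 3 of an eight-element array:
efficiencies = [0.05, 0.10, 0.80, 0.75, 0.15, 0.05, 0.02, 0.01]
mask = sense_array(efficiencies)
print(mask)  # only the strongly coupled elements are activated
```

With these assumed numbers, only the two elements directly under the inlay cross the threshold; the rest of the array stays in the isolated (high-resistance) default state, matching the behavior described in the text.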
By thus providing a self-adaptive array of coupling elements, highly efficient communication with transponders (tags, inlays) of various shapes and geometry is flexibly enabled without the need to know or determine an exact location of an individual transponder. In the following, further details of non-limiting embodiments will be described with reference to drawings. FIG.1is a block diagram providing a system overview of an electromagnetic coupler arrangement1according to a non-limiting embodiment. The electromagnetic coupler arrangement comprises an interrogator10, a balun B, and a plurality of sequentially coupled elements (Element 1, . . . Element n, . . . Element N) of a phase compensated differential transmission line PC TRL. “Sequentially coupled” means a sequential electric coupling of the individual elements to the differential transmission line, which must be distinguished from the geometric shape, which may, for instance, represent a one-dimensional or two-dimensional array. For simplification, further elements arranged between the elements explicitly shown in the drawing and arranged between the wavy lines have been omitted. Further, the drawing has been broken between the balun and Element 1. To each of the elements, a coupling element TLL is coupled via switchable resistors ISO, together with a respective harvester element H. The switches for switching the resistance value are included in box ISO. Non-limiting embodiments of the arrangement will be described in more detail below with reference toFIGS.2and3. The feedback loops3from each of the harvester elements to the respective switches are indicated with dashed lines. More specifically, the interrogator10is adapted to feed electromagnetic energy (in the form of an unmodulated wave or a modulated wave representing a signal) to power the system, which is indicated by a single line extending from the interrogator10to the balun B.
The balun splits the electromagnetic wave (signal) into two waves (signals) of equal amplitude and with a phase shift of 180°. Thereby, the balun B converts the transmission line extending from the interrogator into a differential transmission line. This is indicated by the two parallel lines forming the differential transmission line, onto which the coupling elements TLL and the harvesters H are shunted via the switchable resistors ISO. Between each of two elements of the differential transmission line, phase compensation for the respective electrical length is provided, which converts the differential transmission line in a phase compensated differential transmission line PC TRL. This is indicated by the boxes labeled “PC TRL” in the drawing. It is further noted that in the preferably used copper transmission lines, high-frequency (radio frequency RF or ultra-high-frequency UHF) losses are quite small. According to a non-limiting embodiment a frequency range of operation is about 860 to 960 MHz (Megahertz). In the drawing ofFIG.1, the phase compensated differential transmission line is terminated behind element N (on the right-hand side of the drawing) by means of a terminating impedance Zterm. The terminating impedance serves for compensating mismatches along the differential transmission line. The presence of the termination is, however, optional, and, depending on the system parameters, an un-terminated differential transmission line is within the scope of the present disclosure. For instance, in an exemplary case of 32 switching elements sequentially coupled to the single balun B, a good matching may be achieved without providing the terminating impedance. Generally speaking, the switchable resistors ISO force the coupling elements to alternatively be connected more strongly or more weakly to the RF path of the PC TRL. The inlay transponder impedance acts as a load when the inlay interacts electromagnetically with the coupling element. 
In the absence of an interacting inlay, the resistance of the coupling element is high. During the presence of an inlay (not shown in the drawing) interacting with the coupling element, when the coupling element is strongly connected the coupling element is in a loaded state, and when the coupling element is weakly connected the coupling element is in an isolated state. The isolated state is the default state. A trigger to change from the isolated state to the loaded state is achieved by electromagnetic interaction between the inlay, when it is present, and the respective coupling element. If a coupling element is sufficiently close to the inlay, there will be strong electromagnetic interaction, and the coupling element will trigger. The trigger locks the respective coupling element(s) to the loaded state as long as the inlay remains present. The internal harvester-based control feedback loop3cannot be externally interfered with and whether or not a coupling element is triggered is not monitored outside the coupling element. Regarding the geometric dimensions, it is noted that the label size of an individual label is generally smaller than the geometric size of the self-adaptive array of coupling elements. In other words, the processing for activating only a single one or a portion of the coupling elements forming the array (sub-array activation) is foreseen for short pitch applications. The size of the array (number of coupling elements in the coupler arrangement) is generally not limited. Typical values may be 8, 16 or 32, but without being limited to these. Out of the coupling elements of the array, a single one or a few (such as two or three) may be activated in the self-adaptive sensing procedure for a particular inlay shape and arrangement (position and orientation). 
However, the number of activated coupling elements is not limited to the above non-limiting examples either and, moreover, cannot even be foreseen in a particular situation, because it depends on the interaction parameters of each particular inlay with each coupling element, which are not externally controlled or communicated to the outside. In the following, we shall explain the operation of the coupler arrangement in more detail and with reference to non-limiting exemplary arrangements of circuitry in the boxes labeled ISO inFIG.1, which are illustrated inFIGS.2and3. FIG.2shows an exemplary arrangement of circuitry in the box “ISO” in accordance with a non-limiting embodiment. In accordance therewith, the electric input path splits into two parallel paths characterized by having different resistances, illustrated in the form of different resistors R1and R2. In accordance with the convention used in this specification, the resistance value of the first resistor, R1, is larger than the resistance value of the second resistor, R2. A switch2, arranged at an output side of the box, enables switching between the two parallel paths. The common output splits into two terminals for connecting the harvester on the one side and the coupling element on the other side. Operation of the switch is controlled by a signal that is provided through the feedback loop3from the harvester element H. This is indicated by the dashed line in the drawing. As a skilled person is aware, an obvious modification of the illustrated configuration is possible and does not affect the operation. In accordance with the obvious modification, the switch2is arranged not on the output side of the parallel circuit with the two resistors R1and R2but on the input side. 
It is further noted that for the connection of the harvester and the coupling element to the differential transmission line, two identical arrangements ISO as shown inFIG.1and illustrated in detail inFIG.2are connected in each section (element) of the phase compensated differential transmission line. In the following, the operation for automatically reaching a stable activation status of an individual coupling element in the presence of a particular inlay will be described. At the beginning, i.e. before feeding of energy from the interrogator10starts, the system is in a default state, wherein the resistances in all electric paths are switched to the (larger) first value, R1. Preferably, the arrangement is such that this corresponds to a non-powered state of switch2. Subsequently, the feeding of an unmodulated electromagnetic wave begins (sensing state). In this state, and as long as there is no inlay present in the vicinity of a coupling element TLL, the coupling element TLL has quite a high resistance in itself, and thus despite the switchable resistance being at the larger value R1, the harvester element H collects energy so as to establish a high-level DC voltage. This does not trigger any change in the switching state. In the preferred embodiment, wherein the feedback loop3comprises an inverting operational amplifier, a high DC voltage is converted to a low level feedback signal, so that the switch2remains non-powered. When an inlay approaches, electromagnetic interaction between the inlay and the coupling element TLL leads to a decrease in the DC voltage at the harvester element H, as generally explained above. The decrease in the DC voltage is more considerable (i.e. the resulting DC voltage is smaller) the more intense (more efficient) the electromagnetic interaction is.
In accordance therewith, if there is highly efficient interaction between the inlay of a particular type and a particular one of the coupling elements TLL of the array, the DC voltage decreases below a predetermined threshold value (second, lower, threshold value), which triggers the switch2. In the particularly preferred embodiment with the inverting operational amplifier, a low level DC voltage is converted into a high level feedback signal, so that switch2is powered. Accordingly, the respective electric path through the switchable resistance element is switched to the lower value resistance, R2. This corresponds to an activation of the respective coupling element TLL and enables highly efficient interaction of that particular coupling element with the inlay. On the other hand, if the interaction between the inlay and another coupling element TLL is less efficient (for instance, due to a different relative geometric configuration), there is less decrease in the harvester DC voltage (i.e. the DC voltage level remains higher), it does not fall below the predetermined (second) threshold value, and the switch2is not triggered. Consequently, in such a case, the resistance of the electric path between the differential transmission line PC TRL and the coupling element TLL remains large (R1) and the coupling element TLL is thus in a deactivated state, wherein no efficient interaction between the coupling element TLL and the inlay is possible. In case of an activation, as a further consequence of the small electric resistance R2, the DC voltage at the harvester H again rises. When the predetermined first (higher) threshold value is exceeded, this leads to triggering the switch2to switch back to the higher electric resistance, R1. It must be borne in mind that a situation should be avoided wherein due to the above described behavior there is a permanent switching between the two switching states of switch2, in short time periods. 
In other words, this would be an unstable situation, so that no efficient coding would be possible. Rather, the “activated” switching state should remain at least for a period of time that is sufficiently long in order to subsequently feed and process a modulated signal for coding the inlay. In order to achieve this, it is important that there is not too large a difference between the two resistance values, R1and R2. More specifically, the ratio of the resistance value of the parallel combination of the TLL and the harvester to the sum of the resistance values of R2and the TLL-harvester parallel combination after the switch2has been triggered (when the inlay is present) must be strictly smaller than the respective ratio of the resistance value of the TLL-harvester parallel combination to the sum of the resistance values of R1and the TLL-harvester parallel combination before the inlay is present. On the other hand, this means that even in the case of the non-activated switching status (R1), there is still some interaction possible, so that even the “not activated” coupling elements TLL may contribute to the communication. In practical implementations, possible orders of magnitude may, for instance, be R1=1000Ω (Ohm) and R2=600Ω. Of course, these particular values are purely exemplary and for illustrative purposes and the present disclosure is by no means limited to these or similar values. FIG.3shows an exemplary arrangement of circuitry in the box “ISO” in accordance with another non-limiting embodiment. In this embodiment, the electric path is split between two parallel paths having resistance values R1and R2in a similar manner as in the embodiment ofFIG.2. However, the connection of the harvester element H is different. More specifically, the harvester element H remains permanently connected to the electric path with the higher resistance, R1. 
This is achieved by arranging the switch2at a position behind the terminal to which the harvester element H is connected, in the current flow from the phase compensated differential transmission line PC TRL towards the coupling element TLL. Thereby, only the connection between the differential transmission line PC TRL and the coupling element TLL is switched, whereas the connection to the harvester element H remains unaffected by the switching. The feedback loop3is again indicated by means of a dashed line. As can be seen from the drawing, there is a short-circuited connection between the harvester element H and the coupling element TLL when the switch2is on the side of resistor R1. However, when switch2is switched towards resistor R2, the galvanic connection between harvester H and coupling element TLL is interrupted. The operation of the embodiment is generally similar to that ofFIG.2. In the same manner as described above, in the initial (non-powered) state, the switching position is towards the first (larger) resistance value, R1. Accordingly, in the sensing state, when electric energy in the form of an unmodulated wave is fed, a high-level DC voltage is established at the harvester element H, in the same manner as in the embodiment ofFIG.2. When an inlay becomes present in the vicinity of the array of coupling elements TLL, this again leads to a decrease of the level of the DC voltage, depending on the strength (efficiency) of interaction. In case of highly efficient interaction, the DC voltage level decreases below the second (lower) threshold. Up to this point in time, the operation is the same as in the embodiment ofFIG.2, because the electrical connections are the same. More specifically, as long as the switch2is in the switching position connected to the higher resistance value R1, there exists a short-circuited connection between the coupling element TLL and the harvester element H.
As a consequence, a situation of good matching (resonance condition) may be established in the circuit part comprising the (mainly inductive) coupling element and the (mainly capacitive) harvester element, without the presence of an inlay. When an inlay becomes present, the situation is still the same as in the embodiment ofFIG.2. The electrical path between the differential transmission line PC TRL and the coupling element TLL is again switched to the lower value R2, i.e. the respective coupling element TLL is activated. However, after the switch2is triggered the situation is different from the embodiment ofFIG.2. In the case of the embodiment ofFIG.3, the resonance condition is interrupted by the triggered switch2. The harvester element remains out of a resonance circuit. Consequently, no fast rise of the DC voltage at the harvester element H is to be expected, or the DC voltage may even drop further. In consequence, the activated status remains stable, even in the case of a very low second resistance value R2(one or even several orders of magnitude smaller than R1) so that from a practical point of view the electric path between the differential transmission line PC TRL and the coupling element TLL may be considered to be almost short-circuited. In case of an insufficient interaction strength (efficiency) between the inlay and the coupling element TLL, when the DC voltage level at the harvester element H does not fall below the predetermined (second) threshold value, the switch2is not triggered and the situation remains the same as in the embodiment ofFIG.2, i.e. the respective coupling element TLL is not activated. Thus, by means of the embodiment ofFIG.3, and in case of using resistance values of different magnitude (so that R2is much smaller than R1), a better selectivity and improved stability can be achieved. On the other hand, the contribution of the non-selected (not activated) coupling elements TLL to coupling is in that case negligible.
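The stability condition stated earlier for the embodiment ofFIG.2(the voltage-divider ratio with R2 and the inlay-loaded TLL-harvester combination must stay strictly below the ratio with R1 before the inlay is present) can be checked numerically. In this sketch, R1 = 1000Ω and R2 = 600Ω are the exemplary values quoted in the text, while the two values assumed for the TLL-harvester parallel combination (without and with the inlay loading it) are purely illustrative:

```python
# Voltage-divider check of the stability condition: the fraction of the fed
# signal reaching the TLL-harvester combination after triggering (series
# resistance R2, inlay-loaded) must stay strictly below the fraction before
# the inlay was present (series resistance R1, unloaded).

R1 = 1000.0  # ohms, first (higher) switchable resistance (exemplary value from the text)
R2 = 600.0   # ohms, second (lower) switchable resistance (exemplary value from the text)

def divider_fraction(series_r, load_r):
    """Fraction of the input appearing across the load in a series divider."""
    return load_r / (series_r + load_r)

z_unloaded = 500.0  # TLL-harvester parallel combination, no inlay (assumed)
z_loaded = 100.0    # same combination, loaded by the inlay (assumed)

before = divider_fraction(R1, z_unloaded)  # fraction before the inlay arrives
after = divider_fraction(R2, z_loaded)     # fraction after switch2 has triggered
print(f"before={before:.3f}, after={after:.3f}, stable={after < before}")
```

With these assumed load values the condition holds, so the activated state does not immediately push the harvester voltage back over the first threshold, which is the stability property the text requires.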
It is worth mentioning that again, two respective boxes labeled ISO having the structure illustrated inFIG.3or any functionally equivalent structure are arranged per section (element) of the phase compensated differential transmission line PC TRL in order to establish the feeding connection for the respective coupling element TLL being shunted onto the differential transmission line. The differentially fed coupling element TLL may be one of various geometric arrangements, shapes etc. Some exemplary and non-limiting embodiments will be described in more detail below with reference to the drawings ofFIGS.4to6. In accordance with preferred non-limiting embodiments, the structures as explained below, serving as coupling elements TLL in an electromagnetic coupler arrangement according to the present disclosure, are generally embedded in a multi-layer structure (using one of well-known microstrip or strip line technologies) comprising the feeding layer, a metallic ground plane layer and the top surface layer to be arranged closest to the terminated planar metallic trace (inlay to be encoded). The top surface layer comprises the coupling elements, examples of which will be explained in more detail below. It is noted that although the reference label “TLL” (for: “transmission line loop”) is generally used for the coupling elements throughout the present disclosure, this does not imply any limitation of the structure of suitable coupling elements, in principle. In typical arrangements, there is only a small distance between the top surface layer (coupling elements) and the planar metallic trace (inlay) to be coupled, in the order of about 1 mm, or between 0.5 mm and 1.5 mm, without being limited to that range. A typical size of an inlay may be in the order of 4″ (inches), i.e. approximately 100 mm (millimeters), conforming to 4″ wide media used in bar code printers. 
More compact inlay designs also exist, being in the order of 30-50 mm, which are compatible with narrower media widths. FIG.4is a simplified functional illustration of a first example of a coupling element, which will be called “Differential Transmission Line Loop” (DTLL). The coupling element is constituted of a continuous transmission line (which may, for instance, be manufactured in the microstrip technology), which is shaped so as to form an (almost) closed loop6. The open ends of the loop6constitute terminals21and22that are fed with signals7and8(illustrated in form of dashed lines), having a phase difference of 180° with respect to each other. As was explained above with reference toFIG.1, signals7and8are output by a balun, as a result of splitting an input signal into two signals of equal amplitude and inverted phase, and fed via phase compensated differential transmission line PC TRL and the shunted connection as described in more detail above with reference toFIGS.1to3. As a consequence of the differential feeding, the direction of the current flowing in the loop6, illustrated by an arrow in the loop, can be the same throughout the loop (at a given instance of time). The design of the coupling element as a continuous transmission line loop, representing a distributed field theoretic component, can be performed without the consideration of matching. On the contrary, if discrete components were to be included at intermediate locations in the loop structure, for the purpose of input matching, unnecessary iteration would be needed between field theoretic coupling optimization and component value optimization. Thus a continuous transmission line loop offers a considerable design advantage as compared to mixing distributed and discrete components inside the coupling structure. For appropriate functioning, the DTLL must be arranged so as to oppose a ground plane (not illustrated). 
In accordance with a non-limiting embodiment, all discrete components are arranged on the feeding side, or balun side of the arrangement, that is, the opposite side of the ground plane relative to the coupling element. Hence, a non-limiting embodiment facilitates firstly (and independently) optimizing the geometry of the transmission line loop and secondly, after the geometry of the transmission line loop has been optimized, performing the impedance matching on the balun side, by choosing appropriate electric components. The balun has three functional properties. The first property consists of splitting the input signal into two parts equal in magnitude. The second property consists of shifting the two parts 180° apart in phase. The third property consists of an impedance transformation from a non-differential impedance of an external feeding system interface, such as, for example, 50Ω, to a differential impedance level, as seen at the input of the loop. In other words, the balun according to a non-limiting embodiment may be regarded as including a transformer and designated as a “balun transformer” since it also fulfils the third function of impedance transformation. The impedance transformation is generally characterized by the impedance transmission ratio k. In the case relevant for a non-limiting embodiment, wherein a non-differential impedance is transformed into a differential impedance, k equals the ratio of the differential impedance value (at the input of the DTLL) to twice the non-differential impedance value (of the external feeding system interface). In case of a 50Ω external feeding system, and assuming the differential impedance level at the input of the loop to be 500Ω, the impedance transmission ratio would be k=5. Generally, the balun “sees”, at its output, a high reactive impedance (or high Q-value), which is due to the inductive character of the transmission line loop and the presence of the ground plane.
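As an aside, the k=5 worked example can be reproduced directly. A minimal sketch; the relation used here is the one consistent with the 50Ω/500Ω example given in the text:

```python
# Impedance transmission ratio of the balun transformer, using the relation
# consistent with the worked example in the text (50 ohm non-differential
# feeding interface, 500 ohm differential impedance at the loop input, k = 5).

def impedance_transmission_ratio(z_differential, z_non_differential):
    """Ratio k relating the differential impedance at the DTLL input to
    twice the non-differential impedance of the feeding interface."""
    return z_differential / (2.0 * z_non_differential)

k = impedance_transmission_ratio(z_differential=500.0, z_non_differential=50.0)
print(k)  # 5.0
```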
By including the respective electric components, said impedance is matched with the impedance on the feeding side. A high Q-value (high value of the Q-factor or quality factor) corresponds to a highly efficient inductive coupling but at a reduced bandwidth, since the Q-factor generally expresses the relation of the resonance frequency of a circuit to the bandwidth (half power bandwidth). This means that the range of frequencies where it is possible to deliver power with high efficiency is limited. Therefore, the matching is preferably made in a way that reduces the Q-value to a certain acceptable extent. This can be done, for instance, by including an internal resistor, at the output of the balun. This is possible, taking into account the potentially very high coupling factors that can be achieved between the differentially fed transmission line loop and the inductive loop of an inlay at a single frequency, where some reduction in delivered power to the transponder chip is accepted, with a still overall high coupling factor exhibited by the DTLL over the desired bandwidth. As specific examples of coupler loop geometries, a plurality of coupler loops having super elliptical shapes with different parameters are illustrated inFIG.5. Super elliptical shapes suitable for non-limiting embodiments are parametrically defined in Cartesian coordinates x and y in accordance with the equations

x = a·|cos θ|^(2/m)·sgn(cos θ)
y = b·|sin θ|^(2/n)·sgn(sin θ)

with a, b > 0, m, n ≥ 2, and θ ∈ [0, 2π]. In these equations, parameters a (length) and b (height) are of a length dimension and define the size of the super ellipse in the x- and y-dimensions, respectively (thus being a generalization of the half axes of an ordinary ellipse) while parameters n and m define the curvature, i.e. the deviation from an ordinary ellipse (n=m=2) towards a rectangular shape (for n, m>2).
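Evaluated directly, these parametric equations generate the coupler loop outlines. A hypothetical sketch using theFIG.5example values (a = 7.5 mm or 15.5 mm, b = 4.6 mm); the sampling density is an arbitrary choice:

```python
import math

# Superellipse boundary points for the coupler loop shapes of FIG. 5.
# Parameter values a, b, m, n are the ones quoted in the text; the
# sampling density is an arbitrary choice for illustration.

def superellipse_point(theta, a, b, m, n):
    """Cartesian point on the superellipse for parameter value theta."""
    x = a * abs(math.cos(theta)) ** (2.0 / m) * math.copysign(1.0, math.cos(theta))
    y = b * abs(math.sin(theta)) ** (2.0 / n) * math.copysign(1.0, math.sin(theta))
    return x, y

def superellipse(a, b, m, n, samples=360):
    return [superellipse_point(2 * math.pi * i / samples, a, b, m, n)
            for i in range(samples)]

# n = m = 2 reduces to an ordinary ellipse; n = m = 20 is nearly rectangular:
ellipse = superellipse(a=7.5, b=4.6, m=2, n=2)
boxy = superellipse(a=15.5, b=4.6, m=20, n=20)
print(ellipse[0])  # (7.5, 0.0) -- the point at theta = 0
```

Plotting the two point lists reproduces the rounded and near-rectangular outlines ofFIG.5.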
θ is the variable parameter of the parametric representation of the curve. InFIG.5, for a, parameter values of a=7.5 mm (millimeters) (upper examples) and a=15.5 mm (lower examples) have been illustrated. b has been set fixed to b=4.6 mm in all examples. For n and m, values of n=m=2 (left-hand side examples) and n=m=20 (right-hand side examples) were used. The dashed lines indicate the symmetry axes. The dual symmetry axis is common for the super elliptical shape. From an electrical perspective, due to the strongly constrained electromagnetic field, the coupler loop input at the terminals is well approximated by a function which only depends on the length and trace width of the loop, and not the shape. Thus, there are many other asymmetrical shapes not covered by this particular geometric form, which are possible candidates for efficient reactive near field coupling, and the loop is not limited to the particular geometric shape illustrated inFIG.5. As simulations show, if the length dimension of the loop (x-axis ofFIG.5) becomes so large that the transmission line loop of the coupler also covers the inlay antenna (radiator) of an RFID tag brought into close vicinity of the coupler, besides the coupling with the current loop of the inlay, also coupling with the inlay antenna becomes important. This may lead to destructive interference. FIG.6illustrates an alternative geometric shape of a coupling element. The shape of the coupling element illustrated inFIG.6differs from the DTLL shapes described before in that the actual coupling element comprises not only a single loop but has a more complex shape, wherein a transmission line16is wound several times around a fixed center20so that there is at least one winding of the transmission line between the two differentially fed terminals. This shape may be called an “elongated spiral shape”. 
More specifically, the transmission line of an elongated spiral shape used as a coupling element in the example of FIG. 6 is a continuous transmission line 16 of finite length with terminals 21 and 22 on both ends, wound around a fixed center 20 and formed into a planar shape. Each winding comprises straight portions that are arranged parallel to each other, and the terminals are located so that there is at least one winding of the transmission line in-between. The illustrated shape is thus similar to a "spiral" but differs from the actual geometric figure of a spiral (which is characterized by a distance from a center point continuously increasing with the length of the curve) in that it has (substantially) parallel "flattened", i.e. (substantially) straight, portions. As will be explained in more detail below, it is these portions in particular that contribute to an enhanced coupling efficiency. In the illustrated example, the substantially straight and substantially parallel portions are connected by curved portions. The two terminals 21 and 22 are arranged along a line that is perpendicular to the direction in which the transmission line 16 leaves the terminals 21 and 22. The size of the coupler element (TLL) is highly dependent upon the electrical length of the transmission line forming the TLL. The electrical length of the transmission line (TL) portion confined to the top surface of the multi-layer arrangement must not exceed 180° (one half wavelength), to avoid zero crossings of the current. As a reference, a straight TL with 1 mm width on an FR4 substrate having a thickness of 1 mm has a 180° electrical length corresponding to approximately 92 mm of physical length. Hence, any TLL has a surface size restriction corresponding to this TL length. It shall be noted that for an arbitrary geometrical shape of the TLL, sub-sections of the loop that are very close to each other interact electromagnetically. Hence, the actual size of the TLL may be affected by its shape.
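The 92 mm reference figure can be reproduced approximately from the guided half wavelength. The sketch below assumes an operating frequency in the UHF RFID band (915 MHz) and an effective permittivity of about 3.2 for a 1 mm wide microstrip on 1 mm FR4; neither value is stated in the text, so both are illustrative assumptions:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def half_wave_length_mm(freq_hz, eps_eff):
    """Physical length (in mm) of a transmission line whose electrical
    length is 180 degrees, i.e. half of the guided wavelength."""
    guided_wavelength_m = C0 / (freq_hz * math.sqrt(eps_eff))
    return 1e3 * guided_wavelength_m / 2.0
```

With these assumed values the result is roughly 92 mm, consistent with the reference figure above.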
For instance, a super elliptical shape may have a TL length rather close to that of the reference straight TL. On the contrary, a spiral shape has several sub-sections close to each other. For any DTLL realization, the differential input impedance corresponding to a 180° electrical length has a real value. As a consequence of the differential feeding at the terminals 21 and 22, at each instant of time the current direction in the elongated parallel portions of the transmission line pattern 16 is the same. This corresponds to a more evenly distributed current density, and thus generated magnetic field, over an extended area of the top surface layer (actually: the whole surface of the coupler) rather than only in the proximity of a single loop-shaped conductor. This enables an enhanced flexibility in achieving highly efficient coupling with RFID tags of various shapes. On the other hand, as with the transmission line loop structures described before, the field strength decreases rather quickly outside the area of the transmission line, so that the magnetic field remains confined to the area of the coupler surface only and high spatial selectivity is not affected. For the sake of completeness, it is noted that a simple spiral shape (in its strict mathematical sense, i.e. without flattened elongated portions) would be suitable as a geometry of the coupling element according to the present disclosure. However, the particular advantage of an even current distribution over the top surface, so as to achieve an enhanced flexibility regarding various RFID tag shapes, would not be achieved thereby. It is moreover emphasized that the above-described exemplary geometric arrangements are merely examples for illustrative purposes and the present disclosure is by no means limited to these examples.
Rather, any suitable differentially fed coupling element structure that a skilled person is aware of, or will become aware of in the future, is understood to be within the scope of the present disclosure. In accordance with a further non-limiting embodiment, a multilayer electromagnetic coupler arrangement for coupling electromagnetic power to an electric current loop of an RFID tag of arbitrary geometric shape, by means of reactive near field coupling, is provided. The electromagnetic coupler arrangement comprises a top surface layer forming a top surface of the electromagnetic coupler arrangement to be arranged closest to an RFID tag which the electromagnetic power is to be coupled to. The top surface layer further comprises a transmission line (transmission line loop) of a spiral shape or an elongated spiral shape for achieving the electromagnetic coupling by inductive coupling with a current loop of the RFID tag. The transmission line loop of a spiral shape or an elongated spiral shape is a continuous transmission line of finite length having a spiral shape or an elongated spiral shape with terminals on both ends. The terminals are located so that there is at least one winding of the transmission line in-between. The electromagnetic coupler arrangement further comprises a metallic ground plane layer and a feeding layer for feeding the two terminals. In accordance with non-limiting preferred embodiments of the further non-limiting embodiment, the terminals are suitable to be differentially fed by signals equal in amplitude and 180° phase shifted. Also preferably, the transmission line is a continuous transmission line of finite length of an elongated spiral shape, which is wound around a fixed center point so that each winding comprises straight portions that are arranged parallel to each other. Further preferably, the two terminals are arranged on a straight line extending perpendicularly to a direction in which the transmission line leaves the terminals.
In accordance with preferred non-limiting embodiments, the elongated spiral shape is a planar shape. It is noted that the foregoing has outlined some of the more pertinent non-limiting embodiments. It will be clear to those skilled in the art that modifications to the disclosed non-limiting embodiments can be effected without departing from the spirit and scope thereof. As such, the described non-limiting embodiments ought to be considered as merely illustrative of some of the more prominent features and applications. Other beneficial results can be realized by applying the non-limiting embodiments in a different manner or modifying them in ways known to those familiar with the art. This includes the mixing and matching of features, elements and/or functions between various non-limiting embodiments being expressly contemplated herein, so that a person of ordinary skill in the art would appreciate from this disclosure that features, elements and/or functions of one embodiment may be incorporated into another embodiment as appropriate, unless described otherwise above. Although the description is made for particular arrangements and methods, the intent and concept thereof may be suitable for and applicable to other arrangements and applications. For instance, throughout the operation of the non-limiting embodiments described herein, the case has been described in detail wherein the connection of all coupling elements is initially, i.e. when the feeding of an unmodulated electromagnetic wave to be applied in the sensing state starts, in the non-activated state, i.e. the state where the path from the phase compensated differential transmission line to the coupling element is switched to the first (high) resistance value. However, the opposite case is possible within the framework of the present disclosure as well.
In that case, the DC voltage at the harvester element quickly reaches a high level (beyond the first threshold), which may trigger the switch to switch the connection from the second (low) resistance value to the first (high) value, and then the operation proceeds as described above. For the sake of completeness, it is noted that the former alternative may not operate together with the structure as illustrated in FIG. 3. In that case, the harvester element may not sufficiently quickly trigger switching, because it is always connected to the higher resistance path. Alternatively, within the framework of the present disclosure, the sensing state may be prepared while an inlay is already present and when the initial switching state is towards the higher resistance. In that case, the DC voltage at the harvester element, after an initial rise, may drop back down for one or several coupling elements establishing strong interaction with the inlay, which triggers the switch. For a structure such as that illustrated in FIG. 2, it has been described that there may be a stability problem in that a coupling element, once activated, will become deactivated again already after a short period of time that is insufficient for a reliable encoding procedure. As a solution for avoiding this, it was proposed to choose resistance values R1 and R2 that are not too different from each other. As an alternative solution, a time delay element may be considered, which delays the switching back to the high resistance state by a time that is sufficient for the intended coupling, in particular encoding, procedure. In summary, the present disclosure relates to a wireless electromagnetic coupler arrangement for reactive near field coupling comprising a sequential array of coupling elements which are geometrically arranged one- or two-dimensionally.
By means of feeding an unmodulated wave of electromagnetic energy to each of the coupling elements and a respective associated harvester element, in an initial sensing state, an automatic selection of a single coupling element or plural coupling elements which establish a particularly strong and efficient interaction with an inlay is performed. By means of a respective feedback loop, a switchable array of resistances is used to activate the selected coupling element(s) for coupling of information to/from the loop and to deactivate the remaining coupling elements. The self-adaptive array of coupling elements according to the present disclosure is flexibly applicable for coupling to planar metallic traces (in particular: RFID inlays) of arbitrary geometric shape without the need for a specific calibration or location procedure. There is no need for scanning the inlay geometry ("inlay profiling"), nor for external control.
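The threshold-based selection and the time-delay stability measure described above can be sketched as follows; the class, function, and parameter names are illustrative assumptions, not terminology from the disclosure:

```python
def select_active_elements(harvester_voltages, threshold):
    """Indices of coupling elements whose harvested DC voltage exceeds the
    threshold, i.e. those interacting strongly with the inlay; these are
    switched to the low-resistance (activated) path, the rest deactivated."""
    return [i for i, v in enumerate(harvester_voltages) if v > threshold]

class DelayedDeactivation:
    """Keeps an element activated for at least `hold_time` after activation
    (the time-delay element proposed as an alternative stability fix)."""

    def __init__(self, hold_time):
        self.hold_time = hold_time
        self.activated_at = {}  # element index -> activation time

    def update(self, now, selected):
        for i in selected:
            self.activated_at.setdefault(i, now)
        # drop elements whose hold time has elapsed and are no longer selected
        self.activated_at = {
            i: t for i, t in self.activated_at.items()
            if i in selected or now - t < self.hold_time
        }
        return sorted(self.activated_at)
```

A coupling element that briefly loses strong interaction thus stays activated long enough for a complete encoding procedure.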
11861446 | DETAILED DESCRIPTION While example methods and apparatus are disclosed, modifications to the example methods and apparatus may not be described in detail as they may be well known to a person of ordinary skill in the art. A piece of equipment, such as, for example, material preparation and analysis equipment, may be able to handle many different types of accessories, where each accessory may need to have different parameter settings, or the same accessory may need to have different parameter settings for use with different samples. For example, if the equipment is a cutting device that uses an abrasive cutting wheel to cut samples or workpieces, the desired constant parameter may be a surface speed of the cutting wheel, i.e., the linear speed of the cutting wheel at its edge. The surface speed may vary with different samples being cut, with different types of cutting wheels, different types of abrasives, the disposition of the abrasive on the cutting wheel, whether liquid coolant is used, the type of liquid coolant used, etc. Furthermore, the initial size of the cutting wheel is needed such that the initial rotational speed of the cutting wheel can be set for the correct surface speed. Accordingly, some parameters may need to be entered or set. Disclosed methods and apparatus provide for less onerous ways to enter the various parameters, as well as reducing errors when entering the parameters. The task of keeping track of the parameters used for a given sample may also be made easier by the disclosed methods and apparatus. Traditionally, the user is required to have advanced knowledge of machine and sample properties and/or have access to such knowledge to input the correct parameters. This information may now be combined with the consumable information to ensure safe and effective operation. Accordingly, information can more easily be passed to the machine.
This allows the machine to provide greater process control and a less user-intensive interface, leading to fewer human errors and a higher likelihood of success by the end user. Setup speed may be improved as less time is needed to set machine parameters. In addition, scanning work orders, user badges, and similar items may allow for the machine to provide more accurate job costing and productivity feedback. Data backup may also be provided by transferring parameters/information from machine to machine. Data transfer may also be performed easily by sending settings via, for example, barcode, QR code, or other indicia that is electronically readable to individual devices/equipment or to multiple devices/equipment. The QR code is an example of a multi-dimensional barcode. As used herein, the term "tag" refers to a physical machine-readable label storing machine-readable data. Tags may be implemented as electronic indicia, such as single-dimensional barcodes or multi-dimensional barcodes, and/or using electromagnetic waves emitted by the physical label, such as via radio frequency communications. The data stored in a tag may be extracted (e.g., read, received, etc.) using unidirectional communications (e.g., a camera or optical code scanner reading a printed or otherwise physically affixed indicia, data broadcast by the tag) or bidirectional communications (e.g., interrogation of a communications device and a corresponding response containing the data). FIG. 1 illustrates a block diagram of an example equipment with a tag reader in accordance with aspects of this disclosure. Referring to FIG. 1, there is shown an equipment 100 including processing circuitry 110, memory 112, input/output (I/O) interface 114, and/or circuitry 116. The equipment 100 may be any equipment that may be used for different purposes for treating a material, such as, for example, material preparation and analysis.
For example, the equipment 100 may be a cutting or sectioning saw, a grinder or polisher, a microscope or other analysis tool, a hardness tester, etc. Accordingly, an example of the equipment 100 may be a cutting device that uses an abrasive cutting wheel. The equipment 100 may also be used with, for example, a cloth or grinding disc. The equipment 100 may also be used with, for example, hardness test blocks, which may be used for regular calibration of the equipment 100. When the test block is out of specification, that test block may be flagged so that it is not used. A consumable product used in conjunction with the equipment 100 is referred to herein as the consumable 130. In some equipment 100, the consumable 130 may be rotated via, for example, a spindle 102 driven by an actuator (not shown), which may be controlled by the processing circuitry 110. The processing circuitry 110 may be any type of processor or logic circuitry that is capable of executing instructions stored in a memory, including the memory 112, and/or otherwise performing logic functions based on inputs. Example processors include central processing units (CPUs), systems-on-a-chip (SOCs), field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), discrete logic, and/or any other type of controller, processor and/or, more generally, logic circuitry. The memory 112 may comprise volatile and non-volatile memory, including mass storage devices. The memory 112 may be used to store information received by or input into the equipment 100, and information processed by the processing circuitry 110. For example, the memory 112 may store equipment parameters associated with identifiers or codes that may be attached or otherwise associated with the consumable 130. The identifiers or codes may be unique (e.g., to uniquely identify a consumable, test sample, or other element) and/or non-unique (e.g., the same identifier is attached to multiple instances of the same type of consumable).
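The association between scanned codes and stored parameters might be sketched as a simple lookup table; the codes and field names below are hypothetical, not from the disclosure:

```python
# Hypothetical parameter records keyed by the code read from a consumable's tag.
PARAMETERS_BY_CODE = {
    "WHEEL-A10": {"abrasive": "alumina", "diameter_mm": 250.0, "surface_speed_mps": 35.0},
    "WHEEL-S05": {"abrasive": "silicon carbide", "diameter_mm": 300.0, "surface_speed_mps": 28.0},
}

def lookup_parameters(code):
    """Return the equipment parameters stored for a scanned code, or None
    if the code is unknown and further input is needed."""
    return PARAMETERS_BY_CODE.get(code)
```

Non-unique codes map many physical consumables of the same type to one record, while unique codes would key per-item records such as usage history.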
Additionally or alternatively, the memory 112 may store measured test data in association with identifiers or codes. For example, test conditions and/or results may be stored in the memory 112 in association with unique or non-unique codes for subsequent analysis. The I/O interface 114 is described in more detail below with reference to FIG. 2. The circuitry 116 may comprise various hardware circuitry and/or devices that may be needed for operation of the equipment 100. For example, the circuitry 116 may comprise an actuator to rotate the consumable 130. The I/O interface 114 may comprise a tag reader 120 that may be used to read indicia, such as RFID tags, NFC tags, barcodes, multi-dimensional barcodes such as, for example, QR codes, etc. The tag reader 120 is shown as being part of the I/O interface 114 for ease of explanation; however, the tag reader 120 may be an integrated part of the equipment 100 or a separate unit from the equipment 100 that is able to communicate with the equipment 100 via a wired and/or wireless communication. The wired communication may, for example, use any of the different protocols such as, for example, USB, Firewire, TCP/IP, SCSI, IDE, or other protocols that may be appropriate for the equipment 100. The wireless communication may use any of the different protocols such as, for example, Wi-Fi, Bluetooth, NFC (near field communication), or other protocols that may be appropriate for the tag reader 120. In some examples, the I/O interface 114 may be communicatively coupled to another device (e.g., a smartphone, a tablet computer, a desktop computer, a laptop computer, etc.) capable of reading the indicia and communicating the content of the indicia to the equipment 100. FIG. 2 is a block diagram of an example user interface of an equipment in accordance with aspects of this disclosure. Referring to FIG. 2, there is shown the example user interface 200 that includes an input interface 210, an output interface 220, and a transceiver 230.
The user interface 200 may also include a tag reader 240a that may be similar to the tag reader 120. The user interface 200 may be used to implement the I/O interface 114 of FIG. 1. The user interface 200 may be a part of the equipment 100. The example input interface 210 may include any type of input device, such as a keyboard, a pointing device (e.g., a mouse, a trackpad), a microphone, a camera (e.g., gesture-based input), a touchscreen, buttons that can be rotated and/or pushed, sliding knobs, and/or any other type of user input and/or output device. The example output interface 220 includes any type of video output device such as, for example, an LCD display, an LED display, etc., tactile feedback devices that may vibrate, audio output devices such as speakers, and/or any other output devices that may be used to provide information or notice. The output interface 220 may display, for example, status/commands that may be entered for the equipment 100. The example transceiver 230 communicates via wired and/or wireless communication with other electronic devices. The wired communication may, for example, use any of the different protocols such as, for example, USB, Firewire, TCP/IP, SCSI, IDE, or other protocols that may be appropriate for the equipment 100. The wireless communication may use any of the different protocols such as, for example, Wi-Fi, Bluetooth, NFC (near field communication), or other protocols that may be appropriate for the equipment 100. The transceiver 230 may be used to control and/or view the status of the equipment 100. For example, an electronic device 250 may be used to enter parameters for a cutting tool, such as, for example, the initial diameter of the cutting wheel, the desired surface speed, etc., if the equipment 100 is a cutting device with a cutting wheel. The transceiver 230 may also allow tables, instructions, etc. to be downloaded, for example, to the equipment 100.
Additionally or alternatively, the parameters may be determined by reading indicia on the cutting wheel and/or on the material under test using the tag reader 240a. The tag reader 240a may read indicia, such as RFID tags, NFC tags, barcodes, multi-dimensional barcodes such as, for example, QR codes, etc., which may be present on the consumable 130, packaging of the consumable 130, on the material under test, on the packaging of the material under test, and/or on a tag attached to the material under test. The tag reader 240a may be able to read various different types of encoded information, as well as plain text. The information read from the indicia may be transmitted to the equipment 100 to be processed and/or stored by the processing circuitry 110 and the memory 112, respectively. Therefore, the tag reader 240a may be used to scan and download tables, instructions, etc. The instructions may be displayed for the user of the equipment 100. For example, if the equipment 100 is a cutting device, the memory 112 may store parameters such as the sample material being cut, the overall part number of the material being cut, abrasive material type on the abrasive cutting wheel, size of the cutting wheel, concentration of the abrasive material on the abrasive cutting wheel, thickness of the abrasive cutting wheel, bonding agent material type, bonding agent material hardness, etc., in association with a code that would be attached or otherwise associated with (e.g., on the packaging of) a cutting wheel. The input interface 210 and/or the transceiver 230 may enable authorized changes to the data associated with the codes stored in the memory 112. For example, while a set of parameters may be stored as the default for a given code, actual operating conditions may justify changing one or more of the parameters for that code.
The tag reader 240a may be a built-in module (e.g., a built-in barcode reader, a camera for recognizing barcodes and/or multi-dimensional barcodes such as, for example, QR codes, an NFC reader, etc.) over which the indicia is scanned. Additionally or alternatively, an attached tag reader 240a may be provided with a wired connection from the tag reader 240a to, for example, the user interface 200 to enable reading of indicia that are, for example, impractical to move to the built-in module. The electronic device 250 may also display, for example, the status of the equipment 100. For example, the status may be the status that may be displayed on the output interface 220 and/or other information that may not be displayed on the output interface 220. In place of, or in addition to, the tag reader 240a, there may also be a detached tag reader 240b. The detached tag reader 240b may be similar to the tag reader 240a. The detached tag reader 240b may communicate with the transceiver 230 of the user interface 200 similarly to the electronic device 250. When wired communication is used for the detached tag reader 240b, a cord may be plugged into an appropriate socket in, for example, the user interface 200. The socket may be in any part of the equipment 100. Accordingly, the user interface 200, and its component blocks, may be logical blocks rather than physical blocks that denote a physical location. Generally, the block diagrams in FIGS. 1 and 2 may be logical blocks. FIG. 3 illustrates a block diagram of an example tag reader reading an indicia in accordance with aspects of this disclosure. Referring to FIG. 3, there are shown the equipment 100 and the tag reader 120 communicating with the I/O interface 114 via the cable 302. The tag reader 120 may be used to read, for example, the barcode 312 on the consumable 130. Additionally or alternatively, the barcode 312 may be located on the packaging, documentation, and/or any other location associated with the consumable.
In one example, the consumable 130 is a cutting wheel mounted to a sectioning saw, and the parameters may be set to rotate the cutting wheel at an appropriate rotational speed for a desired surface speed. In another example, the consumable 130 is a polishing cloth or grinding disc, which may not provide obvious visual cues to a user that the cloth or disc is reaching the end of its useful life. The processing circuitry 110 monitors the remaining useful life of the cloth or disc and indicates to the user (e.g., via the output interface 220) to change the cloth or disc when the remaining useful life is below a threshold. The processing circuitry 110 may monitor the useful life by, for example, keeping track of how long the consumable is used. The consumable use may be an estimate based on how long the consumable has been attached to the equipment 100, the time the consumable 130 has been in use, the parameters or conditions under which the consumable 130 has been used, and/or other criteria. If the consumable 130 is tagged with a non-unique code, the processing circuitry 110 may reset the usage monitor each time a tag is scanned and/or in response to another input indicating that the consumable 130 has been replaced. For example, the user may be instructed to scan the tag (or other indicia) for a fresh consumable 130 when replacing a spent consumable with the fresh consumable 130. When replacing a consumable 130 with the same type of consumable, the user may be permitted to input an indication that the same type of consumable is being used for replacement, in lieu of scanning the tag. The workpiece 320 may be identified by its indicia, such as a barcode, a multi-dimensional barcode such as, for example, a QR code, etc., attached to it. In other examples in which the consumable 130 is tagged with a unique code, the processing circuitry 110 stores the usage data for the consumable 130 in the memory 112 in association with the code.
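The usage monitoring and reset-on-scan behavior described above can be sketched as follows; the class name, time units, and warning policy are illustrative assumptions rather than details from the disclosure:

```python
class UsageMonitor:
    """Accumulates use time of a consumable and reports its state; scanning
    the tag of a fresh consumable resets the monitor, as described for
    non-uniquely coded consumables."""

    def __init__(self, life_limit_s, max_warnings=3):
        self.life_limit_s = life_limit_s
        self.max_warnings = max_warnings
        self.used_s = 0.0
        self.warnings = 0

    def add_use(self, seconds):
        self.used_s += seconds

    def check(self):
        """Return 'ok' while life remains, then 'warn' a limited number of
        times, and finally 'stop' to terminate operation."""
        if self.used_s < self.life_limit_s:
            return "ok"
        self.warnings += 1
        return "warn" if self.warnings <= self.max_warnings else "stop"

    def reset(self):
        """Called when the tag of a fresh consumable is scanned."""
        self.used_s = 0.0
        self.warnings = 0
```

For uniquely coded consumables, the accumulated `used_s` would instead be stored and restored per identifier so that tracking survives removal and re-attachment.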
Thus, if the consumable 130 is removed and then replaced onto the equipment 100, the usage data is restored for further tracking by the usage monitor. In an example, the consumable 130 is a hardness test block that may be used for calibrating a hardness tester. The test block may be identified by a corresponding tag or indicia, such as a barcode, a multi-dimensional barcode such as, for example, a QR code, etc. Based on the tag or indicia, the equipment 100 determines acceptable calibration test result ranges, acceptable test parameters for the test block, a use limit for the test block (e.g., based on indentation counts, indentation parameters, and/or an upper indentation density limit for the test block), and/or other information for monitoring the consumption of the test block. The example equipment 100 also stores monitored usage information in the memory 112 in association with the unique identifier of the test block, which may be recalled for subsequent uses of the test block and for determining whether the test block has reached the end of its operational life. The equipment 100 further compares calibration test results to permissible test results identified using the identifier. When the test results are out of this range, the equipment 100 may be recalibrated and/or flagged for maintenance. The tag reader 120 may transmit the data in the barcode 312 to the I/O interface 114, and the I/O interface 114, for example, stores the barcode data in the memory 112. The processing circuitry 110 then processes the barcode data, and stores the processed data in the memory 112. If the barcode data is determined to be a product number, the processing circuitry 110 can look for associated parameters for that product number. The associated parameters may be, for example, a diameter of the cutting wheel. The processing circuitry 110 may also determine, for example, from the product number that the type of workpiece that is to be cut may need to be identified.
Accordingly, a barcode of the workpiece may be read by the tag reader 120. If the processing circuitry 110 determines additional information is needed, the request may be output via the output interface 220. The processing circuitry 110 can now control the rotational speed of the cutting wheel for the desired surface speed with the available information. FIG. 4 is a flow diagram illustrating an example method of using equipment with a tag reader in accordance with aspects of this disclosure. Referring to FIG. 4, there is shown a flow diagram 400 with blocks 402 to 410. The example method may be implemented using machine readable instructions, which may be stored in the memory 112 and/or executed by the processing circuitry 110. The example method is described below with reference to the equipment 100 of FIGS. 1-3, where the consumable 130 is a cutting wheel. In block 402, the tag reader 120 may be used to read, for example, the barcode 312 to determine the initial parameters associated with the consumable 130. For example, for the consumable 130 (the cutting wheel), the parameters may include information such as, for example, abrasive material type on the cutting wheel, size of the cutting wheel, concentration of the abrasive material on the cutting wheel, thickness of the cutting wheel, bonding agent material type, and/or bonding agent material hardness. Additionally, further information may be read in via the tag reader 120. This information may be about the item being cut. The information may be, for example, a part number of the item being cut, material makeup of the item being cut, etc. At block 404, a surface speed to use for the consumable 130 (the cutting wheel) may be determined from the various parameters read with the tag reader 120. At block 406, the equipment 100 may determine the present rotational speed of the consumable 130 to see if it is within acceptable margins. At block 408, the present surface speed may be determined.
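The surface-speed determination and adjustment of blocks 404 to 410 can be sketched as follows (a minimal illustration; the function names and units are assumptions):

```python
import math

def surface_speed_mps(diameter_m, rpm):
    """Present surface speed: wheel circumference times rotational speed."""
    return math.pi * diameter_m * rpm / 60.0

def rpm_for_surface_speed(diameter_m, target_mps):
    """Rotational speed needed to hold a target surface speed; as the wheel
    wears and its diameter shrinks, the required rotational speed rises."""
    return target_mps * 60.0 / (math.pi * diameter_m)
```

This is why the initial wheel size read from the tag matters: the same target surface speed maps to different rotational speeds for different (and shrinking) diameters.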
This may be done, for example, by determining the cutting wheel size and, hence, its circumference, and multiplying the circumference by the rotational speed. At block 410, the rotational speed of the cutting wheel may be adjusted to bring the surface speed to the desired speed. When the surface speed is within a desired tolerance and does not need to be adjusted, the next step may be to go back to block 406. FIG. 5 is a flow diagram illustrating an example method of using equipment with a tag reader to generate equipment settings in accordance with aspects of this disclosure. Referring to FIG. 5, there is shown a flow diagram 500 with blocks 502 to 506. The example method of FIG. 5 may be implemented using machine readable instructions, which may be stored in the memory 112 and/or executed by the processing circuitry 110. The example method shown in the flow diagram 500 may use the tag reader 120 to read, for example, parameters as described with reference to FIG. 4 or encoded settings for setting up the equipment 100. The parameters or the encoded settings may be encoded as an RFID tag, an NFC tag, a barcode, a multi-dimensional barcode such as, for example, a QR code, etc. At block 502, the parameters or the encoded settings may be read by the tag reader 120 and used to set up the equipment 100. At block 504, the equipment 100 may be used to perform the desired operation. For example, if the equipment 100 is a cutting device with a cutting wheel, the equipment 100 may be set for a specific surface speed for cutting a sample. However, in other cases, the equipment 100 may not be operated, but only used to generate equipment settings. At block 506, the equipment settings may be exported. This may occur regardless of whether the equipment 100 was operated. The equipment settings may be encoded by the processing circuitry 110 and transmitted via the transceiver 230 to various devices.
For example, the equipment settings may be transmitted to other machines directly, to a printer to print barcodes or multi-dimensional barcodes such as, for example, QR codes to be read by other similar machines, or to a display of a portable device such as a cell phone, a laptop, a tablet computer, etc., so that the encoded setup information can be scanned. The equipment settings may also be sent to appropriate devices to encode an RFID tag, an NFC tag, etc. The equipment settings may also be listed in, for example, alphanumeric characters and transmitted for manual entry in other machines. This may comprise, for example, sending to a printer to be printed, or sending to a display of an electronic device to be displayed while an operator enters the settings. The electronic device may be a machine that is to be set up or a device such as, for example, a cell phone, a laptop, a personal computer, a tablet computer, etc. Other embodiments may encompass transmitting the settings to another device that then encodes the settings and prints the barcode/QR code, or encodes the settings in an RFID tag, an NFC tag, etc., for reading by, for example, the tag reader 120 of the equipment 100. Different examples may also allow different forms of inputs to be entered. For example, the tag reader 120 of the equipment 100 may read machine settings from a barcode, then additional information may be read for specific consumables and/or a workpiece (or sample). The processing circuitry 110 of the equipment 100 may then process all the input and may modify the machine settings read from the barcode as needed. FIG. 6 is a flow diagram illustrating another example method of using equipment with a tag reader in accordance with aspects of this disclosure. Referring to FIG. 6, there is shown a flow diagram 600 with blocks 602 to 608. The example method of FIG. 6 may be implemented using machine readable instructions, which may be stored in the memory 112 and/or executed by the processing circuitry 110.
At block602, the tag reader120of an equipment100may read, for example, the barcode312of the consumable130. The consumable130and its usage may be identified via the barcode312. At block604, the product300may be used in operation of the equipment100. The operation may comprise, for example, polishing a workpiece. The processing circuitry110may keep track of how long the product300is in use. At block606, when the processing circuitry110determines that the product300is reaching the end of its life, for example, by the amount of time it was in use, then at block608the processing circuitry110may output an alert via, for example, the output interface220. Then the process may return to monitoring at block604. At block608, various embodiments may, rather than continue to monitor at block604, terminate operation. This may be due to concern that continuing operation may damage the equipment100or the workpiece. Various embodiments may also allow some number of warnings before terminating operation. FIG.7is a flow diagram illustrating another example method of using equipment with a tag reader in accordance with aspects of this disclosure. Referring toFIG.7, there is shown a flow diagram700with blocks702to708. The example method ofFIG.7may be implemented using machine readable instructions, which may be stored in the memory112and/or executed by the processing circuitry110. At block702, a particular consumable (e.g., the consumable130) may be identified by reading, for example, a tag that may be attached to the consumable130. The tag may also have, for example, an acceptable calibration result range for that consumable130. At block704, the equipment100is calibrated using the consumable130, and at block706the results of the calibration are compared with the acceptable calibration result range for the consumable130. If the calibration is within the acceptable calibration result range, then the process may proceed to block702for the next consumable130and/or equipment100.
If the calibration is not within the acceptable calibration result range, then the process may proceed to block708. At block708, the consumable130and/or the equipment100may be flagged as being out of range. The flagging may be, for example, a warning on the output interface220, such as, for example, a video display. The process may then continue at block702with the next consumable130and/or equipment100to be calibrated. A database may also be updated to keep track of all the consumables. This may give a history, for example, of the consumables being calibrated. The database may be exported to characterize the consumables. The database may be exported, for example, by transmitting it using the transceiver230. The format of the data may be pre-arranged depending on the receiving entity. The database may then be used by the receiving device to analyze the consumables and their calibration history. Accordingly, it can be seen that various embodiments of the disclosure may disclose a method for handling data by a materials preparation and analysis equipment, where the method comprises receiving, by the equipment, indicia associated with a consumable used by the equipment. The method may further comprise identifying, by the equipment, one or more parameters associated with the consumable based on the indicia. The indicia may be processed by processing circuitry in the equipment to identify the one or more parameters. The one or more parameters may then be used to control the equipment. The indicia may be stored in memory. Identifying may comprise decoding information encoded in the indicia. The encoded setup information may be in one or both of: a barcode and a multi-dimensional barcode. The received information may be received from a tag reader and/or via a transceiver. One or more parameters may be encoded as encoded setup information, where the encoded setup information may be transmitted, for example, to another electronic device.
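The calibration comparison of blocks704to708, together with the history database, might be sketched as follows; this is a hypothetical illustration, and the record fields and return values are assumptions rather than anything mandated by the disclosure:

```python
def check_calibration(consumable_id: str, result: float,
                      acceptable_range: tuple, history: list) -> str:
    """Compare a calibration result to the acceptable range read from the
    consumable's tag, record the outcome, and flag out-of-range results."""
    low, high = acceptable_range
    in_range = low <= result <= high
    # the list stands in for the database that may later be exported
    history.append({"consumable": consumable_id,
                    "result": result,
                    "in_range": in_range})
    return "ok" if in_range else "flag_out_of_range"
```

Here a plain list serves as a stand-in for the exported database; a real implementation would persist and transmit the records via the transceiver.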
The encoded setup information may be one or both of: a barcode and a multi-dimensional barcode. The indicia may include an identifier of a tool and an acceptable calibration result range for the tool. The tool may be calibrated, and the calibration result may be compared to the acceptable calibration result range. When the calibration result is not in the acceptable calibration result range, the tool may be flagged as being out of calibration. For example, the tool may be flagged by displaying a warning on a video output device. Various embodiments of the disclosure may disclose a materials preparation and analysis equipment that includes processing circuitry, memory, and an input/output interface comprising a tag reader where the equipment may be configured to receive information via the input/output interface. The input/output interface may also comprise a transceiver, and the equipment may be configured to receive information via one or both of: the tag reader and the transceiver. The information may be regarding one or more of: the equipment, a consumable used by the equipment, a workpiece for the equipment, and a tool. At least a part of the information may be in an indicia in the form of, for example, one or both of: a barcode and a multi-dimensional barcode. The processing circuitry may be used to decode the encoded information. The information may be processed to be at least a portion of setup information for the equipment. Accordingly, the present methods and systems may be realized in hardware, software, and/or a combination of hardware and software. The present methods and/or systems may be realized in a centralized fashion in at least one computing system or in a distributed fashion where different elements are spread across several interconnected computing systems. Any kind of computing system or other apparatus adapted for carrying out the methods described herein is suited. 
A combination of hardware and software may include a general-purpose computing system with a specific program or other code that, when loaded and executed, controls the computing system such that it carries out the methods described herein. Another implementation may comprise one or more application-specific integrated circuits or chips designed for cutting/abrading tools. Some implementations may comprise a non-transitory machine-readable (e.g., computer readable) medium (e.g., FLASH memory, optical disk, magnetic storage disk, or the like) having stored thereon one or more lines of code executable by a machine, thereby causing the machine to perform processes as described herein. As used herein, the term “non-transitory machine-readable medium” is defined to include all types of machine readable storage media and to exclude propagating signals. As utilized herein, the terms “circuits” and “circuitry” refer to physical electronic components (i.e. hardware) and any software and/or firmware (“code”) that may configure the hardware, be executed by the hardware, and/or otherwise be associated with the hardware. As used herein, for example, a particular processor and memory may comprise a first “circuit” when executing a first set of one or more lines of code and may comprise a second “circuit” when executing a second set of one or more lines of code. As utilized herein, “and/or” means any one or more of the items in the list joined by “and/or.” As an example, “x and/or y” means any element of the three-element set {(x), (y), (x, y)}. In other words, “x and/or y” means “one or both of x and y.” As another example, “x, y, and/or z” means any element of the seven-element set {(x), (y), (z), (x, y), (x, z), (y, z), (x, y, z)}. In other words, “x, y and/or z” means “one or more of x, y and z.” As utilized herein, the term “exemplary” means serving as a non-limiting example, instance, or illustration.
As utilized herein, the terms “e.g.” and “for example” set off lists of one or more non-limiting examples, instances, or illustrations. As utilized herein, circuitry is “operable” to perform a function whenever the circuitry comprises the necessary hardware and code (if any is necessary) to perform the function, regardless of whether performance of the function is disabled or not enabled (e.g., by a user-configurable setting, factory trim, etc.). While the present method and/or system has been described with reference to certain implementations, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present method and/or system. For example, blocks and/or components of disclosed examples may be combined, divided, re-arranged, and/or otherwise modified. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present disclosure without departing from its scope. Therefore, the present method and/or system are not limited to the particular implementations disclosed. Instead, the present method and/or system will include all implementations falling within the scope of the appended claims, both literally and under the doctrine of equivalents.
11861447 | DETAILED DESCRIPTION Examples of the present disclosure can comprise devices and methods for storing and retrieving emergency information using a payment card. The disclosed technology can be a payment card designed to store emergency information on the card in addition to the payment information stored on the card. In some examples, the payment card can include a designating symbol, color, image, or other markings to notify emergency responders that the payment card can be used to access emergency information. As will become apparent, the payment card and the method for storing and retrieving the emergency information can take many forms and can be implemented using many methods and/or devices. Although certain examples of the disclosed technology are explained in detail, it is to be understood that other examples, embodiments, and implementations of the disclosed technology are contemplated. For example, although referred to in the context of payment cards (e.g., credit and debit cards) it is contemplated that the disclosed technology can be used with cards other than payment cards (e.g., government identification cards, transit cards, access cards, etc.). Accordingly, it is not intended that the disclosed technology is limited in its scope to the details of construction and arrangement of components set forth in the following description or illustrated in the drawings. The disclosed technology is capable of other embodiments and of being practiced or carried out in various ways. Such implementations and applications are contemplated within the scope of the present disclosure. The components described hereinafter as making up various elements of the disclosed technology are intended to be illustrative and not restrictive. Many suitable components that would perform the same or similar functions as the components described herein are intended to be embraced within the scope of the disclosed technology. 
Such other components not described herein can include, but are not limited to, for example, similar components that are developed after development of the presently disclosed subject matter. Referring now to the drawings, in which like numerals represent like elements, examples of the present disclosure are herein described. FIGS.1A-2Bare depictions of various possible configurations for cards that can be used to retrieve emergency information according to examples of the present disclosure.FIGS.1A-1Bare diagrams of examples of cards that include a magnetic stripe or an embedded integrated circuit chip, respectively, whileFIGS.2A-2Bare diagrams of examples of contactless cards. For simplicity, throughout this disclosure when reference is made to a card102ait should be understood that such reference can refer to a magnetic stripe card102a, a smart card102b, a contactless card202a, or a radio frequency identification (RFID) contactless card202b, and vice versa as applicable. In the example shown inFIGS.1A-2B, where the card102ais a payment card, the card102acan have, for example, an account number, a cardholder name, and an expiration date printed on the front as is common to payment cards. In some examples, the card102acan also have an identifying mark104to indicate that the card102can be used to retrieve emergency information. The emergency information can include, but is not limited to, the patient's name, age, address, emergency contact information (i.e., names, phone numbers, and addresses of designated emergency contacts), and medical information. The medical information can include, for example, the cardholder's blood type, allergies, medications, medical history information, primary care physician information, insurance information, a do not resuscitate order, and any other information as would be useful to render aid to the cardholder. In some examples, the emergency information can be stored directly on the card102a. 
In other examples, the card102acan store information, such as a link, that can be used to retrieve the emergency information from a server. FIG.1Ais a diagram of a magnetic stripe card102awith a magnetic stripe106that can be used to retrieve emergency information. The magnetic stripe106can be used to simultaneously store payment information and emergency information. In this example, the magnetic stripe106can be used to store the emergency information directly on the magnetic stripe card102aor information that can be used to retrieve the emergency information from a server. The magnetic stripe106can be configured to be read by a standard card reader and, thus, can be used both to execute payment transactions in the normal manner and to provide emergency information. FIG.1Bis a diagram of a smart card102bhaving an embedded integrated circuit chip108that can be used to retrieve emergency information. The chip108can be used to simultaneously store payment information and emergency information. As before, the chip108can be used to store the emergency information directly on the smart card102bor to store information (e.g., a link) that can be used to retrieve the emergency information from a server. The chip108can be configured to be read by a standard chip reader when physically inserted (or, “dipped”) into the chip reader. Thus, the smart card102bcan be used to conduct standard financial transactions and provide emergency information to emergency responders when necessary. The smart card102bcan also include a sensor110that is configured to restrict access to the emergency information stored on the smart card102b. The card102bcan be configured to allow communication only with user devices outputting specific communication signal patterns or frequencies, for example, as detected by the sensor110.
In yet another example, the sensor110can be a fingerprint scanner or other biometric scanner installed on the smart card102bthat is configured to allow access to the emergency information only if an authorized cardholder (i.e., the cardholder or someone he or she has authorized to access the card) provides his or her biometric data (fingerprint, facial recognition, or other biometric data) for access. In this example, an emergency responder can use the unresponsive cardholder's fingerprint to access the emergency information stored on the payment card by placing the cardholder's finger on the sensor110when the smart card102bis inserted into a card reader. As discussed, the card102a, whether having a magnetic stripe106, a chip108, or both, can be configured to store payment data as well as emergency information data. The card102acan have the payment data and emergency information data stored in the magnetic stripe106alone, the chip108alone, or in both the magnetic stripe106and the chip108. Alternatively, the card102can have emergency information stored on the magnetic stripe106and payment information stored on the chip108, or vice versa. Furthermore, although shown as having a single magnetic stripe106and a single chip108, the card102acan have more than one magnetic stripe106and more than one chip108. For cards102that have more than one magnetic stripe106or chip108installed, for example, one magnetic stripe106or chip108can store the payment information while another magnetic stripe106or chip108can store the emergency information. The magnetic stripes106or chips108can be clearly indicated by color, symbol, image, or other markings to distinguish between the magnetic stripe106or chip108that stores the emergency information and the magnetic stripe106or chip108that stores the payment information.
As before, the contactless cards202aand/or202bcan be used to store both payment information and information used to retrieve emergency information. In this example, the contactless card202aand/or202bcan be used to store the emergency information directly on the card202aand/or202bor information that can be used to retrieve the emergency information from a server. The contactless card202aand/or202bcan be any type of contactless card that is capable of storing information. In some examples, as shown inFIG.2A, the contactless card202acan have an antenna204, a sensor206, a processor208, a memory210, and first and second applications212a,212binstalled in the memory210. In other examples, as shown inFIG.2B, the contactless card202aand/or202bcan be a radio frequency identification (RFID) contactless card202bthat includes an RFID chip214. The contactless card202aand/or202bcan be configured to receive an input from a user device (e.g., a mobile device, a card reader, or other device). The input can include a request to establish communication with the contactless card202aand/or202b. The sensor206can detect the input, e.g., by detecting specific input sequences, access codes, encryption keys, etc., via the antenna204. In some examples, an application executing on a user device can communicate with the contactless card202aand/or202bafter the user device is brought sufficiently close to the contactless card202aand/or202bto enable near field communication (NFC) between the user device and the contactless card202aand/or202b. The contactless communications can involve various communication methods, such as those defined in the International Organization for Standardization's (ISO) 14443 standard. The processor208can determine the appropriate format for the input such as, for example, the NFC Data Exchange Format (NDEF) or the Europay, Mastercard, and Visa (EMV) format. 
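A toy dispatcher for this format determination might be sketched as follows; the byte prefixes used here are placeholders chosen for illustration and do not reflect complete NDEF or EMV framing:

```python
def select_application(message: bytes) -> str:
    """Route an incoming message to an application based on its apparent
    format; the prefix checks stand in for real NDEF/EMV detection."""
    if message.startswith(b"\x80"):   # placeholder for an EMV-style command class
        return "payment_application"
    if message.startswith(b"\xd1"):   # placeholder for an NDEF-style record header
        return "emergency_application"
    return "rejected"
```

The point of the sketch is only the dispatch structure: the card exposes different data depending on which format the incoming request appears to use.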
In other examples, communication between the contactless card202and the user device can involve Application Protocol Data Units (APDUs). When an application is selected, specific APDU messages are exchanged. In EMV, for example, there are various certificate exchanges and requests for signing transaction data. For RFID chip214type applications, on the other hand, the application is selected, and then “File select” and “File read” commands are sent. In response to receiving the input as detected by the sensor206, the processor208can activate a first application212astored in the memory210. As a non-limiting example, the processor208can include a state machine with various transitions governed by the outcome of authenticity tests at various states. If the received data is consistent with the EMV standard or the NDEF data standard, for example, the data will pass an appropriate authentication check and the state machine can activate the first application212a. Activating the first application212acan include initiating communication directly and/or indirectly between the first application212aand the user device. Once activated, the first application212acan communicate, via NFC, with the user device. The contactless card202aand/or202bcan be configured to store both payment data and emergency data. In some examples, the contactless card202aand/or202bcan be configured to isolate the payment data and the emergency data using a first application212aand a second application212b. Accordingly, when the first application212ais activated, a first set of data (e.g., payment data) is available for transmission and when the second application212bis activated, a second set of data (e.g., emergency information) is available for transmission. Further, the first application212amay be unable to access some, or all, of the data of the second application212b, and vice-versa.
In this way, the contactless card202aand/or202bcan be used both for facilitating payments and for providing emergency information, while also securing and separating the payment information and the emergency information. In another example, the sensor206installed on the contactless card202aand/or202bcan be configured to restrict the contactless card202aand/or202bto communicate only with authorized user devices. The contactless card202aand/or202bcan be configured to communicate only with user devices outputting specific communication sequences or frequencies as received by the antenna204and detected by the sensor206. In yet another example, the sensor206can be a fingerprint scanner installed on the contactless card202aand/or202bthat allows access to the emergency information only if the cardholder provides his or her fingerprint for access. In this example, an emergency responder can use the unresponsive cardholder's fingerprint to access the emergency information stored on the payment card by placing the cardholder's finger on the sensor206. FIG.3is a schematic diagram of an example of a system300for storing emergency information using a payment card, according to some examples of the present disclosure. The emergency information can be stored either directly on the card102aas discussed previously, or the emergency information can be stored on a remote server306. In some examples, the cardholder302can authorize a financial institution304to store the information for him or her. A cardholder302can communicate with a financial institution representative in person, over the phone, via email or fax, through postal mail, through a financial institution website, through a mobile application, or by other methods, to provide the emergency information to the financial institution304and authorize the financial institution304to store the information. The financial institution304can store the information in several different ways. 
In one example, the financial institution304can store the emergency information directly on the card102awhen the financial institution304manufactures the card102ato send to the cardholder302. In this example, the cardholder302can receive the card102awith the emergency information pre-loaded onto the card102a. The financial institution304can store the emergency information directly on the card102aby storing the emergency information, for example, on the magnetic stripe106, the chip108, the memory210, or other storage devices on the card102a. The financial institution304can additionally (or alternatively) store the emergency information on a server306. The server306can be owned and managed by the financial institution304or by a third party. In some examples, the financial institution304can store the emergency information just long enough to transfer the emergency information to be stored directly on the card102a. The financial institution304can store the emergency information on a server306, for example, after receiving the emergency information via an online portal (e.g., from a form filled out by the customer). The information provided by the customer can be stored temporarily on the server306until it is transferred to the card102aand then can be deleted. As another example, the financial institution304can store identification information, rather than emergency information, directly on the card102aand separately store the emergency information on a server306. In this example, the identification information stored on the card102acan be used to retrieve the emergency information from the server306. This arrangement has the advantage of being able to store larger amounts of data on the server306than can be stored directly on the card102a, among other things. The server306can also be configured to ensure the emergency information is accessed only by authorized individuals by requiring authentication prior to accessing the emergency information. 
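The two storage strategies just described, writing the full emergency record to the card102aversus writing only identification information to the card while keeping the record on the server306, might be sketched as follows; the identifier scheme and the dictionary standing in for the server are hypothetical:

```python
def provision_card(emergency_info: dict, mode: str, server_db: dict) -> dict:
    """Return the data to be written onto the card under one of the two
    storage strategies; server_db stands in for the remote server."""
    if mode == "on_card":
        # strategy 1: the full record lives on the card itself
        return {"emergency_info": emergency_info}
    # strategy 2: only an identifier goes onto the card; the record
    # is kept server-side and retrieved later using that identifier
    card_id = "card-%04d" % (len(server_db) + 1)  # hypothetical id scheme
    server_db[card_id] = emergency_info
    return {"identification": card_id}
```

The server-side variant illustrates the stated advantage: the record in `server_db` can be arbitrarily large, while the card carries only a short identifier.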
The emergency information stored on the card102aor on the server306can be periodically updated, either by the cardholder302or by the financial institution304. The cardholder302can be given access to update his or her emergency information on the server306, for example, via the financial institution's304website or mobile application. Alternatively, the cardholder302can bring the card102ato the financial institution304to be assisted by a financial institution representative or use a station in the financial institution304to update the information. The cardholder302, for example, can provide his or her emergency information to a financial institution representative who enters the information into the financial institution's system and updates either the card102aor the server306, depending on the chosen method of storage. Alternatively, the financial institution304can have stations located at various locations (e.g., branch offices, automated teller machines (ATMs), drug stores, grocery stores, convenience stores, etc.) where a cardholder302can bring his or her card102aand add the emergency information to the card102a. The financial institution304can have a station at its branch locations, for example, to enable a cardholder302to fill out a form with his or her emergency information and then insert a card102ainto a card reader to write the data onto the card. The station can either write the emergency information directly onto the card102aor write identification information onto the card102aand send the emergency information to a server306. As described, the server306can be configured to store the emergency information for subsequent retrieval using the identification information stored on the card102a. In many cases the emergency information will be information that a cardholder302would prefer to remain confidential and accessed by only emergency responders and only when necessary. 
To help protect the cardholder's private information, therefore, the financial institution304can erase any data stored on the card102aonce it is no longer needed. The card102acan have an application, for example, that is configured to erase any emergency information stored on the card102aafter the card102ahas expired. This can occur when the card102ais used for the first time after it has expired, for example, or after a predetermined amount of time. Similarly, when stored on a server306, the server306can erase any emergency information when the card102aexpires or after a predetermined amount of time. The financial institution304can also restrict access to the emergency information if the card102ahas expired or if the card102ahas been reported lost or stolen. FIG.4is a schematic diagram of another example of a system400for storing emergency information using a payment card, according to some examples of the present disclosure. As depicted inFIG.4, the financial institution304can send the card102ato the cardholder302and enable the cardholder302to add emergency information to the card102arather than requiring the financial institution304to add the emergency information. The cardholder302can add emergency information to the contactless card202using his or her user device402, for example, and send the data wirelessly to the contactless card202using NFC technology. In some examples, the contactless card202can be configured to allow multiple write operations so that a cardholder302can update the emergency information as frequently as desired. Alternatively, the contactless card202can be configured to allow a single write operation to enable the cardholder302to permanently add the emergency information to the contactless card202, while writing over the data or changing data (e.g., providing false data) is prevented. The cardholder302can either add emergency information or identification information to the card102a.
When identification information is used, the cardholder302can separately update his or her emergency information on a server306. The data can then be accessed using the identification information. FIG.5is a schematic diagram of an example of a method500for retrieving emergency information using a payment card, according to some examples of the present disclosure. The emergency information can be retrieved by an emergency responder502when the emergency responder502encounters a cardholder302who is unresponsive or otherwise unable to provide his or her emergency information. The emergency responder502can search the cardholder302and/or his or her belongings, for example, to find the card102ahaving an identifying mark104indicating that the card102acan be used to retrieve emergency information. The emergency responder502can then use the card102ato retrieve the emergency information using a user device504(e.g., either directly through an application or by calling a service center). The user device504can include, for example, a smartphone, a laptop computer, a tablet, a handheld card reader, or other electronic device. In some examples, the user device504can include a card reader, or be configured to communicate with a card reader, so that the emergency responder502can read information stored on the card102a. As an example, the user device504can be a device specifically designed and designated to be used only by emergency responders502and is therefore limited in use to only authorized individuals. Alternatively, the user device504can be any type of user device504(e.g., a smartphone) with an application configured to access the emergency information. To protect the privacy of the cardholder302, the application can be configured to only allow authorized emergency responders502to access the emergency information via the application. 
The emergency responder502can download an application to his or her smartphone, for example, and be required to enter his or her credentials to enable use of the application. Once logged in, the emergency responder502can then retrieve the emergency information from a card102a. If the emergency information is stored directly on the card102a, the emergency responder502can use the user device504to read the emergency information directly from the card102ausing a card reader, for example, or by using NFC technology. To protect the cardholder302from those who may want to use the emergency information for malicious purposes, the emergency responder502can be required to enter credentials, e.g., before he or she is authorized to download or use the application. The emergency responder's credentials (e.g., password, biometric data, pass code, or the like) can be verified by an application installed on the card102a, the user device504, the server306, or any combination thereof. In other examples, the emergency responder's credentials can be verified directly on the user device504using the user device's biometric authentication systems or a device access code. Alternatively, the user device504can retrieve identification information from the card102a. The user device504can then use the identification information to communicate with the server306(e.g., via a cellular network506), provide the identification information to the server306, and (when authenticated) retrieve the emergency information. The user device504can display the cardholder's emergency information (or read it aloud) to the emergency responder502. This, in turn, enables the emergency responder502to provide proper assistance. In another example, the emergency responder502can retrieve identification information, rather than the emergency information, from the card102ausing the user device504.
The user device 504 can then communicate with a server 306 via a cellular network 506, provide the identification information to the server 306 for authentication, and receive a cardholder-specific passcode from the server 306. The user device 504 can then provide the cardholder-specific passcode to the card 102a to retrieve the emergency information stored on the card 102a. In yet another example, the user device 504 can include an application that enables the emergency responder 502 to take a picture or otherwise view the card 102a with the camera of the user device 504 to retrieve the emergency information. In this example, the application on the user device 504 can use the camera of the user device 504 to obtain identification data of the cardholder from the card 102a and communicate with the server 306 via the cellular network 506 to retrieve the emergency information. The application installed on the user device 504 can be enabled to recognize the account number on the card, the card holder's name, the financial institution information, the expiration date, the card verification value (CVV), a barcode or Quick Response code (QR code), or other information visible on the card 102a. The user device 504 can communicate with the server 306 to provide the identification data retrieved from the picture of the card to the server 306. As before, the emergency responder 502 may be required to enter credentials and be authorized to view the emergency information. The server 306 can then send the emergency information to the user device 504 for display. Where the card 102a includes a barcode or QR code, the user device 504 can scan the barcode or QR code using the camera to retrieve the emergency information. In some examples, the barcode or QR code can have the emergency information stored directly in the pattern. In other examples, the barcode or QR code can store identification information in the code that the user device 504 can then use to retrieve the emergency information from a server 306.
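The two barcode/QR options just described (emergency information stored directly in the pattern, or identification information used to query a server) can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the "EMG:"/"ID:" payload prefixes and the EMERGENCY_DB stand-in for the server 306 record store are assumptions chosen for the example.

```python
# Stand-in for the server 306's record store, keyed by identification info.
# Contents are hypothetical.
EMERGENCY_DB = {
    "card-12345": "Allergy: penicillin; Contact: Jane Doe 555-0100",
}

def decode_qr_payload(payload: str, server_db: dict) -> str:
    """Return emergency information from a scanned barcode/QR payload."""
    if payload.startswith("EMG:"):
        # Case 1: emergency information stored directly in the pattern.
        return payload[len("EMG:"):]
    if payload.startswith("ID:"):
        # Case 2: the pattern holds identification information; the user
        # device uses it to retrieve the emergency information from a server.
        return server_db[payload[len("ID:"):]]
    raise ValueError("unrecognized payload")
```

Either branch ends with the same displayable string, which is why the disclosure can treat the two storage choices interchangeably from the responder's point of view.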
The emergency responder 502 can alternatively retrieve the card 102a and utilize an automated teller machine (ATM) or a credit card terminal to view the emergency information. In this example, the emergency information can be stored on a server 306 in communication with an ATM or credit card terminal. The emergency responder 502 can then insert the card 102a into the ATM or credit card terminal, enter his or her credentials, and view the emergency information directly on the ATM or credit card terminal, or be sent the information by the ATM or credit card terminal.

FIG. 6 is a flow chart of an example of a method 600 for retrieving emergency information using a payment card, according to some examples of the present disclosure. The method 600 can include an emergency responder obtaining 602 a payment card storing emergency information from a cardholder who is unresponsive or otherwise unable to provide emergency information (e.g., by searching through the cardholder's wallet or purse). As described, the payment card can have a specific identifying mark to indicate that the payment card can be used to retrieve emergency information. The emergency responder can use the user device to establish 604 a communication link between the user device and the payment card. The emergency responder can communicate with the payment card, for example, using a card reader or NFC technology. The user device can receive 606 the cardholder's identification information as well as a security challenge from the payment card through the communication link with the payment card. The user device can establish 608 a communication link with a server. The user device can provide 610 verification data and the cardholder's identification information to the server to cause the server to retrieve 612 the cardholder's specific access code. The server can reestablish 614 a communication link with the user device and provide 616 the access code to the user device.
The user device, now having retrieved the cardholder's access code from the server, can once again establish 618 a communication link with the payment card and provide 620 the access code to the payment card. The user device can retrieve 622 the cardholder's emergency information from the payment card and display 624 the emergency information on the user device. The method 600 of retrieving emergency information using a payment card as shown in FIG. 6 is offered merely as an example and can be modified in accordance with many of the previously described examples.

FIG. 7 is a flow chart of another example of a method 700 of retrieving emergency information using a payment card, according to some examples of the present disclosure. The method 700 can include an emergency responder obtaining 702 a payment card from a cardholder who is unresponsive or otherwise unable to provide emergency information (e.g., by searching through the cardholder's wallet or purse). As described, the payment card can have a specific identifying mark to indicate that the payment card can be used to retrieve emergency information. The emergency responder can retrieve 704 the cardholder identification information from the payment card using the camera installed on the user device. As an example, the emergency responder can use the camera of the user device to obtain identification information from the card by taking a picture of the card or holding the card in front of the camera. The user device can establish 706 a communication link with a server. The user device can provide 708 authentication data and the cardholder's identification information for the server to authenticate 710 the received authentication data and cardholder's identification data. The server can retrieve 712 the cardholder's emergency information and reestablish 714 a communication link with the user device to provide 716 the emergency information to the user device.
The user device can display 718 the emergency information on the user device for the emergency responder. The method of retrieving emergency information using a payment card as shown in FIG. 7 is offered merely as an example and can be modified in accordance with many of the previously described examples.

The specific configurations, devices, and the uses for various elements can be varied according to particular design specifications or constraints using a card 102a, a user device 402 (or user device 502), a financial institution 304, a server 306, a cellular network 506, a cardholder 302, an emergency responder 502, or a method 600 (or 700) according to the principles of this disclosure. Such changes are intended to be embraced within the scope of this disclosure. The presently disclosed examples, therefore, are considered in all respects to be illustrative and not restrictive. The scope of the disclosure is indicated by the appended claims, rather than the foregoing description, and all changes that come within the meaning and range of equivalents thereof are intended to be embraced therein. While certain examples of this disclosure have been described in connection with what is presently considered to be the most practical and various examples, it is to be understood that this disclosure is not to be limited to the disclosed examples, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Exemplary Use Cases

The following example use cases describe examples of particular implementations of using a payment card to retrieve emergency information. These examples are intended solely for explanatory purposes and should not be considered as limiting.
Jack, a cardholder with Central Bank, adds his emergency information to his payment card soon after he receives a new card from the bank. Jack enters his emergency information into an application on his mobile phone. The phone then transfers the emergency information to a chip on the card using NFC technology. A few days later, Lucy, an emergency responder, is called to a grocery store and finds Jack unresponsive. Lucy searches Jack's wallet and finds a payment card with an identifying mark that indicates the payment card can be used to retrieve Jack's emergency information. Lucy retrieves her mobile phone, opens a designated application, and enters her credentials to verify that she is an authorized emergency responder. Lucy then brings Jack's payment card near her mobile phone and, using NFC technology, Lucy's mobile phone wirelessly communicates with Jack's payment card to obtain identification information. Lucy's mobile phone then communicates with a remote server using a cellular network to provide the server with her credentials and Jack's identification information. The server authenticates Lucy, determines Jack's identity, and retrieves an access code for Jack's payment card. The server then provides Jack's access code to Lucy's mobile phone. Lucy's mobile phone provides Jack's access code to his payment card using NFC technology. Jack's payment card then authenticates the access code using an installed application and retrieves Jack's emergency information stored in a memory installed on the payment card. The payment card then sends the emergency information to Lucy's phone via the same wireless communication link. Lucy's mobile phone can now display Jack's emergency information, enabling Lucy to provide Jack with appropriate assistance.

In another example, James, an emergency responder, encounters Jill, a cardholder with Generic Bank, who is unresponsive.
James searches Jill's purse and finds a payment card having an identifying mark that indicates the payment card can be used to retrieve emergency information. James retrieves his mobile phone, opens a designated application, and enters his credentials indicating that he is an authorized emergency responder. James then places the payment card in front of the camera of his mobile phone and the application uses the camera to obtain identification information from Jill's payment card. James's mobile phone then communicates with a remote server to provide the server with his credentials and Jill's identification information. The server then authenticates James's credentials, determines Jill's identity, and retrieves Jill's emergency information that she had previously uploaded to the server via her financial institution's website. The server provides Jill's emergency information to James's mobile phone. James's mobile phone then displays Jill's emergency information to enable James to appropriately respond to Jill's situation.

In yet another example, Abby, an emergency responder, encounters Tom, a cardholder who is unresponsive. Abby searches Tom's wallet and finds a payment card having an identifying mark that indicates the payment card can be used to retrieve emergency information. Abby retrieves her mobile phone, opens a designated application, and enters her credentials indicating that she is an authorized emergency responder. Abby then connects a card reader to her mobile phone and inserts the payment card into the card reader. The card reader reads Tom's emergency information from the embedded integrated circuit chip installed on the card and provides the emergency information to Abby's mobile phone. Abby is able to view Tom's emergency information on her mobile phone and appropriately respond to Tom's situation.
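The Jack and Lucy use case above follows the access-code exchange of method 600: the user device reads identification information from the card, obtains a cardholder-specific access code from the server after the responder is authenticated, and presents that code back to the card to unlock the stored emergency information. A minimal sketch, assuming illustrative class, field, and credential names that are not taken from the disclosure:

```python
class Card:
    """Stand-in for the payment card's installed application and memory."""
    def __init__(self, holder_id, access_code, emergency_info):
        self.holder_id = holder_id
        self._access_code = access_code
        self._info = emergency_info

    def read_identification(self):
        return self.holder_id              # identification info sent to device

    def read_emergency_info(self, access_code):
        # The card authenticates the access code before releasing data.
        if access_code != self._access_code:
            raise PermissionError("bad access code")
        return self._info

class Server:
    """Stand-in for the remote server holding access codes."""
    def __init__(self, access_codes, authorized_responders):
        self._codes = access_codes
        self._responders = authorized_responders

    def get_access_code(self, responder_cred, holder_id):
        # Server verifies the responder's credentials first.
        if responder_cred not in self._responders:
            raise PermissionError("responder not authorized")
        return self._codes[holder_id]

def retrieve_emergency_info(card, server, responder_cred):
    """Device-side flow: card -> server -> card, then display."""
    holder_id = card.read_identification()
    code = server.get_access_code(responder_cred, holder_id)
    return card.read_emergency_info(code)
```

Note the design point this flow illustrates: neither the server alone (which holds only the access code) nor an unauthenticated reader of the card can obtain the emergency information by itself.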
11861448

DETAILED DESCRIPTION OF THE INVENTION

The present invention discloses tag technologies constructed to incorporate pulse sensing, sensor or other data generation or activation technologies that are configured to create appliance/device, garment, container, technology, network or data module relationships, processes or next step data actions. For example, a tag can be incorporated into a CPG; FMCG; product; food item; container; user; appliance/device; technology; or garment. Tag data in communication or connection with data modules can be enabled when a tag generates or activates certain data, combinations of data or thresholds, levels or amounts. Data module or store relationships and connections with an appliance/device, garment, technology, container or network can include home, retail, wholesale, manufacture, hospitality, industrial, healthcare, agricultural or food/recipe tag data generation or activation; tag signal processing and conversion to synthesized human speech or text; and monitoring, tracking, locating and reporting: product order placement and fulfilment; product freshness or projected shelf-life; ambient air conditions; body, tissue or product vital signs, movement or change; healthcare and well-being monitoring; consumer product/service engagement; authentication; merchandising and marketing; or data authentication and management, among others. As used herein tag generated or activation data or technology can include a tag configured to generate or activate data using pulse sensing; a sensor; selective user controls to manage tag data exchange or activation; or unique tag, user, data module, product, service, provider identifiers, code or executable code or references thereto, and combinations thereof. For example, in one embodiment, a tag sensor can be made from a sensing material such as a metal or other similar based material and comprise a tag antenna to generate data (“pulse sensing”).
A tag can also be configured to detect or measure the presence, level, quantity or threshold levels of a gas, volatile organic compound, chemical or stimuli (“gas” or “gases”) in a container or ambient area. A tag can detect or react to the presence of a gas with a change in a tag resistivity or signal output. This resistivity or signal output change combined with a tag, product, user or purchaser identifier can be interrogated and sent by a reader to a local or remote appliance/device, network, server or inventory management system and related data set modules to analyze and interpret the signal change and to provide next step data actions. See for example, U.S. Pat. No. 9,563,833 and U.S. Patent Application Nos. 20180290809, 20180093814, 20180249735 and 20170263100, which are incorporated herein by reference in their entirety. A tag can connect to, communicate with or affix to a user, product, container, appliance/device, garment, technology or ambient area. A tag can comprise one or multiple sensors configured to detect different or unique gases, to pulse or modulate a signal or other tag generation or activation technologies. In another embodiment, a tag with a sensor portion can be combined with a temperature and humidity sensor to communicate with an appliance/device and network. Analyzed and processed tag and sensor generated data can compare gas, temperature and humidity levels or concentrations of gas of a product or food item (e.g., meat) in a container or appliance/device storage compartment to track freshness and spoilage grades that can include high, medium, low or spoiled designations. 
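The freshness grading described above, which combines a gas reading with temperature and humidity to produce high/medium/low/spoiled designations, can be sketched as a simple threshold classifier. This is an illustrative assumption-laden sketch: the gas value stands in for the tag's resistivity or signal-output change, and every threshold number here is invented for the example, not taken from the disclosure.

```python
def freshness_grade(gas_ppm: float, temp_c: float, humidity_pct: float) -> str:
    """Map combined sensor readings to high/medium/low/spoiled grades."""
    # Warm, humid storage accelerates spoilage, so weight the gas reading
    # more heavily under those conditions (illustrative factor).
    penalty = 1.5 if (temp_c > 8 or humidity_pct > 85) else 1.0
    score = gas_ppm * penalty
    if score < 10:
        return "high"
    if score < 25:
        return "medium"
    if score < 50:
        return "low"
    return "spoiled"
```

In the disclosed system this classification would run in a data module on the appliance/device or network after the reader forwards the interrogated signal change together with the tag, product or user identifier.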
Further, a tag can be configured to detect levels of freshness or spoilage of a product; an ambient gas; element or condition; detect the presence of harmful ambient gases, contaminants or explosives; monitor, track or report body vital signs, movement or changes to a body or tissue, among others; and to provide appliance/device, garment, technology, container, user or network notifications when certain tag generation or activation data such as body vital signs or gas levels do not respond or report, fail, maintain, reach or exceed a predetermined, selected or software or AI selected profile, signature or comparative database level, threshold, amount or combination. Pulse sensing can incorporate techniques described by X. Hui and E. C. Kan, “Vital signs over multiplexed radio by near-field coherent sensing”, Nature Electronics, vol. 1, doi: 10.1038/S41928-017, January 2018 (featured in Nature Research News) (including electronic supplementary materials), which is incorporated herein by reference in its entirety. Provided is a method using electromagnetic energy or radio wave pulse sensing directed by a tag into a user's body or tissue to allow interrogation by a reader to receive the generated or pulsed data. Pulse sensing or NCS can modulate the external and internal vital signs, motion or change of a user's body or tissue (e.g., human, livestock, poultry, pets, etc.) onto multiplexed radio frequency signals by integrating a digital identification or code that transmits with a generated signal. Pulse sensing or NCS can utilize both RF signal amplitude and phase to sense and isolate a user's body or tissue vital signs, movement or change, among other applications. In one application, the high frequency component of pulse sensing or NCS signal can be used to mitigate body movement interference to collect more accurate blood pressure, heartrate or other body metrics.
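The idea of using the high-frequency component of the sensed signal to isolate heartbeat from slower respiration and body movement can be illustrated with a toy signal-processing sketch. This is not the NCS algorithm from the cited paper; it is a generic baseline-removal example under stated assumptions (a synthetic waveform with a 0.25 Hz respiration component and a 1.2 Hz heartbeat component, and an illustrative 0.8 s averaging window).

```python
import math

def synth_ncs_signal(fs=50, seconds=10, resp_hz=0.25, heart_hz=1.2):
    """Synthetic NCS-style waveform: large respiration + small heartbeat."""
    n = int(fs * seconds)
    return [math.sin(2 * math.pi * resp_hz * i / fs)
            + 0.3 * math.sin(2 * math.pi * heart_hz * i / fs)
            for i in range(n)]

def moving_average(x, w):
    """Boxcar average; returns len(x) - w + 1 samples."""
    s = sum(x[:w])
    out = [s / w]
    for i in range(w, len(x)):
        s += x[i] - x[i - w]
        out.append(s / w)
    return out

def heart_rate_bpm(samples, fs, window_s=0.8):
    """Estimate heart rate by removing the slow baseline (respiration,
    movement) and counting upward zero crossings of the fast residual."""
    w = int(window_s * fs)
    baseline = moving_average(samples, w)
    # Centered residual: the sample minus the local slow-varying average.
    resid = [samples[i + w // 2] - baseline[i] for i in range(len(baseline))]
    crossings = sum(1 for a, b in zip(resid, resid[1:]) if a < 0 <= b)
    duration = len(resid) / fs
    return crossings / duration * 60
```

For a 1.2 Hz heartbeat the estimate lands near 72 beats per minute; in the disclosed system this kind of processing would run in a data module after the reader interrogates the tag's multiplexed signal together with its unique ID.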
A unique ID corresponding to a tag provides improved discrimination of the signal from ambient interference and other tags. Pulse sensing or NCS tag application can but does not require a tag to directly contact a user's skin and can be used as a remote monitoring sensor with either active or passive radio frequency identification tags in proximity or attached to or embedded in a user; attached to a garment, appliance/device, container, technology or other attaching or supporting mechanism to hold and position a tag to an area to be monitored and interrogated by a reader to track, monitor and report in real-time a user's heart or respiration rate, blood pressure or breath effort, among other health and well-being metrics. Another embodiment provides a tag, a tag with a sensor portion or an executable code or reference to an executable code stored on a tag circuit or memory that can be interrogated and executed by a reader or appliance/device. Code in one embodiment can provide data, instructions, algorithms or software to interpret or process tag generated data for pulse sensing, sensor data such as tag resistivity or signal output changes, modulated or demodulated pulsing signals, tag data conversion to synthesized human voice or text to provide response or query actions or next step data actions. For example, tag data or code can be configured to open an application in an appliance/device to setup and operate a tag and data when activated, as discussed herein. A tag can be activated to automatically access an appliance/device application to set up user data and controls to monitor, track and report pulse sensing, sensor data generation or next step data actions which can be selected, configured and programmed by a user with tag, product, marketing, network or appliance/device suggestions or recommendations. 
Further, tag data or code can be configured so that when activated a product or service order is automatically placed; a product is placed into a virtual shopping basket; a product or marketing landing page is automatically opened with an order option; a product image is displayed on an appliance/device or television display with a purchase or browse option; and to provide the previously noted options including voice, text or converting the aforementioned actions into synthesized human speech or text for a user to engage with an appliance/device. Further, a tag sensor can be configured to control the execution of a code or code reference so that the code is executed only when a sensor is activated or combinations thereof. For example, a computer system, program or method can be configured to process tag generated or activated data, as described herein, to include an appliance/device or network comprising a processor with a computer-readable memory and readable tangible storage medium with program instructions stored on the tangible storage medium to be executed by a processor. A network computer system, program or method, as described herein, can receive the executable content or unique identifiers from a tag, appliance/device, garment, container, technology or network reader. The content can be a program code that can be executable by a reader, appliance/device, garment, technology or network and the content can be provided by a tag to a reader to cause the content to be executed. The content can be executed by retrieving a code that can include a location specified by a reference to the code. A reference can be part of the received content. The code or content can be sent to a respective appliance/device, garment, technology or network module to be executed which can include executing any of the actions or next step data actions noted herein. 
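The content-execution path described above (a reader receives either inline executable content or a reference, resolves the reference to code at a specified location, and executes it to produce a next step data action) can be sketched as a small dispatch. The registry, the "ref:" payload convention, and the action names are illustrative assumptions, not the disclosed format.

```python
# Hypothetical code store the reader/appliance resolves references against.
CODE_STORE = {
    "place_order": lambda ctx: f"order placed for {ctx['product']}",
    "show_landing": lambda ctx: f"opened landing page for {ctx['product']}",
}

def execute_tag_content(payload, ctx):
    """Execute tag content: either the content itself or a reference to it."""
    if callable(payload):
        return payload(ctx)                # content is directly executable
    if isinstance(payload, str) and payload.startswith("ref:"):
        code = CODE_STORE[payload[4:]]     # retrieve code at the location
        return code(ctx)                   # the reference specifies
    raise ValueError("unsupported tag content")
```

Gating this call on a sensor activation, as the text describes, amounts to wrapping `execute_tag_content` in a check of the sensor's state before dispatch.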
Associated or tag generated or activated data can be retrieved, analyzed or processed using an appliance/device, garment, technology or network data to perform any of the next step data actions described and discussed herein.

FIG. 1 shows a tag 9 that can be used, affixed or attached with or to a user, appliance/device, garment, container or product. A tag can be configured to incorporate data generation, sensor 11 or activation technology such as an executable code, tag identifier, unique user or product data or identifiers such as a stock keeping code (SKU), universal product code (UPC), European article number (EAN), serial or model number configured to generate or activate data for tracking, monitoring, locating, reporting, order placement, marketing, product freshness and user body vital signs and related levels or thresholds and reporting levels and any others described herein. Tag data can be interrogated by a reader system comprising a reader device with an antenna. With a passive or semi-passive tag, an antenna broadcasts an interrogation signal to a tag that responds by transmitting generated or activation data back through the sensor or antenna to the reader. Data can be transmitted to or in communication with a tag data module or processing system which can be localized in an appliance/device, garment, technology, container or network in communication with data modules or a computing appliance/device or network to store, analyze, process, convert or respond to data queries, prompts or next step data actions by using software, algorithms, code, directions, neural networks, AI and other processes or combinations to interpret data or to transmit data to a remote computing appliance/device via cloud or other network system to store, analyze, convert or interpret data and respond to user, sequenced or automated data queries. Data processing can be localized, network or data-based and configured as an internet or subscription service.
As shown in FIG. 2, a subscription based service can be structured to provide analysis, interpretation and queries, as noted herein, of generated or activated data configured in modules or next step data actions as well as providing user product goods, services or providers. Data processing, modules or next step data action systems can process received generated, activated or next step data actions depending on the intended function or use of a tag. For example, a tag processing system can be configured to provide automated or sequenced product order placement and delivery, food freshness or user vital signs for body, tissue or product data. A tag processing system can further be configured to analyze, authenticate, validate and determine tag signals, generation or activation such as pulse sensing, gas concentration levels, location or mobility monitoring, code, data and unique identifiers and unique tag signal generation with specific or designated actions, which can include any of those disclosed herein, with an appliance/device or network.

As shown in FIG. 1, a tag 9 with an antenna 10 or sensor portion 11 can receive an RF interrogation signal from a reader and broadcast an RF response to an interrogation signal with pulsing, generated, activation or next step data actions. Typically tags incorporate linear- or circular-polarized antennas that can be constructed as a series of nested conductive rectangular patterns with adjacent patterns constructed to contact a short conductive lead that can also connect to or communicate with a sensor. An embodiment in FIG. 1 shows a single 12 or double perforation line 13 or a pre-cut application 14 to facilitate separating tag components. These applications can be manufactured or constructed into or around a circuit 17, antenna 10, sensor 11 or substrate 16 to easily separate tag components along a line of perforations or other pre-cuts to disable a tag by separating the antenna, sensor or circuit from each other.
Furthermore, a pull tab 180 connected to a substrate, perforations or other material to separate the components can be incorporated to separate and disable a tag. A user can separate the antenna, sensor or circuit from each other, in any order or combination, to control, deactivate, reconfigure or disable a tag. Additionally, a tag can be configured to visually or digitally confirm the aforementioned steps or processes. A tag reader or appliance/device can be used to confirm any of the steps or processes herein and provide confirmation with an appliance/device, display or with synthesized human speech, text or an audible reader confirmation that a tag signal or generated or activated data is not sent or is no longer received. The aforementioned tag structures are constructed to deactivate a tag. A reader with associated software or processing can identify a tag as unreadable or absent and in turn provide a next step data action such as placing a product order for an absent product, converting said action into synthesized human speech or text to inform a user or appliance/device that a product is absent, an order has been placed or to ask if an order should be placed or provide another or sequenced data action.

As shown in FIG. 1, a circuit, sensor or antenna, or portions thereof, can be sandwiched between two layers of packaging material 15. A sensor can be hermetically sealed by a layer until the removable material is removed to expose the sensor. The removable material can be reapplied after removal to recover the tag, sensor or components. The layers can be manufactured so that the adhesion of the circuit, sensor or antenna to one layer of the packaging material is greater than its adhesion to the other layer, which in turn can be affixed to an appliance/device, garment, container, product or user.
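The absent-tag next step data action described above (a reader notices that an expected tag no longer responds and reacts by placing an order or prompting the user) can be sketched as a simple comparison between the tags a reader expects and those it actually interrogated. Function names and message wording are illustrative assumptions.

```python
def absent_tag_actions(expected_tags, read_tags, auto_order=True):
    """Return next step data actions for tags that failed to respond.

    expected_tags: dict mapping tag id -> product name
    read_tags: set of tag ids the reader actually interrogated
    """
    actions = []
    for tag_id, product in expected_tags.items():
        if tag_id not in read_tags:
            if auto_order:
                # Automated order placement for the absent product.
                actions.append(f"order placed: {product}")
            else:
                # Or a synthesized speech/text query to the user instead.
                actions.append(f"ask user: reorder {product}?")
    return actions
```

The `auto_order` flag mirrors the disclosure's point that next step data actions can be automatic or user-confirmed, as selected and programmed via an appliance/device, application or user account.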
Another layer or part thereof can be provided on either side of a tag including an adhesive material so that a layer or portion thereof can be removed and the tag can be affixed or attached to a user, product, container, garment or appliance/device using the exposed adhesive material. In this way, a tag can be adhesively disposed or attached to a user so that the tag and components are protected by outer layers and allow generated tag data to be interrogated through a plastic layer. Further, this application can produce a peel-off layer 16 affixed by an adhesive material or layer to a circuit, sensor or antenna, or parts thereof. A circuit, sensor or antenna can be removed or destroyed by delaminating the tag by pulling or removing the upper or lower layer of material from the tag and removing a circuit, sensor or antenna, or parts thereof. A pull tab connected to the upper or lower layer of packaging can be used to facilitate the delamination process. The tag can be configured so that only a portion of the circuit, sensor or antenna is removed; for example, the portion above a peel-off line 12. In one example, this process can leave a pair of short antenna lines or connections, contacts or stubs attached to the circuit, sensor or antenna. A pull tab can also provide a conductive material on one side contacting the tag and specific points or connectors so that when the tab is pulled it either breaks, disconnects or deactivates existing connections or, in reverse, can create new connections to change the tag resistivity, create tag connections or reception. Furthermore, a tag, components or a sensor portion can be sealed individually or together with a signal blocking material to prevent the tag from sharing data or being interrogated by a reader until the sealing material is removed. When the material is removed it allows tag data to be shared and read by a reader or can allow the tag/sensor portion to generate data and to be interrogated by a reader.
In another embodiment, a tag portion can be sealed and covered by a material (e.g., a plastic) to allow tag data such as tag or product identifiers or other unique tag information to be read by a reader through the plastic material covering the tag to enable supply chain or inventory management monitoring and tracking purposes but to prevent the sealed sensor from generating data until the material sealing the sensor is removed from the tag. In each case, a sealing material can be removed by peeling off a layer, using a pull tab, puncturing the sealing material or a coating material covering tag components can be removed as noted to activate a sensor. Tag sealing material can include foil, insulated foil, metal-on-paper sticker, or metal or signal blocking material, plastic or any other materials depending upon the tag's intended use. A supply chain tag can use a plastic covering to allow tag data to be collected and read while a sensor remains sealed, and a signal blocking material such as foil can prevent a reader from interrogating tag activation data such as an executable code or unique identifiers or data. In another embodiment, the present invention provides tag information exchange control by providing tag structures to allow a user to alter a tag to inhibit the ability of a reader to interrogate the tag. For example, see U.S. Pat. Nos. 7,277,016 and 7,253,734, which are incorporated herein by reference in their entirety. A user can selectively disable a tag circuit, memory, sensor or antenna, or combinations thereof, to prevent the exchange of tag data between the tag and an associated reader configured to result in a next step data action such as placing a product order or marketing action.
Next step data actions can cause a product or service to be automatically ordered; placed into a virtual shopping basket/list; open a product ordering, marketing or other landing page; generate a voice or text response or query to a user regarding said actions; cause a product image to be displayed on an appliance/device display or television with an order page or product purchase option; or combined functions and any other actions disclosed herein, among others, that can be configured to initiate other data or initiated action. A user can also manage, select or program next step data actions or sequences via an appliance/device, network, data module, application or user account. Another aspect of the present invention provides tag data exchange control using tag structures such as a material to cover and seal a tag circuit, tag, sensor portion or tag components, or combinations thereof, to prevent the exchange of tag or sensor data until a user selectively decides to share said tag data and next step data actions. A user can selectively remove a cover or expose a tag to allow or to cause a reader, appliance/device, garment or technology to receive the tag sensor or activation data. The cover can be reapplied as desired. Tag activation data can include, among others, executable code; a reference to an executable code; unique identifiers, codes or descriptors for technology, container, appliance/device, garments, sensors or products; pulsing or sensor data or a tag's resistivity or signal output change, or combinations thereof, among others. In one example, a tag or a tag with a sensor or activation data can incorporate a switch to connect or disconnect tag antenna reception to control tag data generation or activation or interrogation by a reader. See U.S. Pat. No. 8,844,831, which is incorporated herein by reference in its entirety. A tag with a switch can be configured to generate or activate tag data and next step data steps as disclosed herein. 
For example, in one embodiment a tag switch can be configured to be placed in a deactivated mode so that a tag is prevented from generating, activating or sharing tag data or next step data actions, and a reader is prevented from interrogating said data. A tag can be configured to be placed into an activated mode to enable a tag to generate, activate or transmit tag data or be interrogated by a reader and initiate next step data actions. A user can selectively remove a cover to expose a tag to allow tag data generation and allow a reader, appliance/device, garment, container or technology to receive the tag generated or activation data. The removable cover can be reapplied as desired. Tag activation data can include, among others, executable code; a reference to an executable code; unique identifiers, codes or descriptors for technology, container, appliance/device, garments, sensors or products; pulsing or sensor data or a tag's resistivity or signal output change, or combinations thereof, among others. A reader, however, can be configured to read the stored tag data on the circuit or memory through a plastic covering, but not the generated sensor data, until a plastic seal covering is removed to allow an exposed sensor to react to gases, generate and transmit data. The plastic seal can be reapplied to cover the sensor as desired, such as to prevent generated sensor data from being interrogated by the reader. Furthermore, a tag, as described herein, without a cover can be disposed inside an open/close or tag antenna connect/disconnect container or wrist band structure or device, for example. A tag can be disposed or embedded inside a wrist band or tag connector with an open/close aperture to expose a tag or connect/disconnect a tag antenna structure. When the band aperture is open or connected, a tag can generate and transmit data to a reader, and when the aperture is closed or the antenna disconnected, the tag is sealed or disconnected and cannot generate or transmit data.
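The sealed-sensor behavior described above can be modeled as a small state machine: the stored identifier remains readable through a plastic seal, while generated sensor data is only readable after the seal is removed, and the seal can be reapplied. This is a hypothetical sketch; the class and field names are invented for illustration.

```python
# Hypothetical model of a tag whose stored identifier is readable through a
# plastic seal while generated sensor data is blocked until the seal is
# removed. The seal can be reapplied to block sensor data again.

class SealedSensorTag:
    def __init__(self, tag_id):
        self.tag_id = tag_id
        self.sealed = True          # plastic covering in place
        self._sensor_reading = None

    def remove_seal(self):
        self.sealed = False

    def reapply_seal(self):
        self.sealed = True

    def sense(self, gas_level):
        # An exposed sensor reacts to ambient gases; a sealed one cannot.
        if not self.sealed:
            self._sensor_reading = gas_level

    def read(self):
        """Reader interrogation: stored data always, sensor data only if unsealed."""
        data = {"tag_id": self.tag_id}
        if not self.sealed and self._sensor_reading is not None:
            data["sensor"] = self._sensor_reading
        return data
```

The same structure could stand in for the antenna connect/disconnect switch, with `sealed` replaced by a disconnected-antenna flag.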
The container/band can be constructed with materials to prevent the exchange of tag generated or activation data when the container/band is closed, such as a foil, by connecting/disconnecting a tag antenna from the circuit or memory using a slide mechanism or other mechanisms as discussed in the tag switch embodiment, or by using other non-readable plastics or materials or readable materials depending upon the intended use purposes of a tag. For example, a combination of apertures can be used containing unique data generation or activation data together with a combination of sealing materials. When a container/band is opened or its tag antenna connected, a tag can generate or activate and transmit to a reader or an appliance/device and provide next step data actions. When closed or disconnected, tag data is prevented from being generated, activated or transmitted. This application can be used for any appliance/device or network to selectively allow a user to control a tag, data generation, activation or data transmittal and next step data actions. This application can be used as a digital key to a home, car, door or entrance, payment or any other smart home or digital control device for a smart home appliance/device. As noted, all next step data actions can be selected, programmed or determined by a user or software or AI with an appliance/device, network, application or user account. Another embodiment provides a tag with a detached antenna with a tag circuit containing a unique combination of user, appliance/device, container, food, technology or product identifiers, descriptors, executable code or reference to executable code. When the tag antenna is detached or disconnected, tag data cannot be generated, activated, shared or sent to a reader. When an antenna is attached or connected to or placed into contact with a tag circuit or memory, tag data can be generated, activated and read by a reader to provide next step data actions.
Another embodiment provides an alterable tag with a substrate or material sealing a tag and components that can include a sensor or antenna structure extending in different directions from the circuit structure. Notches, perforations, or slits can be included to facilitate tearing or removing tag components or for pulling tabs or removing laminates. Notches can include circles, squares, triangles or rectangles. Suitable pull tabs can be provided in the vicinity of the notches or perforations to facilitate tearing to deactivate or reconfigure a tag. The removable substrate or material can be reapplied to seal the tag or components as desired. For example, a user can tear or pull off a portion of a tag with the tabs, notches or perforations, which can show the visibly altered tag and expose a tag or components. The remaining circuit, sensor or truncated antenna can still allow a tag to transmit data but only in close proximity. This can allow a user to disable a tag to prevent generating, activating or transmitting data such as pulse, code or sensor data, to place a product order or other action when a reader detects a product as absent or any other next step data action. Another embodiment provides a tag including a circuit, sensor or antenna, or combinations thereof, structured to generate or provide data activation technology and disposed inside an open/close container or device and configured to provide next step data actions. An open/close container can be constructed with material to seal the tag, sensor or activation data inside a container so that sensor or tag data cannot be shared or read when the container is closed. To activate the tag generation or activation data, a user opens the container and causes a reader to interrogate the tag, generated or activation data, or combinations thereof, to initiate next step data actions. When a user closes the container, the tag is sealed inside the container and data cannot be read until a user reopens the container.
In another embodiment, a tag and a circuit or memory with data generation or activation technology can be constructed in proximity to an unattached or disconnected tag antenna. An appliance/device or container button, slide or push mechanism can allow a user to connect a tag circuit or memory and data generation or activation technology with or to a tag antenna to allow a reader to interrogate the connected tag and antenna to read the generated or activation technology to automatically initiate a next step data action. When a user releases the mechanism, in one embodiment a spring or mechanism can separate or disconnect the tag circuit, memory, data generation or activation component from an antenna to cause it to be unreadable. In other embodiments, the aforementioned can be constructed so that each component of a tag can be mechanically or otherwise separated from each other or in combination to cause the same mechanical or next step data action results. Another aspect of the present invention provides a tag that incorporates a data generation or activation technology configured to create data connections and relationships with network or data modules or stores. For example, tags; CPG; FMCG; food items; containers; healthcare and well-being monitoring and reporting; users; garments; and appliance/devices can comprise corresponding unique identifiers that can be activated by the aforementioned technologies and can be configured to enable data items in a network, data module or data management platform. See, for example, FIGS. 3 and 4, which depict a tag network incorporating appliance/device or network data modules and stores and next step data actions. The data management platform can control the chain of custody; business rules module; product order distribution; healthcare and well-being providers and services; and manufacturer, retailer and wholesaler food, product, service providers and others noted herein.
A business rules module can identify rules to determine which data module or item to use and the proper sequence. For example, data modules can comprise placing product or service orders; monitoring, reporting or responding to healthcare and well-being generated tag data; product payment services; placing orders for pharmaceutical or medical products or services; and other data disclosed herein. As previously noted, a user can program, select or choose next step data actions and sequences. This platform can provide improved communication and performance throughout the aforementioned food, product, service supply, inventory and order and payment chain by providing improved communication and interaction between a user and differentiated and fragmented products and services by creating an automated and efficient monitoring, reporting and delivery platform. For example, a tag with a generation or activation technology can be attached to the exterior or interior of a CPG/FMCG container for food or medication to monitor and report use, location, and freshness or expiration dates; to reorder a product; or can be incorporated into a garment to monitor a user's heart and respiration rate, blood pressure or breathing activity. The tag or activation technology generates a unique identifier for these products, services or actions which enables appliance/device or network data modules. Activated or generated tag data can automatically enable or activate data modules in an appliance/device or network and next step data actions. A tag can be configured to create data relationships and connections using tag data generation or activation technologies with a unique identifier corresponding to a tag, generated or activation technology. Next step data actions stored in a data management network or module can be enabled by a code, unique identifier or tag generated or activation technology.
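The business rules module's job of selecting which data modules to run, and in what order, can be sketched as a simple rules table. The event names and module sequences below are assumptions for illustration, not part of the specification.

```python
# Sketch of a business rules module that selects which data modules to
# activate, and in what sequence, for a given tag event. Event names and
# module sequences are illustrative assumptions.

BUSINESS_RULES = {
    "sensor_spoilage": ["freshness_report", "product_order", "user_notice"],
    "vital_sign":      ["healthcare_report", "user_notice"],
    "product_absent":  ["product_order"],
}

def plan_modules(event_type):
    """Return the ordered data-module sequence the rules select for an event."""
    return BUSINESS_RULES.get(event_type, ["user_notice"])
```

A user-programmable platform would let the sequences in the table be edited per account, as the paragraph above describes.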
A data management network can also control the chain of custody and provide the necessary business rules software program. The business rules software program can be configured to identify the rules to determine data modules or stores and next step data actions to select or activate. For example, next step data actions can include home, retail, wholesale, manufacture, hospitality, industrial, healthcare, agricultural or food/recipe data, information, queries, searches or actions that can more specifically include: product order placement or fulfilment; product or ambient freshness monitoring, tracking and reporting; ambient air monitoring, tracking and reporting; vital sign, healthcare and well-being monitoring, tracking and reporting; tag data synthetic human speech or text conversion; unique tag signal validation, tag initiated recipe recommendations and ingredient, product, food identification; or marketing data actions as discussed herein; and provide respective next step data action module processing, analysis or conversion as well as other actions and tasks noted herein. Next step data actions can be configured as data modules or stores to provide any of the data, actions, queries or searches required to carry out any appliance/device, garment or network process described herein (“next step data actions”). For example, a home, retailer, wholesaler, manufacturer, hospitality, industrial, healthcare, agricultural or food/recipe transactional database can support next step data actions. A module can be constructed to update the network with data, next step data actions, activation technologies and related serial number and identifiers to ensure secure, authenticated and accurate data and provide search and query functionality.
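The update module's role of keeping actions tied to issued serial identifiers, so that only registered, authenticated identifiers can activate anything, can be sketched as a small registry. The class and method names are invented for illustration.

```python
# Hypothetical registry an update module could maintain: next step data
# actions keyed by issued serial identifier, so only registered serials
# activate an action and unknown serials are rejected.

class ActionRegistry:
    def __init__(self):
        self._by_serial = {}

    def register(self, serial, action):
        # Network update: record an action under its issued serial identifier.
        self._by_serial[serial] = action

    def activate(self, serial):
        # Unregistered serials are rejected to keep data secure and accurate.
        if serial not in self._by_serial:
            return "rejected: unknown serial"
        return self._by_serial[serial]
```

Search and query functionality, as described above, would layer on top of the same keyed store.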
All modules can include deep learning algorithms, AI, neural networks or other disclosed technologies to manage, query and search data; to interact, share data and communicate with other modules; and to provide or initiate data and other next step data actions. A management process or module can update network data modules or databases with data, next step data actions or tag generation or activation technologies and with unique issued serial identifiers. A database manager can also be configured to provide the rules or guidelines to address or edit next step data actions and database or module records, which can include an appliance/device application to access or query a network, data modules, generated or activation technologies, next step data actions or processes, or data conversion or analysis methods, processes or results. An appliance/device or network user can download a tag data application or activate an application using a tag generated or activated application as discussed herein. When an appliance/device scans a tag, generated or activation data or next step data actions, it can engage the application to allow an authorized user to query a network for tag data or to register, authorize, edit or cancel a previous data action with an appliance/device application. These actions can also be accomplished using an established user account with a login and secure password as described herein. A user can also query a specific tag identification number or recent activity or history regarding a user, account, or unique, activated or generated tag signal or data, or can edit or amend user data, terminate the operation of a tag or account, or edit account, user or tag data. Network data access or edits can be configured to provide user or account access confirmations to an appliance/device or secure codes or biometric authorizations to protect network access.
An appliance/device application can be configured to provide tag data, activation technology or data generation results, verification or authentication methods such as voice, text, codes, GPS or camera functionalities. As noted, an appliance/device application can be activated to allow a user to review history, voice, image or text messages or next step data actions or provide network or data module searches by text, voice or image. As shown in FIGS. 3 and 4, a tag with generated or activation technology can be incorporated into an appliance/device, garment or container and can generate a unique identifier. Activated or generated tag technology can communicate a unique identifier to a network to enable network data or next step data action modules to process tag activated or generated data. In another embodiment, as depicted in FIGS. 2-7, networks, systems and methods in accordance with the embodiments herein can include the use of a subscription service, internet or network model, to allow CPG/FMCG products to be locally or remotely monitored, controlled and managed using appliance/device or technologies via a network such as the internet in combination with a subscription service or others described herein. A user can establish parameters and controls via a user or network account for an automatic subscription service to provide product/service order placement; fulfilment or delivery; replenishment; upgrade; substitution; replacement or status of products; food and ambient freshness reporting; or body vital sign monitoring and reporting; among others disclosed herein. A system and method can incorporate a network of tags attached to or in communication with FMCG; food; garments; a user; or an appliance/device and configured to provide real-time reporting regarding tag generated or activated data that can include code activated actions, sensor resistivity or signal output change or pulse sensing modulations, among others.
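The user-established subscription parameters and controls could be represented as a small settings record consulted whenever a tag reports a product as low or absent. The field names and limits below are illustrative assumptions, not specified terms.

```python
# Illustrative subscription parameters a user might set in a user or network
# account for automatic replenishment; all field names and defaults are
# assumptions for the sketch.

from dataclasses import dataclass

@dataclass
class SubscriptionRule:
    product_id: str
    auto_order: bool = False        # place orders without confirmation
    allow_substitute: bool = False  # permit a substitute brand or size
    max_orders_per_month: int = 4

def handle_replenishment_event(rule, orders_this_month):
    """Decide the next step when a tag reports a product as low or absent."""
    if orders_this_month >= rule.max_orders_per_month:
        return "skip: monthly limit reached"
    if rule.auto_order:
        return f"order placed for {rule.product_id}"
    return f"{rule.product_id} added to basket pending confirmation"
```

A subscription service would evaluate such a rule per tag event before invoking order placement, fulfilment or delivery modules.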
A home or workspace can monitor and track tags for generated or activated tag data that can include pulse sensing, sensor or code activated actions. Appliance/device readers monitoring and tracking tags can create an automated system to also track, monitor or report a user's activity, garment or badge use or location or movement of a specific tag as well as track, monitor, report and provide next step data actions for tag generated or activated data such as food or ambient air freshness, product/service order and fulfilment or user vital sign tracking and reporting. Further, a tag can incorporate any tag or user generation, activation or identifier data disclosed herein and can be embedded or placed into or attached to a user or their body such as under their skin, in a garment or appliance/device. In one embodiment, internet or network automated product monitoring or replenishment can include a subscription service to facilitate user products, services or purchases by and among manufacturers, retailers or wholesalers. These networks, systems and methods can also provide data collection, data communication, data analysis or tracking of products or services, or combinations thereof, to improve product order placement, automatic replenishment of goods and delivery by using tags. The subscription service can provide numerous services ranging from product order placement, fulfilment or delivery; food freshness monitoring, tracking and reporting; and monitoring, tracking and reporting user vital signs and providing next step data actions and any others noted herein. By way of example, and as depicted in FIGS. 7-10 and 12, a user may pour a glass of milk and realize the bottle is half empty.
The user can locate a tag located at the half-mark of the bottle, remove a tab cover sealing a tag sensor or activation technology so that a sensor reaction to an ambient stimulus can immediately or concurrently change the resistivity or signal output of the tag, which is read, analyzed and interpreted to automatically place a purchase order and delivery for the milk container or include the product into a virtual basket/list of goods. The user can then place the container into an appliance or pantry area. Additionally, a user can also remove a tab from a tag to allow a reader to interrogate tag activation data such as a code or unique product, tag or other identifier, or combination thereof, configured to place a product order and provide other next step data actions described herein. When a user removes a cover from a tag, a data activation technology or sensor reaction can convert into a synthesized human voice or text via an appliance/device or network tag signal voice/text translation module that can be configured to ask “User, would you like to reorder the same brand and size milk container or would you like to try a substitute?” or “User, your milk has been reordered and should be delivered Tuesday. Would you like an appliance/device confirmation?” If an order is immediately placed, a prior confirmation can be sent to a user via an appliance/device such as a smartphone to accept or decline the purchase, to place the order into a shopping list, search for a substitute or to increase the quantity of the order prior to the order being placed, fulfilled or delivered. In another example, an appliance/device reader can interrogate a tag located on the chest or wrist area of a garment (e.g., pocket or cuff) or bedding located in proximity to body areas to be monitored, tracked and reported to provide user vital sign readings.
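The reorder flow above, where a resistivity or signal output change is read, interpreted and then confirmed by the user before an order is placed, can be sketched as follows. The threshold value and prompt text are illustrative assumptions.

```python
# Sketch of interpreting a tag resistivity/signal-output change as a reorder
# trigger, with a user confirmation step before the order is placed. The
# threshold fraction and prompt wording are illustrative assumptions.

REORDER_THRESHOLD = 0.25  # assumed fractional signal change that signals activation

def interpret_signal(baseline, reading, confirm):
    """confirm(prompt) -> bool stands in for a smartphone accept/decline step."""
    change = abs(reading - baseline) / baseline
    if change < REORDER_THRESHOLD:
        return "no action"
    if confirm("Would you like to reorder the same brand and size container?"):
        return "order placed"
    return "added to shopping list"
```

In a real system `confirm` would be the appliance/device voice or text confirmation described above rather than a callback.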
These tag signals can be converted into synthesized human speech or text to provide a user with heart or respiration rate, blood pressure, breath effort, among others, by transmitting the data to the appropriate appliance/device or network data modules to be processed. In another embodiment an appliance/device, garment, technology or reader in communication with a network can be configured to process or to interpret the resistivity level, signal output change or modulations of a tag or a sensor portion to activate next step data actions. These tag signals or changes can be processed and converted into or can generate a synthesized human voice or text response via an appliance/device, garment, technology or network by using tag signal to voice/speech or text conversion/recognition to represent the actions or next step data actions which can include automatically placing a product/service order, fulfillment or delivery; placing a product/service into a virtual shopping list or opening a product order placement, marketing or other landing page; providing product, food or ambient freshness reports, food or container shelf-life projections, absence or availability updates; or generating a voice/text response or query to a user regarding said actions or next step data actions or combinations thereof, among others. 
For example, as shown in FIG. 11, a generated or activated tag data or signal conversion process can enable a tag on a CPG/FMCG container or food item, garment or tag attached to a user to be converted into synthesized human speech or text by providing a pre-recorded, voice activated or generated synthesized human voice or text notice, message, recommendations or suggestions that can provide notice that a food item is spoiled or has a defined remaining shelf-life, provide notice a product is absent, available or has been ordered and request a verbal, texted or other interface order confirmation or that a user's heart rate or other vital signs are elevated and provide user suggestions, recommendations or an emergency request/response or event specific contacts. Tag generated synthesized human speech or text enables voice recognition and human speech engagement to create dialog and conversation between a generated, activated, detected or changed tag signal, status, event or next step data action with a user and can provide an appliance/device, garment, container, technology or network and data module recommendations, suggestions, orders or contacts regarding generated, activated, detected or changed tag signals, status or events by automatically connecting a user to a product, service or provider via appliance/device or network data modules. For example, in one embodiment, a tag data interface can be configured to provide real-time entry and interactive prompts such as an analog/digital signal to convert tag signals to digital patterns and be decoded or recognized by template-matching or feature analysis. Tag generation or activation data can communicate or transmit to a software program or it can activate a range of appliance/device, garment or network analysis or conversion operations or processes that can integrate with data modules and next step data actions.
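The template-matching recognition mentioned above can be illustrated with a toy decoder: a digitized tag signal pattern is compared against stored templates and the closest match wins. The bit patterns and labels are invented for the sketch and carry no meaning from the specification.

```python
# Toy template-matching decoder: a digitized tag signal pattern is compared
# against stored templates and the label with the fewest mismatched bits is
# recognized. Patterns and labels are invented for illustration.

TEMPLATES = {
    "spoiled":   [1, 1, 0, 0],
    "fresh":     [0, 0, 1, 1],
    "heartbeat": [1, 0, 1, 0],
}

def recognize(pattern):
    """Return the template label with the fewest mismatched bits."""
    def distance(template):
        return sum(a != b for a, b in zip(template, pattern))
    return min(TEMPLATES, key=lambda name: distance(TEMPLATES[name]))
```

Feature analysis, the alternative the paragraph names, would replace the bitwise distance with extracted signal features, but the decode-and-match structure is the same.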
This process allows tag generated, activated or next step data actions in real-time to transfer, communicate and convert electronically transmittable signals or data into synthesized human speech or text to create tag data user interactions with products, services, providers, marketing and other data modules as described herein. For example, a method for tracking freshness or expiration dates of food or containers can include attaching tags to a food or container and placing them into a storage area. Tags can include unique data relating to tag, technology, food or container identifiers. Data can also include food or container identification data, freshness data, expiration date or container open date data. An appliance/device reader can interrogate tags and store the data on an appliance/device or network and retrieve relevant container open/close freshness or food data from a network, and can also convert the data into synthesized human speech or text that can be stored as a message and retrieved by a user when providing a trigger word or speaking to a speaker, or can enable an appliance/device to notify a user at a predetermined time that a food or container is expired or not fresh. Tag generation, activation and structures discussed herein are configured to function with an appliance/device, garment, container, network and a smart speaker embodiment disclosed herein. Smart speakers are designed to perform tasks or services using voice engagement and interaction with a user. A smart speaker can be activated when a trigger word or a conversation or words are detected. Once detected, user verbal queries or commands, in some cases following a trigger word, are captured and sent to a remote or network service for interpretation and user input results. See, for example, U.S. Patent Application Publication No. 2018/0260680, which is incorporated herein by reference in its entirety.
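The freshness-tracking method just described, tags carrying expiration and container-open-date data checked against the current date, can be sketched as a single function. The shelf-life-after-opening default and the notice wording are illustrative assumptions.

```python
# Minimal sketch of the freshness-tracking method: tag records carry food,
# expiration, and container-open-date data; the stored data is checked at a
# predetermined time to produce a speech/text-ready notice. The default
# opened shelf life and message wording are illustrative assumptions.

from datetime import date, timedelta

def freshness_notice(food, expiration, opened=None, days_after_open=7,
                     today=None):
    """Return a speech/text-ready notice for a stored tag record."""
    today = today or date.today()
    if today > expiration:
        return f"{food} is expired"
    if opened and today > opened + timedelta(days=days_after_open):
        return f"{food} is past its opened shelf life"
    return f"{food} is fresh"
```

A speaker module would convert the returned string into synthesized speech or queue it as a message, as the paragraph above describes.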
FIGS. 13-15 depict a smart speaker or functionality integrated into an appliance/device (“speaker”) that can communicate with, connect to or incorporate a tag reader with an appropriate interface to a speaker operating system or remote network in communication with said speaker. In this embodiment, a speaker can be configured to perform tasks or services from received tag signals that can be converted into synthesized human speech or text to provide voice engagement and interaction by, between and among a user, a tag, generated or activated data, a smart speaker and suggested or next step data actions and appliance/device or network data modules, or combinations thereof. A speaker can be activated when it detects or receives certain generated or activated tag data signals which can function as trigger words, if necessary, or next step data actions. Once detected and received, tag signals can be analyzed and processed into synthesized human speech/text or into suggested or next step data actions that can also be converted into verbal queries or commands to engage or interact with and among a user, tag and speaker. A speaker can also be configured to display a tag signal result or next step data action onto an appliance/device display or television or can send a result to a user or an appliance/device via voice message, text, graphically, numerically or image, or combinations thereof. A tag signal, text or audio input can be processed locally or remotely and in one instance can validate a tag generated or activated signal via unique tag, product or code identifiers to allow or authorize data conversion or initiate next step data actions.
A speaker can be configured to ask or provide a voice acknowledgement each time tag data is generated, activated or next step data actions are initiated such as when a tag signal is received, a product order is placed, when a tag is identified as absent or when a product order is activated by a code, tag or product or other unique identifier. A speaker can also be configured with a trigger word, command or sound to cause a connected or speaker in communication with a reader to scan for tags and generated or activated data and confirm receiving said data. A speaker can also function without the use of trigger words with a user engaging a speaker with commands, words or conversation. Provided in FIG. 13 is a speaker configured to communicate with or incorporate an embedded reader module to scan and read tag generated or activated data to: locate, monitor and report goods or items; provide product, food or container freshness levels; container or food shelf-life projections; product absence or availability updates; monitor and report ambient air quality; monitor, track or report body vital signs; generate voice response or user queries regarding tag signals or data generated, activated or next step data actions, among others. Furthermore, a smart speaker can be configured to convert generated, activated or next step data actions into user voice or text responses and queries depending upon the context. A speaker can also be configured to detect or read stored or generated tag data and to interpret output information in a radio frequency regime; for example, frequency, frequency shift, signal intensity, or other detectable or sine based information or data. In certain embodiments, a method can include detecting an output of the radio frequency identification by a reader.
A reader can connect to or communicate with an appliance/device, container, garment, user or network or other computing device with algorithms, software, neural network, AI, data modules or instructions to compute, analyze, process, compare and respond to the provided, queried or generated data locally or remotely via a network, internet, cloud, edge computing or other similar network and network interfaces or combinations thereof. FIG. 14 shows an example of an appliance/device or speaker that can incorporate components of a speaker system and connect to a network or another appliance/device, container, user, garment or technology. A speaker in communication with an appliance/device, tag, container, product, user, garment or network can connect or communicate with user data modules or stores and data stored or provided in a user account, profile or an appliance/device as discussed herein. A speaker can include a tag, a tag reader, sensor, voice listener, parser, intent handler, commitment engine, entity tracker and an output device. A sensor can include a microphone to receive natural language inputs or signals from a user or appliance/device, signals and a reader to receive tag generated or activated data signals and corresponding next step data actions. A reader, signal data conversion module, voice listener, parser and intent handler can work together or in combination to convert natural language inputs or tag signal inputs into synthesized natural language or text and into next step data actions or commitments that can be executed by a speaker, appliance/device or network. A reader can interrogate a tag for generated or activated data signals and store said data locally or remotely. A commitment engine stores commitments in commitment storage. An entity tracker can provide context information for a tag signal or commitment engine and/or other data modules.
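The speaker pipeline named above, reader, signal conversion, intent handling, and a commitment engine with commitment storage, can be reduced to a few small functions. The event names, intent table and output wording are assumptions for the sketch.

```python
# Sketch of the speaker pipeline: reader signal -> signal conversion module ->
# intent handler -> commitment engine -> output. Event and intent names are
# illustrative assumptions.

def convert_signal(tag_signal):
    # Signal data conversion module: raw tag signal to a recognized event.
    return {"event": tag_signal["type"], "tag_id": tag_signal["tag_id"]}

def handle_intent(event):
    # Intent handler: map an event to a commitment the engine can execute.
    intents = {"spoilage": "reorder", "absent": "reorder", "vitals": "report"}
    return intents.get(event["event"], "notify")

class CommitmentEngine:
    def __init__(self):
        self.storage = []  # commitment storage

    def commit(self, intent, tag_id):
        self.storage.append((intent, tag_id))

    def execute_all(self):
        # At a contextually appropriate time, produce speech/text-ready output.
        return [f"{intent} for tag {tag_id}" for intent, tag_id in self.storage]

def speaker_pipeline(tag_signals):
    engine = CommitmentEngine()
    for signal in tag_signals:
        event = convert_signal(signal)
        engine.commit(handle_intent(event), event["tag_id"])
    return engine.execute_all()
```

The voice listener, parser and entity tracker of the full system would sit alongside `convert_signal` and `handle_intent`, feeding the same commitment engine.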
A user or a tag can initiate a commitment, or at a contextually appropriate time a commitment engine can execute a commitment or next step data action and provide output, such as tag-to-audio signals or text, among others. FIGS. 14-15 provide examples of a remote service providing tag generated or activated data conversion into synthesized natural speech, text or natural language processing of a speaker system. In one example, a tag reader or signal converter, voice listener, parser, intent handler, entity tracker and commitment engine are disposed in an appliance/device or network. Tag or sensor generated or activated data from a tag, generated or activated data technology or a user device can communicate to an appliance/device, network or data modules. In another embodiment, data can be provided locally by an appliance/device. In one embodiment, tag generated or activated data from a user, appliance/device, container, food item or garment can be interrogated by a speaker, which can transmit the appropriate data and next step data actions to analyze, process or convert said data. FIG. 15 depicts an embodiment of a computing system that can provide a method or process described herein. An appliance/device or computing system can include a logic processor, volatile memory and a non-volatile storage device. A system can also include a reader, a tag, a display, input and communication system, among others. A logic processor can include an appliance/device or network configured to execute instructions and a volatile memory can include an appliance/device that can include random access memory. Furthermore, a non-volatile storage device can include an appliance/device or network configured to hold instructions executable by a logic processor to implement the methods or processes described herein.
A speaker can incorporate a function or mode to store or segregate unique user or event data, converted synthesized human speech, text, graphical, numeric or tag data or next step data actions and provide discrete user access or contextually appropriate data sharing. In one embodiment, a speaker can connect to, communicate with or incorporate an indicator light located on the front panel of a speaker or on any other visible area. An indicator light can activate when a speaker or network provides, stores or queues unique user data or when a user shares data with multiple users such as a specific or general message, record, user or event data in response to a user query or related data or via generated or activated tag data, results or reporting or next step data actions. For example, a speaker indicator light can activate in a color to notify a user there are pending tag product purchase orders or virtual basket orders that require a user's voice or text confirmation or an order page confirmation prior to authorizing placement; recent or historical vital sign indicators, levels or reports to review; confirmations of online purchases/services and follow-up items such as additional, missing data or information queries or calendar alerts such as meetings or events, among others. Different activated light colors can represent unique data meanings allowing a user to see an activated speaker light and provide an access or trigger word to the speaker to share relevant data. Further, a flashing light can notify a user of an emergency or can flash to confirm receipt of a tag or purchase order, trigger word, command or next step data action. Data can be stored for each user locally or remotely. For example, when a user observes an activated light the user can speak their name as a trigger word, command or access. 
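The indicator light behavior described above, distinct colors for distinct pending-data meanings, with flashing reserved for emergencies or receipt confirmations, could be expressed as a priority mapping. The specific colors and category names are assumptions, since the specification leaves these user-configurable.

```python
# Illustrative mapping from pending-data categories to indicator light states.
# The colors, categories, and priority order are assumptions for the sketch;
# the specification leaves these meanings configurable.

LIGHT_STATES = {
    "emergency":      ("red", "flashing"),
    "pending_order":  ("amber", "solid"),
    "vital_report":   ("blue", "solid"),
    "calendar_alert": ("green", "solid"),
}

def indicator_for(pending):
    """Pick the light state for the highest-priority pending category."""
    priority = ["emergency", "pending_order", "vital_report", "calendar_alert"]
    for category in priority:
        if category in pending:
            return LIGHT_STATES[category]
    return ("off", "off")
```

On seeing the activated light, the user supplies a trigger word and the speaker discloses only that user's stored data, as the paragraph describes.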
The speaker can then authenticate the user's voice and respond by saying “Hello User 1, you do not have any messages or pending purchase orders” or “Hello User 1, you have no actions,” or the speaker may respond by saying “Hello User 1, User 2 left you a message and you have 2 purchases you need to authorize. Would you like to hear the message or authorize your purchases, or do it later?” A user can respond accordingly. Furthermore, a speaker integrating voice recognition software can identify a user and provide the aforementioned materials. As shown in FIG. 16, a speaker can incorporate an interface to connect or communicate with a telephone number as discussed herein, so that an authorized user can call, message or text to connect and communicate with a speaker using an appliance/device and provide voice or text commands to request information, receive stored messages, information or unique user data, or activate or adjust connected or communicated smart home appliances/devices or technologies. Furthermore, a user or tag can provide group or family messages using a speaker as a digital voice bulletin board, or forward messages or data to participating users' appliances/devices. A speaker can be configured to send or forward any tag, camera, phone or voice generated or activated data or message received, converted or stored with a speaker to any and all authorized and participating users via voice-to-text, voice-to-voice or image, or combinations thereof, to an appliance/device messaging service or display, among others, which can include next step data actions. In another example, a speaker can provide discrete user data sharing. For example, a speaker reader can interrogate a tag for generated or activated data, with the speaker configured to only share certain interrogated tag data in the form of voice or text with a user's appliance/device and not with the speaker voice interaction or engagement functionality.
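The indicator-light behavior described above can be illustrated with a minimal sketch. The specific color assignments and category names below are illustrative assumptions for explanation only; they are not part of the disclosure.

```python
# Illustrative sketch of the speaker indicator-light logic described above.
# Color assignments and category names are assumptions, not disclosed values.

LIGHT_COLORS = {
    "pending_purchase": "blue",     # orders awaiting voice/text confirmation
    "vital_sign_report": "green",   # recent body vital sign reports to review
    "message": "yellow",            # messages left by other users
    "emergency": "red",             # emergencies flash rather than stay solid
}

def light_state(pending_items):
    """Return (color, flashing) for the highest-priority pending item."""
    if not pending_items:
        return ("off", False)
    if "emergency" in pending_items:
        return (LIGHT_COLORS["emergency"], True)   # flashing notifies an emergency
    # Otherwise show the first recognized category as a steady light.
    for category in ("pending_purchase", "vital_sign_report", "message"):
        if category in pending_items:
            return (LIGHT_COLORS[category], False)
    return ("off", False)
```

In this sketch, a user seeing a steady blue light would know to speak a trigger word to confirm pending purchase orders, matching the interaction flow described above.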
A speaker healthcare mode can interrogate a user's garment or a tag configured to read body vital signs, and can restrict sharing of the data to a user's appliance/device. In this example, a user can be provided data access with a data restriction so the data is only shared with the user's appliance/device. Furthermore, a user can access or receive data with an appliance/device or smartphone that does not integrate a reader. In another example, a speaker in a healthcare mode, as previously noted, can be configured to provide a user with tag body vital sign data. A reader can be configured to provide generated or activated tag data with a restricted data sharing setting so the data is provided only to a user, caretaker or doctor. Another tag data sharing setting can allow data to be shared with the speaker voice engagement and interaction activation functionality so tag data can be converted into synthesized human speech or text and shared with a user or appliance/device. A speaker or reader can also be configured to share certain data and threshold levels or ranges with authorized users. For example, a user can elect an option on a speaker, appliance/device, network or healthcare account or application to transmit certain body vital sign data, in a certain range or level, to a specific user, and can specify that all other vital sign ranges or levels outside of certain thresholds transmit automatically to select family, caregivers, doctors or emergency services via voice-to-voice, voice-to-text or image/video, among others. For example, body vital sign ranges, levels or thresholds can be designated good, fair, serious or critical, and user notifications can be based on these or similar health status designations. As previously noted, a speaker can activate a flashing light or siren or other alerting noise to notify of an emergency with immediate user action required.
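The threshold-based routing of vital sign data to different recipients, using good/fair/serious/critical designations as described above, can be sketched as follows. The numeric heart-rate ranges and recipient lists are illustrative assumptions, not disclosed clinical values.

```python
# Hedged sketch of threshold-based vital-sign routing using the good/fair/
# serious/critical designations described above. Ranges are assumptions only.

STATUS_RECIPIENTS = {
    "good": ["user"],
    "fair": ["user"],
    "serious": ["user", "caregiver", "doctor"],
    "critical": ["user", "caregiver", "doctor", "emergency_services"],
}

def classify_heart_rate(bpm):
    """Map a resting heart-rate reading to a health-status designation."""
    if 60 <= bpm <= 100:
        return "good"
    if 50 <= bpm < 60 or 100 < bpm <= 110:
        return "fair"
    if 40 <= bpm < 50 or 110 < bpm <= 130:
        return "serious"
    return "critical"

def route_reading(bpm):
    """Return the status and which appliances/devices should be notified."""
    status = classify_heart_rate(bpm)
    return status, STATUS_RECIPIENTS[status]
```

Readings inside the user-selected range reach only the user's appliance/device, while out-of-threshold readings fan out automatically to the caregiver, doctor or emergency services, as the text describes.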
Furthermore, a speaker can also provide notice to a family member, caregiver, doctor or other, via an appliance/device as previously noted, that a user is not properly using or allowing a tag to monitor, track or report body vital signs, such as when no signal is received or reported, or can send user notices when conflicting or anomalous historical or comparative user data is generated or collected that can indicate mismatched or incorrect tag or garment usage. As previously noted, in one embodiment, a tag can be configured to generate or activate data and next step data actions. One embodiment provides a method and system for a multimodal remote speaker healthcare monitoring system or network. The system can include a sensing/NCS tag that can be placed in proximity to a user; attached to a user; or attached, embedded or sewn into a user's garment or bedsheets, among others. A tag can be configured to generate signals to detect body vital sign data from a user that are interpreted with software, algorithms, AI or neural networks. Single or multiple tags can connect to or communicate with a user to generate data when placed in proximity to the chest, arm, wrists, head/eyes, neck, joints, abdomen/stomach or any other body or tissue area intended for monitoring and data generation. A speaker healthcare monitoring system can provide benefits to any user, especially for remote well-being or healthcare monitoring of seniors, users with disabilities, patients and individuals with existing conditions, pre-post-natal or fetal monitoring and sleep tracking, monitoring and reporting. A speaker healthcare monitoring system can include a reader configured to interrogate tag generated or activated data, which can also include automatic product ordering or delivery, food freshness monitoring and reporting, physical location sensing and body vital sign reporting.
Generated or activated tag data can be sent to an appliance/device or network to interpret and analyze the data, which can be displayed on an appliance/device display or sent to other authorized appliances/devices for caregivers, doctors, contacts, users, providers or others described herein. In another embodiment, tag data can be managed, processed or analyzed by a speaker with a software-defined radio (SDR) integrating a microcontroller embedded in the speaker, or via a proxy device. In yet another embodiment, an appliance/device, reader or a reader embedded into a speaker, as previously discussed, can be configured to convert tag signals and activation data, such as executable code or unique identifiers, into synthesized human speech, text, numerics or graphics, or combinations thereof, to identify a user, provide verbal body vital sign data to a user appliance/device or engage a user with text or speech/verbal queries and responses regarding the data. Data processing and analysis can be provided locally or remotely via a network. Generated or activated tag data and a speaker can connect to or communicate with a healthcare server, network or data modules for data storage and user information management, or with an appliance/device, medicine cabinet, pantry or network healthcare data modules and next step data actions. As previously noted, tags can be placed in close proximity or attached to a user; attached, embedded or sewn into a garment or bedding; or disposed, attached or embedded into a band, strap, harness, smart watch band, automobile chest seat belt or any other type of device (“tag connectors”). Further, a position tracking or human activity recognition (“HAR”) device or sensor can also be placed in communication with tag connectors and incorporated into a monitoring and tracking platform in communication with tag generated or activated data and next step actions.
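The conversion of an interrogated tag payload into synthesized human speech, as described above, would typically first render the tag data as a natural-language sentence before handing it to a speech engine. The payload field names below are assumptions for illustration.

```python
# Illustrative sketch: rendering interrogated tag data as a natural-language
# sentence prior to speech synthesis. Field names ("user", "heart_rate",
# "respiration_rate") are assumptions, not a disclosed payload format.

def tag_to_speech_text(tag_data):
    """Render a tag's vital-sign payload as a sentence a TTS engine could speak."""
    parts = []
    if "heart_rate" in tag_data:
        parts.append(f"heart rate is {tag_data['heart_rate']} beats per minute")
    if "respiration_rate" in tag_data:
        parts.append(
            f"respiration rate is {tag_data['respiration_rate']} breaths per minute"
        )
    user = tag_data.get("user", "User")
    if not parts:
        return f"Hello {user}, no vital sign data is available."
    return f"Hello {user}, your " + " and your ".join(parts) + "."
```

The resulting string could then be passed to whatever local or network speech-synthesis service the speaker integrates.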
Continuous user monitoring activities can detect or anticipate unsafe situations such as falls, among others. The present embodiment can also include a vision-based recognition system using a camera in communication with a speaker or tag connector to record video sequences and to recognize user activities by combining images with computer vision algorithms. Another embodiment can include a radio-based recognition system that can implement ZigBee, WiFi or tags to anticipate or detect human mobility issues. Another embodiment can integrate a sensor recognition-based system that can incorporate micro-electromechanical systems (“MEMS”) sensor technologies such as an accelerometer, gyroscope, barometer or magnetometer to detect and anticipate body movements. In one embodiment, MEMS can combine an accelerometer with a gyroscope and be configured to recognize user fall detection or gait analysis, among others. Tags and indoor positioning technologies and sensors, as previously noted, can be combined into a garment or tag connector to track, monitor and report a user's body vital signs, physical position and activity by connecting and communicating with a speaker healthcare data management system or network. Continuing, a tag can generate data and transmit these signals to a speaker as noted herein. An appliance/device or speaker can provide local data processing, compression or storage for certain data processing applications and can transmit data to an appliance/device, healthcare server or network via the Internet or a network and related data modules. As previously discussed, depending upon the data level, range or threshold, the data can be sent to a user, caregiver, doctor, hospital, emergency service, provider or other appliance/device. A speaker can also be configured to provide video monitoring as noted, or a communication service connecting to an appliance/device or television display to facilitate communication and data sharing with a user, caregiver, doctor or provider.
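A common way to combine accelerometer data for fall detection, as the MEMS embodiment above describes, is to look for a free-fall window (acceleration magnitude well below 1 g) followed by an impact spike. The g-force thresholds below are illustrative assumptions; a deployed system would tune them and typically fuse gyroscope data as well.

```python
import math

# Minimal threshold-based fall-detection sketch over MEMS accelerometer
# samples, as described above. Thresholds are illustrative assumptions.

FREE_FALL_G = 0.4   # magnitude well below 1 g suggests free fall
IMPACT_G = 2.5      # a following spike suggests impact with the ground

def magnitude(sample):
    """Acceleration magnitude in g from an (x, y, z) accelerometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples):
    """Return True if a free-fall window is followed by an impact spike."""
    free_fall_seen = False
    for sample in samples:
        g = magnitude(sample)
        if g < FREE_FALL_G:
            free_fall_seen = True
        elif free_fall_seen and g > IMPACT_G:
            return True
    return False
```

On detection, the reading would feed the notification routing described earlier (caregiver, doctor or emergency services, depending on configuration).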
User, tag, garment or tag connector identifiers or information can be provided for each user. A user can connect or communicate with one or more tags or position sensing devices. In one embodiment, user information can be manually entered into a user appliance/device or speaker application, account or other computing device using a unique username or password to identify a user and tag or sensing device. In another embodiment, a tag or combined tag functionality can activate a speaker, appliance/device or other computing device account or application with an executable code or other activation data to automatically open an account, application or registration page and provide basic user data with voice or text. This can include using an appliance/device camera or scanner to scan a tag, garment or tag connector that incorporates a quick response (QR) or bar code, or any other code disclosed herein, or using voice and voice prompts to complete required account information, editing or modifications, in order to associate a tag and its usage with a user and allow tracking, monitoring and reporting of generated tag body vital sign data. A system can also provide any combination of the aforementioned. A remote speaker healthcare monitoring platform can function as a continuously operating and real-time data generating, gathering, analysis and response network, subscription or service. In one embodiment, an application can activate a user login page. A user can also open/close or connect/disconnect an activation tag or device as discussed herein to initiate a user login, or use an appliance/device or smartphone camera or system scanner to input a QR, bar or other code from a user tag, garment, tag connector or position sensing device, which can also activate an application or user program. Speaker or appliance/device applications, programs, menus or modules can be accessed via appliance/device, voice, text or display.
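Associating a scanned tag code with a user account, as described above, can be sketched as parsing the code payload and recording it against the user. The `TAG:<tag_id>:<product>` payload format and the function names are hypothetical assumptions, not a disclosed encoding.

```python
# Hedged sketch of associating a scanned QR/bar-code tag with a user account.
# The "TAG:<tag_id>:<product>" payload format is an assumption for illustration.

def parse_tag_code(payload):
    """Parse an assumed tag activation code into its identifier fields."""
    prefix, tag_id, product = payload.split(":", 2)
    if prefix != "TAG":
        raise ValueError("not a recognized tag activation code")
    return {"tag_id": tag_id, "product": product}

def register_tag(accounts, username, payload):
    """Associate a parsed tag with a user so its data can be tracked and reported."""
    fields = parse_tag_code(payload)
    accounts.setdefault(username, []).append(fields)
    return accounts[username]
```

Once registered, subsequent tag reads can be matched to the user's profile for the tracking, monitoring and reporting functions described above.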
For example, a speaker or appliance/device voice or main menu can include a “Health,” “Product/Food,” “Medication/Emergencies” or “Comments” menu, combinations thereof or others. A Product/Food section can include scanned or read tag available products or food inventory in a refrigerator, pantry or medicine cabinet and provide food or container freshness levels as well as projected shelf-life. Users can access and review placed virtual or pending product shopping lists and anticipated delivery times/dates. A user can access this section to modify, cancel or place product/service purchase orders. Multiple authorized users can access a user profile or page to allow a caregiver or family member to monitor the well-being of a user and also place service or product orders with deliveries. A Medication section can provide current and historical medication usage, availability and comments regarding usage, side effects, etc. As noted in the Product/Food section, medications can be ordered and delivered with monitored usage. An Emergencies section can provide recent or historical medical issues such as incidents, emergency room visits or treatments, among others. A Comments section can provide recent or historical family, caregiver or doctor observations regarding well-being, eating, general habits, discomforts and other health or well-being observations. A “Health” menu can provide numerous sections. A “Health Tracker” module can be configured to generate and gather data for heart rate, blood pressure, respiration rate, breathing activity or effort and pre-post-natal or fetal monitoring; to display data with voice, text, numerics or graphics, with individual rates, levels, thresholds or combined data, comparative data for age, health, weight, height or conditions, vital statistics, medical history, historical user data and other related information; and to provide a data share option to transmit data to other user appliances/devices.
A “Health Data” module can provide a user's records for activity data, heart rate, blood pressure, respiration rate, breathing activity and pre-post-natal data, and also provide self or auto-diagnosis data. A “Speaker Setting” module can allow a user to verify proper and correct tag usage and to set and adjust user and automatic health or contact notifications, such as health, caregiver, doctor, hospital, emergency or other notifications. A “Health Contact” module can allow a user a quick connection to communicate with family, a caregiver, a doctor or others using text, speech or camera. Other modules can be created or added depending upon a user's needs, and user questions regarding generated or gathered data can be queried to a speaker with voice or text via the speaker function. Generated or gathered healthcare data can be remotely accessed by authorized users to receive, save or manage data files via a network for each user, among others. Caregivers, doctors or other authorized users can read or analyze the generated, activated or gathered data to provide diagnostic information or advice to a user, family member or caregiver. Any time account data is edited or changed, a voice or text message and confirmation can be sent to the user. Any disclosure herein, or combinations thereof, can be incorporated into a speaker healthcare platform, a speaker, appliance/device or technology. A healthcare platform as noted can be used via an appliance/device application. Furthermore, this healthcare model can also be implemented as an agricultural data module or store and used to monitor animals such as cattle, pigs, horses or chickens, and to monitor chicken egg laying and animal gestation or pregnancies. In one example, a GPS sensor or system in communication with a speaker healthcare platform can attach to a collar and include a tag and a reader, or combinations thereof, to track, monitor and report animal health.
In another example, tags can be attached to a user and a reader carried by or attached to an appliance/device, drone, dog, llama or other carrier that can monitor a user or animals. Also disclosed is a method and system for the use of multiple brand logos or names and communicative indicia on a tag, garment, healthcare speaker or platform, appliance/device, technology, food item, container or product packaging (“product”), wherein at least one of the brand logos or names (such as a secondary or tertiary brand logo or name) can be used to represent unique, different or distinct products, services or benefits from the primary brand logo or name. Secondary or tertiary brand logos or names can communicate to consumers an enhanced, unexpected or unseen use or benefit for a product, such as product/service order placement, fulfillment or delivery; tracking, monitoring or reporting product freshness and projected shelf-life; tracking, monitoring or reporting body vital signs; processing a purchase order payment; or providing vital sign related recipes with respective next step data actions.
A method and system is provided to manufacture, package, market or sell an appliance/device, garment, healthcare speaker, container or platform, which can be used with the tag embodiments described herein. The method and system can provide consumers with product packaging that includes technologies, communicative indicia, text or tags to effectively and immediately communicate to a user specific primary, secondary or tertiary benefits, such as benefits regarding technologies, usages, interaction and communication, advantages, product availability and complementary usage with an appliance/device or product and consumer goods, while using minimal surface area on said product packaging. The method and system can also provide user product packaging information regarding an appliance/device or consumer goods to effectively and immediately communicate specific primary, secondary or tertiary advantages or benefits of an appliance, container or technology. For example, a combined primary, secondary or tertiary brand logo or name descriptor can include the use of any of the following communicative indicia or text to form a benefits descriptor and association, in any combination hereof or as disclosed herein:
[primary brand logo or name] [combined description of association or inter-relation] [secondary brand logo or name] or [tertiary brand logo or name]
For example, a product or packaging can include a primary brand logo or name, which includes a product with a technology, and a secondary or tertiary brand logo or name or icon for any of the following: a healthcare speaker or network, container, an appliance/device, garment, a smartphone, speaker, beacon, tag, user interface, a retail grocery store, or a food distribution, delivery or service company such as an internet, cloud or product provider, or any others disclosed herein.
A logo and brand name for an appliance/device, garment or technology immediately informs a consumer that there are additional non-obvious or unseen benefits associated with a product, container or technology, and a logo for a food distribution channel, such as a retail grocery store or a cloud service, immediately informs a consumer of same or other similar type product availability and that this information can be located, read, downloaded or accessed in whole, in part or additionally via a tag and accessed with a reader enabled appliance/device or network. In another example, a garment can incorporate tags to generate body vital sign data and can include a brand logo or name for a tag, healthcare speaker or network, a speaker, appliance/device, phone or network service with a reader that can interrogate said tags to provide user body vital sign data. This can also be used for pre-post-natal or baby garments and clothing such as bedding, diapers or children/baby clothes. Brand logos or names, or any combinations of the aforementioned products, services or technologies, can be combined. FIG. 17 depicts an embodiment of a product inside a container with a technology and brand logos and names. A sealable container 222 is provided with a cover 221 disposed thereon. The cover can include a one-way valve 220 to allow air to be evacuated from inside said container when a vacuum is applied to it, creating a vacuum environment therein. An adhesive, film or plastic material 227 can cover the one-way valve to protect the valve and container contents and can be removed to allow a vacuum environment to be created inside said container.
A tag 228, or a combination of tags as described in the embodiments herein, can communicate or connect with each other, as well as with any of the other embodiments described herein, and can be located individually or together inside or outside of said container or an appliance/device interior to monitor, track and report a level of product freshness, change, gas levels or contaminants, to identify a product or container, or to identify the location of a product, a container or an appliance/device, which can include a home or work setting. A primary brand logo or name 223 can represent a product. A secondary brand logo or name 224 can represent a technology, such as a tag or a cloud service that provides a subscription service to detect product freshness levels and shelf-life projections or to order or replace a product. A brand logo or name descriptor 225 can describe or represent brand logos or names. A tertiary brand logo or name 226 can represent an appliance/device that can function with said product, container or technology. Further, a tag can also incorporate an executable code or provide unique tag, product or other identifiers, as discussed herein, to identify or register a tag, usage, software or algorithms to analyze or interpret tag data, among others. A tag attached to or inside a container, or a tag open/close or connect/disconnect device to order a dedicated product/service, can incorporate a brand logo or name on the tag, the tag and container or the tag device to inform a user that the tag contains data specific or unique to said product/service. For example, as shown in FIGS. 7-10, a tag can be disposed inside a closed container with a product. A sensor portion can be configured to include single or multiple sensors to detect gases from a product when sealed inside a container. A tag can also be configured with a sensor to detect a gas not produced or emitted by a product.
This configuration can allow a tag in a sealed container to detect gas levels produced by a product inside the container, so that an appliance/device reader can detect a change in the resistivity level or signal output of the tag; said change can be analyzed and interpreted to notify an appliance/device or user that a product is fresh or spoiled and also to provide a projected shelf-life. A tag positioned to direct signals into a food product or item can also provide generated or gathered data that can be analyzed and interpreted to provide an indication of a change in a product as noted herein. Additionally, in response to a tag or signal change, a product order can be placed into a virtual shopping basket, an order can be placed, or a request can be made to replace a product if spoiled, among others. Further, a sensor can be configured to read a gas not associated with a product or spoilage process. For example, a tag in a sealed container can detect oxygen, or components of oxygen. In a retail/work setting this can indicate that a container has been opened, tampered with or is broken, depending on the reader setting, software or algorithms used; in a home environment an appliance/device can provide notice to a user that a product/service order has been automatically placed or that a container has been opened, among others. In another example, a user opens a sealed container and a tag inside the container detects an ambient gas not generated or stored inside the sealed container or associated with a product or a spoilage process; this ambient gas can be analyzed and interpreted as indicating an open container, and an order can be placed. The change analysis and interpretation can be configured for a home or work environment reader, software or algorithm.
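The interpretation of a tag's resistivity change as a freshness level and projected shelf-life, described above, can be sketched as a simple threshold mapping. The drift thresholds and shelf-life figures below are illustrative assumptions; real values would depend on the sensor chemistry, the product and the calibration software or algorithms the text mentions.

```python
# Illustrative interpretation of a gas-sensing tag's resistivity change as a
# freshness level and projected shelf-life. Thresholds are assumptions only.

def interpret_freshness(baseline_ohms, current_ohms):
    """Map relative resistivity drift to a freshness label and shelf-life in days."""
    drift = abs(current_ohms - baseline_ohms) / baseline_ohms
    if drift < 0.10:
        return ("fresh", 7)      # little spoilage gas detected
    if drift < 0.30:
        return ("use soon", 2)   # moderate gas buildup
    return ("spoiled", 0)        # large drift: notify user, offer a replacement
```

A "spoiled" result could then trigger the next step data actions described above, such as placing a replacement product into a virtual shopping basket pending the user's voice or text confirmation.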
Any detection or order can provide a user or an appliance/device with a notice, an option to purchase or replace, or an option to place a product into an online or virtual shopping basket, and can provide a user or appliance/device voice/text confirmation for each process. Additionally, a product delivery can be requested with each action with a confirmed time/date, or available delivery times/dates can be provided for a user to select, which can include a delivery confirmation. An order can be placed immediately or after a delay period such as 24 hours, and can be limited to one purchase per product per determined time period unless a user indicates otherwise in a profile, application, account, appliance/device, network or related operation. In the following description, as depicted in FIGS. 5, 6, 12, 14 and 15, and for purposes of explanation and not limitation, specific details are set forth for particular networks, communication systems, computers, terminals, appliances, devices, components, techniques, storage devices, data and network protocols, applications, software products and systems, operating systems, development interfaces, hardware, etc.
in order to provide a thorough understanding of the present invention, which can apply, function and operate to and with any of the appliances/devices, containers, garments, platforms, tags, products, technologies and interfaces disclosed herein. These details allow appliance/device or network augmented or virtual reality applications and software, Artificial Intelligence (AI), neural network applications, image and event recognition, voice/text activation and recognition software or hardware, and interfaces to connect, communicate and interact with a user, user interface, appliance/device, garment, and software and hardware programs and interfaces for connected or wireless communication via cloud, internet, satellite and other types of systems and services, and to operate, interact and communicate with any of the appliances, technologies, interfaces, networks or data modules or stores (“data modules”) disclosed herein. As noted, all FIGs. where appropriate can disclose appropriate appliance/device, network or other interfaces. It will be apparent to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. Detailed descriptions of well-known networks, computers, digital devices, storage devices, components, appliances/devices, technologies, techniques, data and network protocols, software products and systems, development interfaces, operating systems, and hardware are omitted so as not to obscure the description of the present invention. The operations described herein can be implemented as executable code stored on a computer or machine readable non-transitory tangible storage medium (e.g., floppy disk, hard disk, ROM, EEPROM, nonvolatile RAM, CD-ROM, etc.)
that is executed by a processor circuit implemented using one or more integrated circuits; the operations described herein can also be implemented as executable logic encoded in one or more non-transitory tangible media for execution (e.g., programmable logic arrays or devices, field programmable gate arrays, programmable array logic, application specific integrated circuits, etc.). FIG. 6 depicts a product management system 100. The system 100 comprises a plurality of user interface devices 120 and a main server 150 interconnected via a communication network 140. Various networks 140 may be implemented in accordance with embodiments of the invention, including a wired or wireless local area network (LAN), a wide area network (WAN), a wireless personal area network (PAN) and other types of networks. When used in a LAN networking environment, computers may be connected to the LAN through a network interface or adapter. When used in a WAN networking environment, computers typically include a modem or other communication mechanism. Modems can be internal or external, and can be connected to the system bus via the user-input interface or other appropriate mechanism. Computers can be connected over the internet, an intranet, an extranet, Ethernet or any other system that provides communications, such as the network 140. Some suitable communications protocols include TCP/IP, UDP, OSI, Ethernet, WAP, IEEE 802.11, Bluetooth, Zigbee, IrDA or any other desired protocol. Furthermore, components of the system may communicate through a combination of wired or wireless paths. The system 100 can be accessed via any user interface device or appliance/device 120 that is capable of connecting to the main server 150. A user interface device or appliance/device 120 comprises a display, preferably a touch screen display, a reader, a video/camera, a microphone for inputting voice/sound, and a speaker.
An exemplary user interface device or appliance/device 120 contains a web browser or similar program, allowing in some embodiments for a secure SSL connection, and is able to display HTML and CSS. This includes user interface devices or appliances/devices 120 such as tablets, iPads, Mac OS computers, Windows computers, e-readers, and mobile user devices such as an iPhone, Android, Samsung or Windows Phone. Preferably, the user interface device or appliance/device 120 is a smart appliance, phone, speaker, display, television or tablet, among others. The user interface devices or appliances/devices 120 can connect to the server 150 via the internet and/or wirelessly, such as through a mobile telephone, cloud or network 140, and/or any other suitable medium. User interface devices or appliances/devices 120 are able to communicate with the main server 150 so that content can be started on one user interface device or appliance/device 120 and later continued on a separate user interface device or appliance/device 120. The user interface device or appliance/device 120 preferably includes an I/O interface that allows a user to interact with the system 100. The I/O interface may include any hardware, software, or combination of hardware and software. The CPU of the user interface device or appliance/device 120 can be implemented as a conventional microprocessor, application specific integrated circuit (ASIC), digital signal processor (DSP), programmable gate array (PGA), or the like. The CPU executes the instructions that are stored in order to process data. The set of instructions may include various instructions that perform a particular task or tasks, such as those shown in the appended flowchart. Such a set of instructions for performing a particular task may be characterized as a program, software program, software, engine, module, component, mechanism, algorithm or tool.
The memory is preferably non-transitory memory and can include random access memory (RAM), read-only memory (ROM), programmable memory, flash memory, hard drives, and the like. The memory can include application programs, an OS, application data, etc. The exemplary computing device 120 can also include a network module connected to an antenna to communicate with the rest of the system or network 100. The main server 150 described herein can include one or more computer systems directly connected to one another and/or connected over the network 140. Each computer system includes a processor, non-transitory memory, user input and user output mechanisms, a network interface, and executable program code (software) comprising computer executable instructions stored in non-transitory tangible memory that executes to control the operation of the main server 150. Preferably, the memory is non-volatile. The processor is in communication with the memory, which can be on the server or remote to the server. Similarly, the processor's functional components can be formed of one or more modules of program code executing on one or more computers. Various commercially available computer systems and operating system software can be used to implement the hardware and software. The components of each server can be co-located or distributed. In addition, all or portions of the same software and/or hardware can be used to implement two or more of the functional servers (or processors) shown. The main server 150 can run any desired operating system, such as Windows, Mac OS X, Solaris or any other server-based operating system. Other embodiments can include different functional components. In addition, the present invention is not limited to a particular environment or main server 150 configuration. Preferably, the main server 150 is a cloud-based computer system. The main server 150 includes a web server and the query processing unit.
The web server receives the user requests and sends them to the query processing unit. The query processing unit processes the request and responds to the user interface device120via the web server. The query processing unit fetches data from the database server if additional information is needed for processing the request. The database is stored in the non-volatile memory. The term “database” includes a single database or a plurality of separate databases. The main server150can comprise the non-volatile memory or the main server150can be in communication with the non-volatile memory storing the database. The database can be stored at different locations. A computing environment can include a server computer or any other device, appliance or system to provide computing functionality which can include a plurality of computing appliances or devices configured in one or a plurality of server or computer banks. In one embodiment, a computing environment can include a plurality of computing appliances or devices including a hosted or grid computing resource or other distributed computing arrangement.
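The request path just described (web server to query processing unit, with a database lookup only when additional information is needed) can be sketched minimally as follows. All class names, method names and sample data here are illustrative assumptions for this sketch, not part of the disclosure:

```python
# Minimal sketch, assuming hypothetical class and method names, of the
# request path described above: the web server receives a request, hands
# it to the query processing unit, and the unit consults the database
# only when additional information is needed.

class Database:
    def __init__(self, records):
        self._records = records  # e.g. {"milk": {"price": 2.49}}

    def fetch(self, key):
        return self._records.get(key)

class QueryProcessingUnit:
    def __init__(self, database):
        self._db = database

    def process(self, request):
        # Requests that need no stored data are answered directly.
        if request.get("type") == "ping":
            return {"status": "ok"}
        # Otherwise fetch the additional information from the database.
        product = self._db.fetch(request.get("product", ""))
        if product is None:
            return {"status": "not_found"}
        return {"status": "ok", "product": product}

class WebServer:
    def __init__(self, qpu):
        self._qpu = qpu

    def handle(self, request):
        # The web server only relays requests and responses.
        return self._qpu.process(request)

server = WebServer(QueryProcessingUnit(Database({"milk": {"price": 2.49}})))
response = server.handle({"product": "milk"})
```

The web server deliberately contains no query logic of its own, mirroring the text's division of labor between the web server and the query processing unit.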
Relevant data modules or stores can include tag, user, home, retail, wholesale, manufacturer, hospitality, industrial, healthcare, agricultural or medicine/food/recipe data, information, identifiers, prices, profiles, historical, trends, health and food data, usages and recommendations for related products, services or providers and in some examples can provide: appliance/device; tag data; tags; tag signal conversion to synthesized human voice/text; device; body vital signs (i.e., heart rate, blood pressure, respiration rate, breath effort, pre-post-natal-fetal) and individual or health demographic data or profiles; biometric (and as noted herein) analysis and application; product, food, recipe and cooking times; expiration date data of open containers; expiration date or shelf-life data after a container, food or medication is open; order-purchase-processing data; gas, volatile organic compound and chemical and product freshness and threshold levels, images and signatures; user or product profiles with comparative, historical and age/condition specific data for the aforementioned can be stored in data modules or a data location accessible to the computing environment and can comprise a plurality of data modules or stores. Stored data or modules can be associated with the operation of the various processes, applications and/or appliance or device functions described herein. Data modules or stores can allow a user, appliance/device, server or network to perform data searches, queries and data management and combine data, analysis or conversion processes, services and next step data actions from data or tag data sources that can include networks, servers, files, sets, packages, among others. Further, the terms “module”, “program” or “engine” can also be used herein to describe part of a computing system integrated to perform a particular task or function.
Data modules or stores can include any data or next step data actions disclosed herein that can connect, communicate or process individually, in sequence or combination herein. Software program modules and data stored in the non-volatile memory of the main server150can be arranged in logical collections of related information on a plurality of computer systems with associated non-volatile memories. The software and data can be stored using any data structures known in the art including files, arrays, linked lists, relational database tables and the like. The server150and user interface devices, appliances or devices120are programmed to perform the methods and processes described herein. For example, an appliance/device such as an Apple, Nokia or Samsung can include or communicate with a tag reader and use an application to access tag data, conversion data, product ordering, healthcare or marketing information and data/information from a tag to manage said content either alphabetically, by type, location, module, store or section or product or body vital sign type and a user can add personal notes and other information to said retrieved materials such as comparative information and material, store name and location of said product, time, future sales dates, discounts, product specifications, recipes and ingredients and to read, for example, thin film tags in communication with a tag to view recorded gas or temperature and other sensor information, market and transport time and product processing, storage and transportation history of a product or container, product ordering, health data, purchasing, payment processing and delivery information, etc., said data which can also be stored or located on a data module herein.
Said information can be accessed directly from a tag or via a product, container, garment or appliance/device application that can wirelessly access said information via a network, cloud or internet communication or connectivity or as described herein and can include tracking, monitoring and reporting product or ambient conditions/freshness and in containers or appliance compartments or body vital signs and medications and can enable product ordering, purchasing, payment processing, product delivery and any others described herein. A network or appliance/device can connect or communicate directly or indirectly with a local, remote or network computing appliance/device, server or product inventory/purchasing management system or other means to a third party product/service provider such as online or physical product manufacturer, retailer or wholesaler, fulfilment or product delivery service or payment/processing service, pharmacy or healthcare or well-being service or provider, credit/debit card provider/service or product/service payment, financial or bank provider to allow and facilitate product ordering, purchasing, payment processing or product delivery, among others. For example, any of the appliances/devices, local, remote appliance/device or network computing devices, servers or product/inventory management systems can contain a purchaser identifier (name, address, biometrics as disclosed herein, financial/banking or account information, contact information and payment methods as disclosed herein) which can represent or identify a purchaser, user, individual, business, corporation, etc., and can be added to an order to identify or complete a replacement or purchase order, payment processing, business or home delivery request or order, and any others noted herein. 
Further, an appliance/device, network or data modules can also store or manage backup data that can include one or more of: user data; marketing material; product information, product ordering, purchasing and payment processing data, technology and information; container and appliance applications; recipes and cooking instructions; chemical, gas or volatile organic compound signatures or profiles and threshold levels and any other information described herein including user contact data (user address, bank, credit/debit card or third party payment information), appliance specific data and can include network or appliance biometric access or authorizations using facial, eye/iris, fingerprint, palm or voice recognition connected to or in communication with an appliance and operating systems for product ordering, purchasing and payment processing or delivery and data and network configuration data, authorization and access. In certain situations, appliance/device or network backup data can be encrypted and a memory structure can comprise a removable or non-removable secure or non-secure element. As provided herein, a tag and a camera can each include a network interface. For example, a tag and camera can connect and communicate with each other and be in data communication with a computing environment through a network interface. This allows a tag and a camera to directly communicate with various data modules and applications across a network, cloud and others. Another embodiment includes an appliance/device with a tag reader and a camera connected to or in communication with cloud computing or internet network applications to read tags and products into an appliance/device inventory system and record container open date status and product expiration dates which can also be network connected to food recipe platforms, distribution or service provider networks as described herein.
A user with an appliance/device or secure code access, website or network in communication with an appliance/device, pantry or operating and inventory management system, as described herein, can be configured to access, view, review and monitor appliance/device food, container, medication status processes or inventoried products and to place product orders via a connected distribution and service provider network. Containers with technologies can be configured to operate and control product ordering, purchasing, payment processing, delivery and appliance temperature, humidity, venting and other operating settings. For example, as shown inFIGS.7-10and20, an appliance/device inventory management system can comprise a digital list of products and tags for use with an appliance/device program or application (“app”). Devices can perform numerous inventory management functions; see for example, Patent Application Nos. 20140009291, 20160162715, WO2016109563 and WO2016109533, which are incorporated herein by reference in their entirety. A user can create a digital list of basic, shopping, retail, work or home products (“list”) and provide the quantity or number of products to be maintained in an appliance, pantry or home or work area, such as a living, bed or bathroom, supply or file cabinet (“modules” or “room”). This data list can be entered and stored into an appliance/device or network by a user, tag reader, a camera, and voice, via a network connection or lists can be preloaded to an appliance/device or via another appliance/device or a combination as disclosed herein, with said products identified as basic, shopping, home, work or room products. The appliance/device program or app can be configured to function in several modes. An app mode can read all tags within reading range in an area or room and display all tag products on a handheld device or appliance display. 
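The digital list described above, products with a maintained quantity grouped by appliance, pantry, room or module, can be represented minimally as follows. The structure, room names and quantities are hypothetical illustrations, not taken from the disclosure:

```python
# Hypothetical representation of the "list": desired quantities of
# products, grouped by room or module. All names are illustrative.

inventory_list = {
    "pantry":   {"rice": 2, "olive oil": 1},
    "bathroom": {"toothpaste": 1, "toilet paper": 4},
}

def products_for_room(inventory, room):
    """Return the stored product list for one room or module."""
    return inventory.get(room, {})

def add_product(inventory, room, product, quantity):
    """Enter a product and its maintained quantity into a room's list."""
    inventory.setdefault(room, {})[product] = quantity
```

Entries could equally be created by a tag reader, camera, voice input or a preloaded list, as the text describes; only the resulting room-to-products mapping matters for the app modes that follow.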
The app can be configured to display only the tag products that are not located or identified or that are absent within a reading range when compared to a stored data list in an appliance/device or last virtual basket/shopping list. The app can operate in a home or work room mode; for example, a user can select a supply cabinet or bathroom mode and the app can be configured to read and display only products that are in a bathroom such as products intended for a bathroom based on a stored or provided marketing list or products that are missing or absent from a bathroom when compared to a stored list or products that do not belong in a bathroom or can suggest products based on the room and from the stored lists and network or databases for a tag reader, tag or camera image database. For example, an app mode can allow a user to read, list and display products that should ‘not be present’ in a room or space. For example, a user can select this mode for a living room and the app can scan/read the area to only identify tag products that do not belong in the room such as identifying a tag tube of toothpaste and thereby informing a user that this product should be located in another room such as a bathroom. In this manner a user can quickly walk through a home or work environment and quickly identify products that are present or available, identify products that are absent or missing, identify products that do not belong in a room or space, section or area and identify products that need to be ordered and to also receive product suggestions for each room, space, section or area with each reading or room mode in communication with respective network product/marketing and product list modules. For example, an app room mode can be configured into other apps or apps can be configured to operate individually or in any sequence with the capability to store search results or send them to other appliances/devices or networks for next step data actions. 
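The room-mode comparison above, reporting products that are present, absent, or do not belong in a room, can be sketched as a set comparison against the stored lists. Function names, room names and product lists here are assumptions for illustration only:

```python
# Sketch of the room-mode comparison: given the tags read in one room
# and the stored room lists, classify products as present, absent (on
# the room list but not read), or misplaced (read here but listed for
# another room). All names are illustrative assumptions.

def classify_room(read_tags, room, stored_lists):
    room_list = set(stored_lists.get(room, []))
    elsewhere = {p for r, ps in stored_lists.items() if r != room for p in ps}
    read = set(read_tags)
    return {
        "present": sorted(read & room_list),
        "absent": sorted(room_list - read),
        "misplaced": sorted((read - room_list) & elsewhere),
    }

stored = {"bathroom": ["toothpaste", "soap"], "kitchen": ["olive oil"]}
result = classify_room(["toothpaste", "olive oil"], "bathroom", stored)
# olive oil is listed for the kitchen, so here it is reported as misplaced
```

Walking from room to room then amounts to repeating this comparison with each room's stored list, matching the text's description of quickly identifying present, missing and misplaced products.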
For example, when an appliance, pantry or room tag reader or camera (tags, readers with interfaces can connect to or communicate with a camera and image or general database) cannot identify or read a basic, shopping or room list product in an appliance, supply cabinet, pantry or home/work area that product data can be sent to an appliance/device or network with a notice or message regarding the product status or said data can be sent directly to a network to automatically place a product into an order basket or to place an order, purchase, payment processing or delivery request for a product. The app ordering function can also place an order after a predetermined time period, such as after a 24 hour period. An app can also connect or communicate with a camera or image database. For example, if a reader and computing device cannot or does not identify a tag product in an appliance, pantry or home/work area or environment, and prior to placing an order based on the product not being present, an app can compare the inventory management data module/store/base with camera images, voice or tag products, databases or modules as well as pending shopping orders or recent purchase orders to verify the same or similar product is not available and can also compare and review virtual shopping baskets. When a tag reader detects an absent or missing product in an appliance, shopping or room list, home/work environment a tag database can query the camera image, voice or text product database to confirm or verify the presence or absence of a product from one or more rooms prior to sending a notification or placing an order and the camera, voice or text ordering functions can operate similarly with the respective tag reader and product databases. 
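The verification step above, cross-checking other data sources before ordering a product the tag reader could not find, might look like the following in outline. The parameter names stand in for the camera image database, pending orders and virtual basket the text describes, and are hypothetical placeholders:

```python
# Outline of the pre-order verification: before ordering a product the
# tag reader could not find, the app checks the camera image database,
# pending purchase orders and the virtual shopping basket. Parameter
# names are hypothetical placeholders for those data stores.

def should_order(product, camera_db, pending_orders, basket):
    if product in camera_db:
        return False  # the camera has seen it elsewhere in the home
    if product in pending_orders:
        return False  # already ordered recently
    if product in basket:
        return False  # already queued in the virtual basket
    return True       # no source accounts for it, so place the order
```

The same check runs symmetrically in the other direction, as the text notes: a camera- or voice-triggered order can first query the tag reader's product database before a notification or order is sent.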
In this manner if a speaker or beacon detects that a box of tissues is absent or missing from a room the app can query the room and databases to locate a product if it has been moved to another room or location, can detect a similar product placed or stored inside a pantry, can identify a pending order or virtual shopping basket with the product or similar product and provide a user a notice or place an order. An app can also provide a mode to only identify and display a digital list of tags or products that report body vital signs or others described herein, provide freshness levels or expiration dates within determined times or dates, such as days, weeks or years. The app can also be configured to only identify certain types or groups of products such as fresh foods including meat, fish, dairy or fruit and vegetables or container products. An appliance app can be configured to create, read and display a list of products in an appliance, pantry or home/work environment in alphabetical order, food or product group, gas freshness level and expiration date, expiration date order, projected shelf-life and to search for specific recipe products or a product group, type or specific product can be inputted into an app to quickly search for the product. An app can provide a list of products that are present, absent, in the incorrect room or location or can suggest or recommend products based on products identified and stored in appliance/device, room, camera, voice, text, tag and product marketing databases, searches or user data history. For example, a user can walk into their living room, bathroom, pantry or open their refrigerator with a smart control, as explained herein, and a tag reader or camera can immediately inform a user that milk, a box of tissues, toilet paper, toothpaste or baking soda is absent or missing and to place an order or to replace said absent product with one from another area in the house. 
The app can also provide the user with information such as how long a product has been present in a room or area until the product is detected as absent. The appliance program or app can then either place said product into an online or virtual order basket or place an order via a network, open a product ordering page, display a product image on a display with a purchase option or upon reaching a predetermined dollar amount in a virtual basket can send the order, or send a product request to order, purchase, payment process or deliver for said products or via a third party product/service provider. The aforementioned functions can also be executed by using the camera functionality with product image recognition, database or network. Delivery request information can be sent by the order recipient, delivery service or connected network to a user with a schedule of available times to select from and to confirm or a specific delivery time confirmation can be sent. A notice and confirmation for each or any step of the inventory management process can be provided to a user. The aforementioned and all disclosure herein can apply to appliance/device product ordering via voice or camera by capturing voice and images and transmitting them to local, remote or network computing devices, servers or product inventory/ordering management systems which can query, connect and communicate with tags, voice or camera image data storage bases/stores. Furthermore, the system allows a user, appliance/device or network to bundle voice, tag or camera purchase orders together into a virtual shopping basket, a purchase or delivery order, payment processing or any other processes described or combined herein. 
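The predetermined-dollar-amount rule above, sending the order once the virtual basket reaches a set total, can be sketched as follows; the threshold and prices are arbitrary example values, not from the disclosure:

```python
# Sketch of the threshold rule: products accumulate in a virtual basket,
# and the order is sent once a predetermined dollar amount is reached.
# The 25.00 threshold and the prices are arbitrary example values.

def add_to_basket(basket, product, price, threshold=25.00):
    basket[product] = price
    total = sum(basket.values())
    if total >= threshold:
        return ("order_sent", total)  # order placed via the network
    return ("pending", total)

basket = {}
status, total = add_to_basket(basket, "milk", 3.00)        # still pending
status, total = add_to_basket(basket, "detergent", 24.00)  # threshold met
```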
FIGS.18-19provide an aspect of the invention that can include an appliance/device (“appliance”, “host appliance”, “smart control”) with an appliance/device that can be tag reader enabled, connect and communicate with an appliance control or operating system and be controlled wirelessly via an appliance/device, smartphone, voice, text, or AI, among others. In one embodiment, an appliance/device can comprise any of the following: a housing1in which at least one sealable modular compartment6can be disposed with at least one or more tags, food or container items disposed therein. As shown inFIG.18, an appliance/device control system2can include a display panel, microprocessor (CPU), memory device, network interface and software and hardware for a network enabled appliance/device including a wireless device connected to a microcontroller to communicate with cloud, internet or satellite networks, a smart control, other appliances/devices, containers, tags and to control and operate each system. Furthermore, a smart control or appliance/device and an appliance/device can be constructed to incorporate input/output ports, an interface or a microcontroller to allow a user to connect a smart control and an appliance/device together to communicate to allow software, an interface and micro-controllers to connect to communicate with hardware devices and with respective control and operating systems. As used herein an input/output port can include any system, including cable or wireless, to couple an appliance/device, a smart control or another appliance/device together to allow the creation of a network of appliances/devices to communicate and share information, data, electricity, wireless connectivity via cloud, internet, satellite, Bluetooth and control by a host appliance, smart control or appliance/device in the network. The coupling device can include an input/output port, slidable connectors, click or snap connectors, etc.
The appliance network can also operate with an appliance/device to connect to a network to function as a host appliance or as an appliance/device or a recipient of network data, connectivity, among others. An appliance/device and a smart control, and respective control and operating systems, can connect and communicate with each other using compatible control and operating systems, programs and software. An appliance can include a housing1,18, a temperature system40to create a range of temperature inside a housing1,18, a compartment6and a container30disposed therein that can connect to a control system2,20and a display38and to individual or selected system components. A container can include a one-way valve36or tag/sensors50. A vacuum pump, fan or means89in communication with a compartment and operating system can also be included. A humidity system35can create a range of humidity inside a compartment and a container disposed therein and can connect to a control system and to individual or selected system components. A compartment can include a valve90in communication with an operating system and an appliance aperture104, a seal91on the open and close mechanism93that can include a drawer4with a handle5or other closure device for the compartment aperture as disclosed herein. All appliance systems can be controlled by the operating system or an appliance/device. 
A smart control508can function as a countertop, refrigerator or handheld or appliance device to provide cooking instructions; monitor and control an appliance cooking network; function as a household or room security platform or device by connecting to a camera doorbell system or other home security camera systems to monitor or report with its security camera system; a camera with a motion detector, GPS, to detect and report levels of gas or ambient contaminants; check product inventory and place product orders, purchases and home delivery and monitor, track and report body vital signs using an appliance/device or reader/speaker509and include the control and operating systems of a smartphone or host appliance such as wireless enablement and communication, CPU, memory, control, tags and reader, camera or video, display panel600, input/output and programmable capabilities601, AI, augmented and virtual reality food, preparation, recipe or cooking capabilities with software and hardware to connect and communicate wirelessly, via electrical socket or input/output ports, interface or microcontroller with cable503to each appliance/device501and operating system connected to the smart control. A smart control can be placed into a base to operate or recharge the unit. A smart control can create a connection with the base via connectors situated at the bottom or base of a smart control to recharge or to connect the smart control operating system to the base and an appliance connected to the base to control and manage an appliance. Furthermore, a smart control can be placed into a refrigerator. A smart control base can be constructed or placed into a refrigerator and connected using an electrical cable to an interior appliance socket or an electrical source outside of the refrigerator such as a wall unit. 
In this manner, a smart control can be placed inside a refrigerator to track and monitor products, order products, etc., read body vital signs when a user is in proximity to an appliance to communicate with a network and data modules to suggest or recommend food or drink items or recipes as well as exercise or rest using tag or camera data in communication with data modules. A smart control can also operate with batteries. A smart control can be placed inside an appliance, pantry, medicine cabinet or house area to monitor products and activity and to order products or can be removed from an appliance such as a refrigerator to read and order products from areas such as the pantry or other house areas. The smart control can also be placed onto a kitchen counter in a base to control the cooking process of a network of appliances. The smart control can connect and communicate with online cooking sites or recipes via a network and data modules to provide step by step cooking preparation and instructions to prepare a meal and to order products to make specific recipes or suggest recipes based upon existing product inventory, tag freshness or projected shelf-life data and other tag data such as health status and related user accounts and data. A smart control or appliance/device can incorporate single or multiple cameras that can be AI connected in an appliance/device compartment connected and in communication with an appliance control and operating system to identify users, individuals, containers, products, transactions, tags and events using appliance/device, internet and satellite or cloud software, interfaces and processes as noted herein. For example, a camera on an appliance door or inside an appliance compartment can identify a user, container or a product placed inside a compartment to adjust the required compartment temperature for a product or set a specific or mean temperature for more than one product. 
If the camera does not view a product in a compartment the temperature can be lowered to a predetermined temperature level until a product or container is identified. A camera can identify food items inside an appliance compartment and set the most effective temperature, humidity, venting and pressure settings for one or more food items that can include the same or different items such as fruit and vegetables, meat, dairy, bakery, fish or a range of respiring or non-respiring food items. A camera can capture a container or food item being placed inside an appliance and recommend to a user the most effective storage placement inside the refrigerator by suggesting via voice, light or location the most effective food drawer, shelf, appliance function or location to place said product. A camera can identify a user and read their body vital signs and recommend food items inside an appliance, food inventory or suggested food items that fit a described fitness or lifestyle based on health, weight or medical or health needs and requirements as noted herein. As used herein, a container can be any receptacle such as a sealable or resealable receptacle or closure for use with a container that can also hold a product and constructed to hold environments such as modified atmosphere, gas, vacuum, pressure or vented environments with said containers not being destroyed by said applications. Containers are designed for use with tags configured to generate or provide data activation technologies to function with an appliance/device, garment, technologies or network. Containers can combine any tag functions or applications as described herein, to function individually, in combination or in sequence with any tag, data generation or activation configurations or combinations described herein with appliances/devices, garments, technologies or networks and related data modules in connection or communication with users, products, services, providers or markets (“container”). 
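The camera-driven temperature logic above, a mean setpoint for mixed identified items and a predetermined idle level when no product is seen, can be illustrated as below. The temperature values (degrees Celsius) are invented examples, not from the disclosure:

```python
# Illustration of the camera-driven setpoint logic: each identified item
# has a preferred storage temperature; mixed items get a mean setpoint,
# and an empty compartment drops to a predetermined idle level. The
# temperatures (degrees C) are invented example values.

PREFERRED_TEMP_C = {"meat": 1.0, "dairy": 3.0, "vegetables": 5.0}
IDLE_TEMP_C = 2.0

def compartment_setpoint(identified_items):
    temps = [PREFERRED_TEMP_C[i] for i in identified_items
             if i in PREFERRED_TEMP_C]
    if not temps:
        return IDLE_TEMP_C             # no product seen by the camera
    return sum(temps) / len(temps)     # mean setting for mixed items
```

A fuller version could weight items differently or control humidity and venting the same way, per the surrounding description.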
As used herein, a container technology can include any technology that communicates, interacts, tracks, monitors, reports, benefits, orders, purchases, processes a payment or service or delivery or enhances the use, access, understanding, storage or appearance of a product such as a user interface, consumer or electronic device, good or appliance which can be placed, attached, affixed to or connect to or communicate with a product, container, garment or appliance such as, but not limited to, wireless technologies that can include Li-Fi, RFID, NFC and hybrid tags and an IC card (active, passive, hybrid or battery assisted passive tags), a machine-readable code such as a universal product code or QR code that can comprise an array of black and white squares and can store data such as URLs or other information to be read by a camera on an appliance or other interface, bar codes that can be read by RF or bar code devices, thin film labels and applications with sensors and sensors in communication with wireless technologies such as Li-Fi interfaces and RFID, NFC or hybrid tags that can generate or activate data or next step data actions (“tags”), semiconductors, circuits, chip resistors, thin film chip resistors/transistors, memory and networks, electronic temperature and other sensing labels including real-time sensing capabilities, electronic sensor circuits, plastic semiconductors, chemical sensors such as potentiometric sensors, chemical field-effect transistor sensors, chemiresistors and chemoreceptors. Technologies can further include containers or lids with sensors to monitor temperature, vacuum, humidity, time, container density, acidity levels and gases, chemicals and volatile organic compounds, such as but not limited to, aldehyde, acetic acids, ethylene, sulphur compounds, alcohol, CO2, NH3, H2, H2S, O2, N2 and SO2.
Tags or sensors can connect to or communicate via wireless or wired methods to transfer data between two or more appliances/devices in communication with network interfaces and networks comprising Li-Fi, Wi-Fi, Bluetooth, internet, satellite and cloud computing technologies and all technologies described herein can be combined into complementary combinations or applications. Technologies can also include artificial intelligence, augmented, neural or virtual reality applications, software or hardware interfaces for use with appliances/devices. Firmware can also be incorporated into any relevant embodiment or appliance/device or network application to control the operation of an appliance/device on which it is hosted. All of the technologies and networks disclosed herein “technology” or “technologies” can be combined or function in any complementary order, combination or function with any other technology, network, appliance/device or process disclosed herein. As used herein, product packaging, information and marketing materials as used herein can include any communicative indicia such as icons, abbreviated text, symbols, voice, graphs or graphical representations, shapes, colors, forms, or text, that can be digitally, physically, in combination, or by any other means, configured to communicate or connect to, incorporate, read, be written on, attach or associate with a product, container or appliance/device or other product, container or appliance/device and can include URLs, wired or wireless capabilities, cloud, satellite, web or internet data, containers, technologies, consumer goods, electronics, devices, user interfaces or appliances, including user account, applications, personal data, email and web site addresses, telephone numbers or any other digital, social media or personal address or banking or financial information as described herein. 
Product information can include technical and specification data, financial, legal and operating information, data and documents such as warranty, technical and operating manuals and technologies, products and services. Product marketing materials can include tag generated or activated data, next step data actions and voice or text data and materials, readers, URLs, price, product, place, promotions, marketing collateral, coupons, promotional materials, recipes, menus, movies, music, sales, visual and auditory materials, discounts, brochures and other printed or digital product information, visual aids used in sales presentations, user health and metrics, web content, product data sheets and white papers and any other materials disclosed herein (all of the aforementioned “product packaging” or “packaging” or “marketing”). As used herein an appliance can include consumer or electronic devices, user interfaces, electronic goods or manufacturer, retail, wholesale, home or professional goods or appliances including any user interface such as mobile, smartphones, readers, smart speakers, beacons, tablets, lap tops, computers, glasses, watches, wired or wireless wearables, devices and clothing/garments, devices, rings, jewelry and wristbands, printers, cameras, micro-processors and microphones that can include use with an appliance, device, garment, consumer good, user interface or local or remote computing, cloud, server, network, appliance or device operating system and any other similar electronic device or appliance operating systems including functions and modes that include tag monitoring, tracking and reporting of generated or activated data or next step data actions, product ordering, purchasing, payment processing or delivery. 
Appliances can further include retail display or merchandising cabinets and refrigerated units, microwave oven, oven, induction cook top or UV light systems (any of these technologies can be incorporated as a single mode or combination mode in an appliance or interface and can be controlled by an interface or appliance controller, another or external appliance or interface, a product, container or smartphone, smart speaker, beacon, mobile or remote application to function with the other or external appliance functions and modes disclosed herein), stoves, refrigerators, freezers, washer/dryer, vacuum systems, toaster, rice maker, steamer, pasta cooker, crock pot, modular cooking units, portable or handheld devices which can be tag reader enabled, RFID light readers, and can combine, connect and communicate with other technologies, appliances or devices in any complementary or compatible combination and in any combinations of the aforementioned appliances, devices, technologies or tags as described herein (“appliance/device” or host appliance or “smart control”). While particular embodiments of the present invention have been illustrated and described, it would be obvious to those skilled in the art that various other changes and modifications can be made without departing from the spirit and scope of the invention. It is therefore intended to cover in the appended claims all such changes and modifications that are within the scope of this invention.
11861449

Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. DETAILED DESCRIPTION Generally speaking, pursuant to these various embodiments, a cradle capable of retaining and/or charging a handheld scanning device or devices is described herein. The cradle may have interchangeable cup assemblies that are capable of receiving different configurations of scanning devices, and may be quickly replaceable by a user without the use of complex tools. Advantageously, a user needn't unplug power and/or data transmission cables when replacing the cup assemblies, thereby reducing downtime and complexity. Turning to FIGS. 1-14, a first example cradle 100 for a handheld scanning device 101 is provided that includes a base assembly 120 having a coupling mechanism 124, a top housing assembly 130, a circuit board assembly 150, and a first cup assembly 160. The base assembly 120 includes a lower support side 120a, an upper side 120b, and a sidewall 120c extending therebetween. The base 120 further defines a cavity 121 which may receive any number of components such as, for example, circuit board assemblies and the like. The base 120 may rest on any number of support members positioned on the lower support side 120a.
The base assembly 120 may accommodate any number of components such as, for example, circuit boards, mechanical, and/or other electromechanical components. For example, any number of power and/or data transmission ports 122 may be at least partially disposed within the cavity 121 and may be positioned adjacent to the sidewall 120c to receive a power and/or data transmission cable (e.g., a USB-C cable). Other examples are possible. As illustrated in FIGS. 2, 4, and 6, the coupling mechanism 124 is in the form of a plate that is positionable over the upper side 120b of the base assembly 120. In the illustrated example, the coupling mechanism 124 includes a number of base mounting openings 124a to receive fasteners or other securement mechanisms to secure the coupling mechanism 124 with the base assembly 120. However, other examples such as latches, protrusions, and/or a friction-fit coupling are possible. The coupling mechanism 124 includes a component opening 126 and a top housing mounting region 128. The component opening 126 is dimensioned to receive any number of electromechanical components such as, for example, the circuit board assembly 150. The top housing mounting region 128 is in the form of a raised ledge or flange that includes openings 128a to receive a fastener or fasteners. The top housing assembly 130 is in the form of a plate adapted to be positionable over the upper side 120b of the base assembly 120 and the coupling mechanism 124. The top housing assembly 130 includes a lower side 130a, an upper side 130b, a circuit board recess 132, a housing opening 134, a coupling mechanism mounting region 136 (FIG. 8), and a cup retention mechanism 137. The circuit board recess 132 extends upwardly from the upper side 130b of the housing assembly 130 and is dimensioned to receive at least a portion of the circuit board assembly 150. Further, the opening 134 is disposed through an upper surface of the circuit board recess 132.
The top housing assembly 130 further includes any number of alignment and/or positioning members such as, for example, alignment members 138 and front alignment members 139. The circuit board assembly 150 may include any number of components or subcomponents to perform electrical and/or electromechanical functions. For example, the circuit board assembly 150 may include a board communication interface 152 and any number of power interfaces 154 and/or data connection interfaces 156. In some examples, the interfaces may be capable of transmitting both power and data, and as such, a single interface may be used. These interfaces may receive power and/or data connection cables 151 (FIGS. 4 & 8) which are then coupled with the power and/or data transmission ports 122. The board communication interface 152 is generally positioned within and/or through the housing opening 134. Further, the board communication interface 152 may include any number of interconnects 155 such as leaf spring connectors, pogo pin connectors, and the like to form a communicative and/or electrical coupling, thereby allowing data and/or power transmission. In other examples, inductive charging mechanisms for electrical transmission may be used. As illustrated in FIGS. 4, 7, and 8, the circuit board 150 is at least partially disposed within the circuit board recess 132 of the top housing assembly 130, and may be operably coupled and/or secured therewith via any number of suitable approaches such as, for example, fasteners, adhesives, tabs, protrusions, and/or friction-fit couplings. Upon or before operably coupling the circuit board assembly 150 with the top housing assembly 130, a first end of the power and/or data connection cable(s) 151 may be coupled with the power interface 154 and/or the data interface 156 of the circuit board assembly 150 and the opposing end of the power and/or data connection cables 151 may be coupled with the internal connection of the power and/or data transmission port or ports 122.
With reference to FIG. 9, the top housing assembly 130 is operably coupled with the base assembly 120 by positioning the lower side 130a of the top housing assembly 130 adjacent to the upper side 120b of the base assembly 120. More specifically, the lower side 130a of the top housing assembly 130 is aligned with the coupling mechanism 124 of the base assembly 120 such that the coupling mechanism mounting region 136 of the top housing assembly 130 is aligned with the top housing mounting region 128 of the coupling mechanism 124. In the illustrated example, a fastener such as a screw or a bolt may be inserted into the openings 128a of the top housing mounting region 128, but other approaches for securing and/or operably coupling the top housing assembly 130 with the base assembly 120 are possible. The first cup assembly 160 includes a body having a lower side 160a, an upper side 160b, and any number of docks 162. The dock 162 includes slots, channels, and/or tracks 162a that are sized and dimensioned to receive a handheld scanning device 101 having a specific configuration or shape. An opening 162b extends through the lower and upper sides 160a, 160b of the dock 162 through which a portion of a cup communication interface 166 is at least partially disposed. The first cup assembly 160 further includes a retention mechanism 164 and a latch 165. More specifically, the first cup assembly 160 may be operably coupled with the top housing assembly 130 by first engaging the latch 165 with the front alignment members 139 of the top housing assembly 130. In the illustrated example, the latch 165 is in the form of a ledge that hooks and engages an underside of the front alignment members 139 to allow the first cup assembly 160 to rotate onto the base assembly 120 and the top housing assembly 130. When the cup assembly 160 is positioned adjacent to the top housing assembly, the cup retention mechanism 164 is aligned with the cup retention mechanism 137.
In some examples, the retention mechanism 164 of the first cup assembly 160 and the cup retention mechanism 137 of the top housing assembly 130 may both be in the form of throughbores or threaded openings that receive a coin screw 167 therethrough to quickly secure the components with each other. As such, a user may removably couple the cup assembly 160 with the top housing assembly 130. In some examples, the first cup assembly 160 may alternatively or additionally couple with the base assembly 120 via any number of suitable approaches. The cup assembly also includes a cup communication interface 166. With particular reference to FIGS. 10-14, the cup communication interface 166 is in the form of a flex tail interconnection having a first end 166a and a second end 166b. As illustrated in FIG. 11, the first end 166a of the cup communication interface 166 includes any number of interconnects 167 such as leaf spring connectors, pogo pin connectors, and the like, to form a communicative and/or electrical coupling to allow data and/or power transmission. In other examples, inductive charging mechanisms for electrical transmission may be used. The first end 166a of the cup communication interface 166 is operably coupled with the lower side 160a of the first cup assembly 160, and in some examples, may be positioned on a mounting support member or region. Similarly, and as illustrated in FIGS. 10, 13, and 14, the second end 166b of the cup communication interface 166 includes any number of interconnects 168 such as leaf spring connectors, pogo pin connectors, and the like to form a communicative and/or electrical coupling to allow data and/or power transmission. In other examples, inductive charging mechanisms for electrical transmission may be used. The second end 166b of the cup communication interface 166 is positioned such that at least a portion of the interconnects 168 are disposed adjacent to and/or near the dock opening 163 of the dock 162.
The interconnects 167 of the first end 166a of the cup communication interface 166 are arranged to couple with the interconnects 155 of the board communication interface 152. In some examples, the interconnects 167 may be in the form of a single row of pogo pins. Other examples are possible. Upon coupling the first cup assembly 160 with the top housing assembly 130, the interconnects 155 of the board communication interface 152 engage the interconnects 167 of the first end 166a of the cup communication interface 166. As a result, a power and/or data transmission link is formed between any devices coupled with the power and/or data transmission port 122, the circuit board assembly 150 (i.e., the power interface 154 and/or the data interface 156), the interconnects 155, the board communication interface 152, and the first and second ends 166a, 166b of the cup communication interface 166. Because the second end 166b of the cup communication interface 166 is communicatively coupled with the first end 166a of the cup communication interface 166, the interconnects 168 may provide an electrical link to the handheld scanning device 101 when disposed within the slot or channel 162a of the dock 162. As illustrated in FIG. 14, the handheld scanning device 101 includes corresponding interconnects 101a adapted to engage the interconnects 168 of the second end 166b of the cup communication interface 166. Accordingly, power and/or data may be transmitted between the handheld scanning device 101 and any components (e.g., charging devices, computing devices, etc.) coupled with the power and/or data transmission port(s) 122. As illustrated in FIG. 10, the first end 166a of the cup communication interface 166 additionally includes a cup auxiliary power and/or data transmission port 170.
This port 170 is positioned through an auxiliary opening 163a formed on the first cup assembly 160 and may receive an auxiliary power and/or data cable 172, which may provide and/or facilitate the charging of additional devices such as, for example, personal mobile computing devices. Other examples are possible. Advantageously, the cradle 100 may be used with any number of varying cup assemblies having different cup configurations. In some examples, the first cup assembly 160 may be decoupled from the top housing assembly 130 such that the base assembly 120, top housing assembly 130, and circuit board assembly 150 remain (as illustrated in FIG. 9). With reference to FIGS. 15-22, a different cup assembly may be selectively coupled with the base assembly 120, top housing assembly 130, and circuit board assembly 150 by positioning the desired coupling assembly above the top housing assembly 130 and securing and/or coupling the desired cup assembly therewith. More specifically, with reference to FIGS. 15-19, the cradle 100 may receive a second cup assembly 260 which may include similar features and/or components as the first cup assembly 160. Accordingly, such similar features will be designated with reference numerals having identical two-digit suffixes as the cup assembly 160, and will not be described in substantial detail. It is appreciated that any of the features described with respect to the first cup assembly 160 may be incorporated into the second cup assembly 260, and vice-versa. The second cup assembly 260 includes a dock 262 having a different configuration than the dock 162 of the first cup assembly 160. More specifically, the dock 262 includes two slots or channels 262a dimensioned and configured to receive a handheld scanning device 201 having a different configuration. As before, the second cup assembly 260 may be quickly decoupled from the top housing assembly 130 without needing to unplug or rearrange wires or cables.
The handheld scanning device 201 includes a scan sled and an additional terminal coupled therewith. Each of these components may be capable of receiving power and/or transmitting data, and as such, each of the slots or channels 262a includes a dock opening 263. As illustrated in FIGS. 16-19, a cup communication interface 266 includes a first end 266a having interconnects 267, a second end 266b having interconnects 268, and a middle region 266c having interconnects 269. As with the cup communication interface 166, the interconnects 267 disposed at the first end 266a are positioned on a lower side 260a of the second cup assembly 260 to engage the interconnects 155 of the board communication interface 152 of the circuit board assembly 150 when the second cup assembly 260 is coupled with the top housing assembly 130. The second end 266b and middle region 266c of the cup communication interface 266 include any number of interconnects 268, 269 such as leaf spring connectors, pogo pin connectors, and the like to form a communicative and/or electrical coupling to allow data and/or power transmission. The second end 266b and the middle region 266c of the cup communication interface 266 are positioned such that at least a portion of the interconnects 268, 269 are disposed adjacent to and/or near the dock openings 263 of the dock 262. As illustrated in FIG. 16, the interconnects 268, 269 may have different sizes and/or configurations in order to couple with desired components of the handheld scanning device 201. In some examples, both of the components of the handheld scanning device 201 may receive power and/or data transmission when disposed within the dock 262 of the second cup assembly 260. However, in other examples, a user may selectively permit power and/or data transmission of a specific component of the handheld scanning device 201 as desired.
With reference to FIGS. 20-22, third, fourth, and fifth cup assemblies 360, 460, 560 are provided which may be selectively coupled with the base assembly 120, the top housing assembly 130, and the circuit board assembly 150. The third, fourth, and fifth cup assemblies 360, 460, 560 each include similar features as the first and second cup assemblies 160, 260, and as such, similar features will be designated with reference numerals having identical two-digit suffixes as the cup assemblies 160, 260, and will not be described in substantial detail. It is appreciated that any of the features described with respect to the first and/or second cup assembly 160, 260 may be incorporated into the third, fourth, and/or fifth cup assemblies 360, 460, 560, and vice-versa. The third, fourth, and fifth cup assemblies 360, 460, 560 may include respective docks 362, 462, 562 having differently-configured slots or channels 362a, 462a, 562a that are sized and configured to receive specific handheld scanning devices 301, 401, 501, respectively. As before, the third, fourth, and fifth cup assemblies 360, 460, 560 may be quickly decoupled from the top housing assembly 130 without needing to unplug or rearrange wires or cables. In some examples, and with reference to FIG. 23, an alternative cradle 600 may be provided capable of accommodating a number of handheld scanning devices and/or additional components. In this example, the cradle 600 may include a number of base assemblies 120, each having respective top housing assemblies 130 and circuit board assemblies 150 coupled therewith. In such a configuration, any number of cup assemblies may be coupled thereto, including cup assemblies having different configurations to receive different handheld scanning devices as desired.
In some examples, the entire handheld scanning device may be disposed within these and other cup assemblies, and in other examples, a portion of the handheld scanning device (e.g., the terminal) may be retained by the desired cup assembly to be charged and/or to transmit data while the remainder of the device is used in the field. Further, in some examples, the cradle 600 may include battery receptacles 605 to receive battery units which are removed from their respective handheld scanning device. So configured, the cradle 100 described herein may be used with different cup assemblies having different interfaces as needed to reduce a number of distinct cradle SKUs in the environment. Such a configuration eliminates the need to disassemble the entire cradle, thereby avoiding problems such as incorrect reassembly, loose connections, and/or contamination. In some examples, the top housing assembly 130 may provide a liquid drain path that prevents any liquids from contacting the circuit board assembly 150 and/or any components disposed within the cavity 121 of the base assembly 120. Further, a replacement cup assembly of the same or different type may be quickly coupled with the top housing assembly 130. The above description refers to a block diagram of the accompanying drawings. Alternative implementations of the example represented by the block diagram include one or more additional or alternative elements, processes and/or devices. Additionally or alternatively, one or more of the example blocks of the diagram may be combined, divided, re-arranged or omitted. Components represented by the blocks of the diagram are implemented by hardware, software, firmware, and/or any combination of hardware, software and/or firmware. In some examples, at least one of the components represented by the blocks is implemented by a logic circuit.
As used herein, the term “logic circuit” is expressly defined as a physical device including at least one hardware component configured (e.g., via operation in accordance with a predetermined configuration and/or via execution of stored machine-readable instructions) to control one or more machines and/or perform operations of one or more machines. Examples of a logic circuit include one or more processors, one or more coprocessors, one or more microprocessors, one or more controllers, one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices. Some example logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits are hardware that executes machine-readable instructions to perform operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions. In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. 
Additionally, the described embodiments/examples/implementations should not be interpreted as mutually exclusive, and should instead be understood as potentially combinable if such combinations are permissive in any way. In other words, any feature disclosed in any of the aforementioned embodiments/examples/implementations may be included in any of the other aforementioned embodiments/examples/implementations. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The claimed invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued. Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has”, “having,” “includes”, “including,” “contains”, “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein.
The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
11861450

DETAILED DESCRIPTION OF THE DISCLOSURE A system for generating a machine-readable optical label is provided. An illustrative machine-readable optical label may be a quick-response (“QR”) code. Other illustrative machine-readable optical labels may include a linear barcode or a two-dimensional matrix barcode such as Aztec code, ShotCode, SPARQCode, and the like. The system may include a software dashboard. The dashboard may include a user interface (“UI”) that provides access to software tools for entering one or more design choices for a machine-readable optical label, such as a QR code. An associated software engine may generate a machine-readable optical label based on the user-entered design choices. A machine-readable optical label may include a plurality of modules—i.e., areas within the code that are directed to pre-defined tasks and/or functions. A module may be a dark module or a light module, or a module with a different color, in the visible spectrum or in other spectra such as infrared or ultraviolet. A scanning device, such as a smartphone, may be configured to interpret instructions encoded by a pattern of light and dark modules. For example, the scanning device may interpret the pattern of modules as a binary encoded message. A light module may represent a 0, and a dark module may represent a 1, or vice versa. A pattern of modules within a machine-readable optical label may define a data zone, position detection patterns, timing patterns, an error correction level and error correction code. The data zone may include machine readable instructions that, when scanned, trigger an action on a device used to scan the machine-readable optical label. For example, the machine-readable optical label may include instructions for launching a webpage or text message application. The instructions encoded in the data zone may prefill a destination field of the text message or insert text into the body of a message.
The instructions encoded in the data zone may trigger a display of information on the scanning device such as a product identifier or instructions on how to use the product. The more information included within the data zone, the more modules a machine-readable optical label will have to encode that information. Position detection patterns may provide instructions that orient a scanning device to identify and read the data zone. Position detection patterns may include position markers. For example, a machine-readable optical label may include three position markers (“eyes”) at a top left, top right, and bottom left of the machine-readable optical label. Position markers may be defined based on a pattern of light/dark modules. For example, a position marker may be spaced apart from the data zone by a border of light modules. The position marker may include an outer border of dark modules. The outer border may surround an inner border of light modules. The inner border of light modules may surround a core of dark modules. A position marker may be designed to include a pattern of modules that is unlikely to appear elsewhere within the machine-readable optical label. Each position marker may be linked to another position marker by a timing pattern. An illustrative timing pattern may include a horizontal line of alternating light/dark modules. An illustrative timing pattern may include a vertical line of alternating light/dark modules. Each line of alternating light/dark modules may start and end with a dark module. The position detection pattern may include an alignment pattern. An alignment pattern may overlap a timing pattern. The alignment pattern may include one or more alignment markers. An illustrative alignment marker may include an outer border of dark modules surrounding an inner border of light modules and a single dark module in the center of the marker.
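The ring structure of a position marker described above (an outer border of dark modules, surrounding an inner border of light modules, surrounding a dark core) can be sketched programmatically. The snippet below is an illustration only, not code from the disclosure; the 7x7 size and the single-ring logic are assumptions based on the conventional QR finder-pattern layout.

```python
def position_marker(size=7):
    """Build a position ("finder") marker as a grid of 1 = dark, 0 = light modules."""
    pattern = [[0] * size for _ in range(size)]
    for row in range(size):
        for col in range(size):
            # Distance from the nearest edge identifies which concentric ring we are in.
            ring = min(row, col, size - 1 - row, size - 1 - col)
            # Ring 0 is the outer dark border; ring 1 is the light border;
            # rings 2 and deeper form the central dark core.
            pattern[row][col] = 0 if ring == 1 else 1
    return pattern

marker = position_marker()
for line in marker:
    print("".join("#" if m else "." for m in line))
```

Printed with `#` for dark and `.` for light, the grid shows the familiar "eye": a solid outer frame, a light frame inside it, and a 3x3 dark core.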
The alignment pattern may allow a scanning device to determine an orientation of the machine-readable optical label. The alignment pattern may improve scanning speed of the machine-readable optical label. The alignment pattern may include markers or a pattern that allows a scanning device to orient the machine-readable optical label despite displacement of modules due to distortion. For example, the alignment pattern may allow a device to scan machine-readable optical labels applied to a curved surface. Generally, a larger machine-readable optical label will include more alignment patterns than a smaller machine-readable optical label. Size of a machine-readable optical label may be defined based on a number of modules included in the machine-readable optical label. The machine-readable optical label may include error correction code. The error correction code may be included in the data zone. An illustrative error correction code may include Reed-Solomon codes. The error correction code may be applied to restore data encoded by modules when a segment of a machine-readable optical label is missing or damaged. A machine-readable optical label may include various levels of error correction. Modules used for error correction store redundant copies of data that compensate for damaged modules that cannot be read by a scanner. An exemplary target error correction level may allow restoration of at least 15% of data bytes. The target error correction level is determined based on Reed-Solomon codes included in the machine-readable optical label. Other illustrative target error correction levels may include:

- Level L: 7% of data bytes can be restored.
- Level M: 15% of data bytes can be restored.
- Level Q: 25% of data bytes can be restored.
- Level H: 30% of data bytes can be restored.

A machine-readable optical label that includes a 30% error correction level will be scannable by a device even if 30% of the modules are damaged (soiled, washed out, faded, replaced with images).
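The four recovery levels above map naturally to a lookup table. The sketch below is illustrative only; the function name and the simple threshold test are assumptions, since real decoders measure damage in terms of unreadable Reed-Solomon codewords rather than a raw damaged fraction.

```python
# Fraction of data bytes restorable at each QR error correction level.
ECC_RECOVERY = {"L": 0.07, "M": 0.15, "Q": 0.25, "H": 0.30}

def is_scannable(level, damaged_fraction):
    """Rough check: is the damaged fraction within the level's recovery capacity?"""
    return damaged_fraction <= ECC_RECOVERY[level]
```

For example, a label generated at level H tolerates 30% damage, while the same damage defeats a level-L label, which is why labels that overlay logos on the module grid are typically generated at a high error correction level.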
Generally, the higher the level of error correction included in the machine-readable optical label, the fewer instructions can be stored within a data zone of the machine-readable optical label. An optical label according to the disclosure is provided. The label may include a primary optical label machine-readable code region. The primary optical label machine-readable code region may include a first area. The optical label may also include a secondary optical label machine-readable code region. The secondary optical label machine-readable code region may include a second area. The first area may include a magnitude that is greater than a magnitude of the second area. The primary optical label machine-readable code region may also include a first plurality of instructions that are different from a second plurality of instructions. The second plurality of instructions may be located in and/or derived from the secondary optical label machine-readable code region. In some embodiments, the first plurality of instructions, when processed in conjunction with the second plurality of instructions, may form a third plurality of instructions. The magnitude of the primary area may, in certain embodiments, be greater, by at least 30% of the magnitude of the primary area, than the magnitude of the secondary area. In some embodiments, the first plurality of instructions may be configured, in response to a scanning and a processing of the instructions in the primary optical label machine-readable code region, to direct a scanner to a first URL (Uniform Resource Locator) or other suitable location. In addition, the second plurality of instructions may be configured, in response to a scanning and a processing of the instructions in the secondary optical label machine-readable code region, to direct the scanner to a second URL. Certain embodiments may include a label that provides, in addition to the first and second plurality of instructions, a third plurality of instructions.
The third plurality of instructions may, in some embodiments, be based on the first plurality of instructions and the second plurality of instructions. In other embodiments, the third plurality of instructions may be based on one of the first and second plurality of instructions. In yet other embodiments, the third plurality of instructions may be based on a region of code that is different from the first region of code, used to derive the first set of instructions, and different from the second region of code, used to derive the second set of instructions. It should be noted as well that the third plurality of instructions may be configured to direct the scanner to a third URL or other suitable location. The third URL may be different from the first and second URLs identified above. In certain embodiments, the secondary optical label machine-readable code region may be formed from a plurality of discrete regions. The plurality of discrete regions may substantially, if not completely, surround the primary optical label machine-readable code region. Alternatively, the plurality of discrete regions may be distributed evenly or unevenly with respect to the primary code region, but not surround the primary code region. In some embodiments of the invention, the label may contain, in addition to the primary and secondary regions, a tertiary optical label machine-readable code region. In some embodiments, the tertiary optical label machine-readable code region may form an external border around the primary optical label machine-readable code region. The tertiary optical label machine-readable code region may, in certain embodiments, form an external border around the secondary optical label machine-readable code region. The tertiary optical label machine-readable code region may, in certain embodiments, form an external border around the plurality of discrete regions described above. The embodiments set forth herein may involve an optical code scanner.
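A minimal sketch of "processing in conjunction" might look as follows. The concatenate-and-deduplicate rule and the instruction strings are invented for illustration, since the disclosure does not fix a particular combination rule.

```python
# Hypothetical combination rule (an assumption for illustration): the third
# plurality of instructions is formed by merging the first and second
# pluralities while dropping duplicates and preserving order.
def derive_third_instructions(first, second):
    combined = []
    for instruction in first + second:
        if instruction not in combined:
            combined.append(instruction)
    return combined

primary = ["open https://example.com/first"]                 # primary region
secondary = ["log scan", "open https://example.com/first"]   # secondary region
print(derive_third_instructions(primary, secondary))
# -> ['open https://example.com/first', 'log scan']
```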
The scanner may be operated using an algorithm. Elements of the algorithm are set forth hereinbelow as described in the context of the various configurations of the scanner. The scanner may be configured to scan an optical label. For the purposes of this application an optical label may be understood to refer to any multi-dimensional construct that is capable of being retrieved and interpreted by, for example, an optical scanner. For example, the optical label may include machine-readable code that is set forth in the format of optical markings. The scanner may be used to process the code. The processing of the code may trigger an uploading of a set of instructions from the code to a safe zone within the scanner. The processing may also include determining whether the set of instructions in the code includes malware—short for malicious software. Malware is an umbrella term used to refer to a variety of forms of hostile or intrusive software. Such hostile or intrusive software may include computer viruses, worms, Trojan horses, ransomware, spyware, adware, scareware, and other malicious programs. It can take the form of executable code, scripts, active content, and other software. Malware is defined by its malicious intent, acting against the requirements, or contrary to the interests, of the computer user. In response to a determination that the set of instructions includes malware, the processing of the code region may trigger termination of the uploading of the code. Certain embodiments may include an optical code scanner. The scanner may be operated using an algorithm. The scanner may be configured to process the code stagewise. The stagewise processing of the code may include, in a first stage, initiating uploading a set of instructions from the code to the scanner. In a second stage, the processing may include determining whether the set of instructions comprises a valid authorization instruction within the code. 
When the set of instructions is determined to comprise the valid authorization instruction, then a third stage may include enabling completion of the uploading of the code. Some embodiments of the invention may include an optical code scanner being operated using an algorithm and configured as follows. The scanner may be configured to scan an optical label. The label may include optical label machine-readable code. The scanner may process the code. The processing may include uploading a set of instructions from the code to the scanner and storing the set of instructions in an instructions library. The scanner may also derive a picture associated with the instructions from the instructions stored within the library. The scanner may also maintain a clickable picture of the code for associating with the picture. The scanner may be further configured to display a plurality of pictures. Each of the pictures may correspond to a set of uploaded instructions stored on the scanner. In preferred embodiments, each of the plurality of pictures is selectable by a user. In response to a user selection of a picture, the scanner may be configured to execute the uploaded instructions that correspond to the selected picture. Apparatus and methods described herein are illustrative. Apparatus and methods in accordance with this disclosure will now be described in connection with the figures, which form a part hereof. The figures show illustrative features of apparatus and method steps in accordance with the principles of this disclosure. It is to be understood that other embodiments may be utilized and that structural, functional and procedural modifications may be made without departing from the scope and spirit of the present disclosure. The steps of methods may be performed in an order other than the order shown or described herein. Embodiments may omit steps shown or described in connection with illustrative methods. 
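The three-stage flow described above can be sketched as a small function. The staging-buffer size, the helper names, and the `AUTH:` prefix check are assumptions made solely for illustration.

```python
# Stagewise processing sketch. Stage 1 begins uploading into a staging
# buffer; stage 2 checks for a valid authorization instruction; stage 3
# completes the upload only when the check passes.
def process_code_stagewise(code_bytes, is_authorized):
    staged = code_bytes[:64]               # stage 1: partial upload
    if not is_authorized(staged):          # stage 2: authorization check
        return "aborted"
    return "uploaded %d bytes" % len(code_bytes)   # stage 3: complete

authorized = lambda s: s.startswith(b"AUTH:")
print(process_code_stagewise(b"AUTH:entity42|open-url", authorized))
# -> uploaded 22 bytes
print(process_code_stagewise(b"open-url", authorized))
# -> aborted
```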
Embodiments may include steps that are neither shown nor described in connection with illustrative methods. Illustrative method steps may be combined. For example, an illustrative method may include steps shown in connection with another illustrative method. Apparatus may omit features shown or described in connection with illustrative apparatus. Embodiments may include features that are neither shown nor described in connection with the illustrative apparatus. Features of illustrative apparatus may be combined. For example, an illustrative embodiment may include features shown in connection with another illustrative embodiment.

FIG. 1 shows an illustration of a QR scan experience in accordance with the prior art. QR code 102 shows a conventional QR code. Element 104 indicates a mobile device and mobile device screen that have been used to scan QR code 102. Following the scan, QR code 102 appears on the screen of the mobile device. Following the scan of QR code 102 and the processing of same by the mobile device, as shown at 104, element 106 indicates that the mobile device navigates the user to the website or other location identified by QR code 102.

FIG. 2 shows a Flowcode™ scan experience in accordance with the principles of the disclosure. The Flowcode™ is shown at 202. It should be noted that Flowcode™ 202 may be understood to leverage a conventional QR code footprint within a greater code area, as described in more detail below with respect to FIG. 3. Element 204 shows scanning the Flowcode™ 202 with a mobile device to retrieve and process same. Thereafter, processing Flowcode™ 202 takes a user to a website 206 or other suitable location associated with Flowcode™ 202.

FIG. 3 shows yet another illustrative diagram in accordance with principles of the disclosure. FIG. 3 shows a conventional QR code 302 alongside a Flowcode™ 304. Each of conventional QR code 302 and Flowcode™ 304 shows the data zone as circumscribed by a square 306.
The area circumscribed by square 306 is typically reserved for the standard QR algorithm scan in the case of QR code 302 and the Flowcode™ algorithm scan in the case of Flowcode™ 304. It should be noted that use of the area surrounded by square 306, either by conventional QR code 302 or by Flowcode™ 304, precludes other uses of the surrounded area. Typically, such other uses interfere with the standard code-associated use of the area circumscribed by square 306.

FIG. 4 shows an illustrative diagram in accordance with principles of the disclosure. FIG. 4 shows a conventional QR code 402 and a Flowcode™ 404 that is different from Flowcode™ 304 shown in FIG. 3. Specifically, Flowcode™ 404 is different in that a portion of the area of Flowcode™ 404, within the area shown circumscribed by square 306 in FIG. 3, has been leveraged to include a company logo 406. It should be noted, as well, that an increase in the scannable area of a Flowcode™ obtains yet greater advantages with respect to increased scannable information, greater efficiency of code retrieval and an increase of usable code area.

FIG. 5 shows yet another illustrative diagram in accordance with principles of the disclosure. FIG. 5 shows that the optimum scanner seeks an exclusively scannable area. To optimize brand information, however, the scannable area would have to be reduced to zero and only brand information displayed. This option, however, is unworkable, at least because a brand contains no scannable information.

FIG. 6 shows an illustrative diagram of a Flowcode™ 604 alongside a conventional QR code 602 in accordance with principles of the disclosure. QR code 602 is shown with sides of one unit of length. The diagonal across the square in QR code 602 measures 1.41 units. The area within QR code 602 is 1 square unit. The outer boundary of Flowcode™ 604 is shown as circumscribing an area 606 corresponding to the area within QR code 602. The area of Flowcode™ 604 is equal to π×(1.41/2)², which is equal to 1.56 square units.
It should be noted that because the total area of Flowcode™ 604 will be 1.56 square units, the area within Flowcode™ 604 and outside of area 606 will equal 0.56 square units—an increase over the area of QR code 602 of 56%. Some embodiments according to the current disclosure leverage this extra 0.56 square units of scannable area, and the currently unused pixels contained therein, to store additional, preferably scannable, information.

FIG. 7 shows an illustrative diagram of a Flowcode™ 708 in accordance with principles of the disclosure. Flowcode™ 708 preferably includes code region 702, as circumscribed by square 706. It should be noted that code region 702 preferably is shown with conventional Flowcode™ 708-size pixels. However, other size pixels are also possible for use with certain embodiments and certain scanning protocols according to the systems and methods of the disclosure. Flowcode™ 708 also includes external code regions 704. These external code regions represent an additional, heretofore unrealized, opportunity with respect to increasing scannable information, improving efficiency of code retrieval and increasing usable code area. To reiterate, the rim of Flowcode™ 708 may, itself, form a code area such that information can be written in the rim line itself. Accordingly, the QR pattern can be within the rim such that the rim forms an additional area of code.

FIG. 8 shows an illustrative diagram of a deconstructed Flowcode™ in accordance with principles of the disclosure. The deconstructed Flowcode™ preferably includes primary code regions 802 and 804 and external code regions 806. It should be noted that embodiments according to the invention preferably include an algorithm for implementation of scanning algorithms using a scanner. The algorithm(s) are preferably configured to interpret and process the instructions found in the pixels that are located in primary or internal code regions 802 and external code regions 806.
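The arithmetic behind the 1.56 and 0.56 square-unit figures above can be checked directly:

```python
import math

# Unit-square QR code: side 1, diagonal ~1.41 (sqrt(2) rounded to two places).
# A circular Flowcode circumscribing that square has radius diagonal/2.
side = 1.0
diagonal = round(math.sqrt(2), 2)              # 1.41
circle_area = math.pi * (diagonal / 2) ** 2    # ~1.56 square units
extra_area = circle_area - side ** 2           # ~0.56 square units

print(round(circle_area, 2))  # -> 1.56
print(round(extra_area, 2))   # -> 0.56
```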
The algorithm may also be preferably configured to interpret and process the logic found in the pixels that are located in a rim 808 that bounds the external portion of external code regions 806. The algorithm(s) may also be preferably configured to interpret and process the logic (which is embedded as machine-readable code) found in the pixels that are located in square 810. It should be noted that, while five code regions are shown in FIG. 8, this number of code regions is merely exemplary and any other suitable number of code regions is within the scope of the disclosure. In some embodiments, any code region could be either subdivided or used with different logic patterns throughout the additional region. This flexible adaptation of different code regions could preferably be used to take advantage of any pixel combination. Each pixel, or group of pixels, could be used multiple times and/or in multiple logic patterns. In certain embodiments, the algorithm may also be preferably configured to interpret and process, preferably simultaneously, two or more of the logic constructs found in the pixels that are located in external code regions 806, the logic constructs found in the pixels that are located within rim 808 and the logic constructs found in the pixels that may be located within square 810. In such embodiments, a scanner with suitable logic embedded therein could preferably retrieve two or more sets of instructions, preferably simultaneously.

FIG. 9 shows an illustrative diagram of a Flowcode™ dual experience in accordance with principles of the disclosure. FIG. 9 shows a first mobile device screen 902, a second screen 904, a third screen 910 and a fourth screen 912. The Flowcode™ is formed from internal code region 906 and external code region 908.
In certain embodiments, internal code region 906, when scanned and processed, preferably triggers display of second screen 904 which, in response to a user selection or an automatic transfer, is capable of navigating a user to a website 912 entitled "Today's Your Morning." Internal code region 906 may preferably be scanned using a conventional machine-readable optical label scanner (not shown). Preferably, the conventional machine-readable optical label scanner does not require any custom code to scan and process internal code region 906. Second screen 904 preferably shows the internal code region 906 as retrieved. In some embodiments, second screen 904 could preferably navigate a user directly to website 912, independent of showing the user second screen 904. External code regions 908, when scanned and processed, preferably directly obtain third screen 910 which, in response to a user selection or an automatic transfer, navigates a user to website 912 entitled "Today's Your Morning." External code regions 908 may preferably be scanned using a code scanner embodied in the form of a custom-configured mobile device according to the embodiments. Such a code scanner preferably is enabled, in certain embodiments, to retrieve information exclusively from external code regions 908. In alternative embodiments, such a custom scanner according to the invention may be enabled to scan and process internal code region 906 together with external code regions 908. It should be noted that all the examples shown herein are by way of example and are not intended to limit the disclosure other than by the claims recited below.
FIG. 10 shows an illustrative diagram of a condensed Flowcode™ dual experience in accordance with principles of the disclosure. FIG. 10 shows a first website 1002 which may be retrieved by a conventional machine-readable optical label scanner in response to scanning Flowcode™ 1004. FIG. 10 shows a second website 1006 which may be retrieved by a customized Flowcode™ scanner in response to scanning Flowcode™ 1004. It should be noted that such a customized Flowcode™ scanner may be configured to retrieve information from one or more of the external regions of Flowcode™ 1004. In certain embodiments, a customized Flowcode™ scanner may be configured to retrieve information from one or more of the external regions of Flowcode™ 1004 in combination with information retrieved from the internal region of Flowcode™ 1004.

FIG. 11 shows just external regions 1106, 1108, 1110 and 1112 of a Flowcode™ 1102 according to the principles of the disclosure. FIG. 11 also shows "eyes" 1114, otherwise known as position markers. Position markers may be found at a top left, top right, and bottom left of the machine-readable optical label. Position markers may be defined based on a pattern of light/dark modules. For example, a position marker may be spaced apart from the data zone by a border of light modules. The position marker may include an outer border of dark modules. The outer border may surround an inner border of light modules. The inner border of light modules may surround a core of dark modules. A position marker may be designed to include a pattern of modules that is unlikely to appear elsewhere within the machine-readable optical label. Each position marker may be linked to another position marker by a timing pattern. An illustrative timing pattern may include a horizontal line of alternating light/dark modules. An illustrative timing pattern may include a vertical line of alternating light/dark modules. Each line of alternating light/dark modules may start and end with a dark module.
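The position-marker structure just described (outer dark border, inner light border, dark core) can be generated as a small module matrix. The 7×7 size matches the conventional QR finder pattern; the rendering characters are arbitrary.

```python
# Sketch of a position marker (finder pattern): a 7x7 block with an outer
# dark border, an inner light border, and a 3x3 dark core
# (1 = dark module, 0 = light module).
def finder_pattern(size: int = 7):
    pattern = []
    for r in range(size):
        row = []
        for c in range(size):
            on_outer = r in (0, size - 1) or c in (0, size - 1)
            in_core = 2 <= r <= size - 3 and 2 <= c <= size - 3
            row.append(1 if on_outer or in_core else 0)
        pattern.append(row)
    return pattern

for row in finder_pattern():
    print("".join("#" if m else "." for m in row))
```

Printed, the pattern shows the characteristic dark-light-dark concentric squares that make the marker unlikely to occur elsewhere in the data zone.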
The position detection pattern may include an alignment pattern. An alignment pattern may overlap a timing pattern. The alignment pattern may include one or more alignment markers. An illustrative alignment marker may include an outer border of dark modules surrounding an inner border of light modules and a single dark module in the center of the marker. The alignment pattern may allow a scanning device to determine an orientation of the machine-readable optical label and/or Flowcode™. The alignment pattern may improve scanning speed of the code. The alignment pattern may include markers or a pattern that allows a scanning device to orient the code despite displacement of modules due to distortion. For example, the alignment pattern may allow a device to scan codes applied to a curved surface. Generally, a larger code will include more alignment patterns than a smaller code. The size of a code may be defined based on the number of modules included in the code. In the Flowcode™ shown in FIG. 11, the eyes may act to orient the customized scanner to retrieve the information only in internal region 1103, only in the external regions 1106, 1108, 1110 and 1112, or in both internal region 1103 and external regions 1106, 1108, 1110 and 1112. It should be noted that while four external regions are shown in FIG. 11, embodiments of the present disclosure contemplate any suitable number of discrete code regions. In certain embodiments, the scanner may be configured to retrieve information from one or both of border zones 1104 and 1116. In some embodiments, the scanner may be configured to retrieve information from one or more of border zones 1104, 1116, and external regions 1106, 1108, 1110, 1112 and/or internal region 1103. In certain embodiments, one or more of border zones 1104 and 1116 may act as environmental zones. The environmental zone may include a buffer of light modules that surround a data zone and associated position detection patterns.
The buffer may allow a scanning device to distinguish the data zone from its surrounding environmental zone. An illustrative buffer may be four light modules wide, or more or less than four light modules wide. It should be noted that, in certain embodiments set forth herein—i.e., when border zones 1104 and 1116 include scannable code—the border zones should preferably include sufficient area to accommodate environmental zones as well as areas for readable code information. While Flowcode™ 1102 is shown in circular format, with one or more of border zones 1104, 1116, and external regions 1106, 1108, 1110, 1112 and/or internal region 1103, it should be noted that a Flowcode™ or other machine-readable optical label according to the disclosure does not have to be round. In fact, a Flowcode™ or other machine-readable optical label according to the disclosure can take shapes other than circular. Moreover, preferably any suitable area—not just border zones 1104, 1116, and external regions 1106, 1108, 1110, 1112—can be leveraged to incorporate additional scannable areas. Preferably any adjacent area, which can be scanned simultaneously with, or separate from, the Flowcode™ or machine-readable optical label, can be leveraged to provide additional scannable area for use in providing an optical label according to the disclosure. A software engine, which may be used to create the code, may generate one or more of the above-described environmental zones for the code. The software engine may generate, for integrating into the final code, the buffer surrounding the internal regions of the code and the position detection patterns. The software engine may generate modules for an environmental zone surrounding the data zone. The environmental zone of a code may include marks or designs that are not intended to be interpreted by a scanning device.
The appearance of a standard QR code may be defined by one or more standards published by the International Organization for Standardization (ISO) of Geneva, Switzerland. Illustrative standards published by the ISO include ISO/IEC 18004:2015 and ISO/IEC 24778:2008, which are hereby incorporated herein by reference in their entireties. The software engine may generate a data zone for the code.

FIG. 12 shows yet another illustrative diagram of a reconstructed Flowcode™ 1200, as shown on devices 1202 and 1206, in accordance with principles of the disclosure. FIG. 12 shows retrieving and processing information from the external regions of Flowcode™ 1200. A first region 1201 may preferably provide information related to an offer 1208 associated with a website 1212. A second region 1203 may preferably provide information related to a reward 1214 related to website 1212. A third region 1205 may preferably provide information related to content 1216 of website 1212. A fourth region 1207 may preferably provide information related to data 1210 of website 1212. It should be noted that FIG. 12 also shows that Flowcode™ 1200 may be further customized to provide information in rim 1204 and/or internal region 1211. To the extent that internal region 1211 comprises code that may be retrieved by a conventional machine-readable optical label scanner, internal region 1211 may retrieve a website 1202 or other data that is different from website 1212.

FIG. 13A shows an illustrative diagram of a Flowcode™ in accordance with the principles of the invention. Flowcode™ 1302 shows scannable external regions 1303, 1304, 1306 and 1310. Flowcode™ 1302 also shows orientation markers 1312, 1314, 1316, and 1318. In addition, Flowcode™ 1302 shows an internal region 1320 that is occupied, primarily, by typically non-scannable brand information. Flowcode™ 1302 also shows rim 1322, which can contain scannable information as well.
In certain embodiments, orientation markers 1312-1318, as set forth herein, may be leveraged to enable the scanner to read external regions 1303, 1304, 1306 and 1310.

FIG. 13B shows another illustrative diagram of a Flowcode™ 1300 in accordance with the principles of the invention. Flowcode™ 1300 shows scannable external regions 1303, 1304, 1306 and 1310. Flowcode™ 1300 shows optional orientation markers 1312, 1314, and 1316. It should be noted that these markers are optional and not required in all embodiments. In addition, Flowcode™ 1300 shows an internal region 1320 that is occupied by a unique ID. The unique ID may be a linear barcode or a two-dimensional matrix barcode. Internal region 1320 may include any suitable label such as a suitable machine-readable optical label. It should be noted that the size of internal region 1320 may be limited by the error correction level of the machine-readable optical label, at least because the space available for data to be encoded in primary scanning region 1319 will be limited by the inclusion therein of internal region 1320. It should be noted as well that a unique ID (shown only with respect to internal region 1320) may also, in certain embodiments, be used to fill external regions 1303, 1304, 1306 and 1310 with readable code information.

FIG. 13C shows yet another illustrative diagram of a Flowcode™ in accordance with the principles of the invention. QR code 1332 preferably includes a primary region of code. Code area 1334 may include a secondary code region. Code area 1336 may include a third code region. It should be noted that, in this embodiment, primary code region 1332 is devoid of markers 1338-1342. Secondary code region 1334, on the other hand, includes orientation markers 1338, while third code region 1336 includes markers 1340 and 1342. The flexible presentation of markers among different code regions, or the lack of orientation markers, is within the scope of the current disclosure.

FIG. 14 shows a mobile device 1400.
Mobile device 1400 displays an exemplary Flowcode™ home screen 1402. Home screen 1402 includes a user's selectable (referred to herein in the alternative as "clickable") personal Flowcode™ 1404. Such a personal Flowcode™ 1404 may, in certain embodiments, direct a user to a user homepage associated with the selectable Flowcode™ 1404. Home screen 1402 is shown as having been associated with a single user. The home screen preferably enables a user to read and upload a code 1406 and/or create a code 1408. It should be noted that the user's personal Flowcode™ 1404 may enable a user to access a library of decoded and selectable codes. In certain embodiments, the user's personal Flowcode™ 1404 can be shared with others by having others scan the code, click the code, message the code, otherwise contact the code by electronic transmission, or by some other suitable communication process. In some embodiments, other users may be provided access to a user's library of decoded and selectable codes.

FIG. 15 shows an illustrative flow diagram of a method in accordance with the disclosure. Step 1502 shows using an optical scanner to scan an optical label. Step 1504 shows initiating uploading of a set of instructions from the optical label to the scanner. At step 1506, the scanner determines whether the uploaded set of instructions includes a valid authorization instruction. Such an authorization instruction may indicate that the code was "signed"—i.e., authored—by a pre-determined entity. At step 1508, the method indicates that, in response to determining that the instructions include a valid authorization instruction, the method completes the loading of the instructions in the code to the scanner, and the subsequent performance of instructions associated with the completion of the loading of the instructions in the code. It should be noted that in certain embodiments, a software developer's kit ("SDK") may be provided in accordance with the disclosure set forth herein.
Such an SDK may preferably include a user-accessible module for adding to optical scanner configuration code. Such an SDK module may preferably enable an application author to write an application for generating optical labels that include a unique signature. The unique signature may preferably enable the scanner application to determine whether the scanned optical label was generated by a pre-determined entity. In some embodiments, such an application may limit a scanner to processing only optical labels generated by one or more pre-determined entities. In some embodiments involving the SDK and/or the API, it should be noted that applications for scanning optical labels that include a unique signature may preferably be configured to transmit the scanning information—the scanned data, the scan time, the scan location and/or context of a scan of a machine-readable optical label—to a centralized server. At the centralized server, the scanning information may preferably be indexed and analyzed to determine trends involving users' behavior. Such retrieval, consolidation and analysis of scanning information should all preferably comply with relevant information privacy regulations. In some embodiments, an application programming interface ("API") may be used to access the validation algorithm set forth in FIG. 15. For example, if a user is building an application for QR codes using an API according to the disclosure, the app may embed a unique signature into a QR code according to such embodiments. The unique signature can preferably include information identifying the entity associated with the app and/or the creator of the scanned code. Such identity information preferably can be used by an app that includes a validation algorithm. A scanner that includes the app may verify, at the time of retrieval, the identity of the entity that generated a scanned optical label. One method of confirming a unique signature embedded in an optical label involves using cryptographic codes.
For example, an optical label generator may embed a private cryptographic key in a generated label. This embedded private key may be the unique signature of the optical label. The optical label including the private cryptographic code may be configured to be executed by an optical scanner. For example, the scanner may scan the optical label. The optical label may include code that may be signed, on behalf of the entity associated with generating the label, using the private cryptographic key. Access to the private cryptographic key may be controlled by the creator or entity associated with generating the label. To increase security, such a private cryptographic key may be signed at a pre-determined error correction level such that the private cryptographic key is not visible to the human eye. A customized scanning application that may be downloaded to, or resident on, the scanner may include a public cryptographic key. The public cryptographic key may include a 32-byte key. The customized scanning application may be customized at least because it is equipped with the public cryptographic key. The public cryptographic key may be used to validate the private cryptographic key within the optical label and thereby confirm a valid authorization instruction associated with the scanned optical label. To reiterate, a scanned optical label may be signed, preferably using a private cryptographic key, by the creator and/or generator of the label. The signing of the label may leverage an SDK or an API to integrate the private key into the generated label. The scanner, using the public cryptographic key, may validate the scanned label to confirm the valid authorization instruction.

FIG. 16 shows another illustrative flow diagram of a method in accordance with the disclosure. Similar to the method shown in FIG. 15, step 1602 shows using an optical scanner to scan an optical label. Step 1604 shows processing information derived (or otherwise extracted) from the scanned optical label.
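A runnable stand-in for the validation step is sketched below. Note the deliberate substitution: the disclosure describes an asymmetric scheme (a private key used by the label generator, a public key held by the scanning application), while this sketch uses HMAC with a shared secret purely so it runs on the standard library; the secret and payload values are invented.

```python
import hmac
import hashlib

# Simplified stand-in for the public/private-key validation described above.
# HMAC with a shared secret replaces the asymmetric signature only so the
# sketch is self-contained; a real implementation would use asymmetric keys.
SECRET = b"label-generator-secret"   # invented value

def sign_label(payload: bytes) -> bytes:
    """Label generator side: attach a signature to the label payload."""
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def has_valid_authorization(payload: bytes, signature: bytes) -> bool:
    """Scanner side: confirm the valid authorization instruction."""
    return hmac.compare_digest(sign_label(payload), signature)

payload = b"https://example.com/landing"
signature = sign_label(payload)
print(has_valid_authorization(payload, signature))      # -> True
print(has_valid_authorization(b"tampered", signature))  # -> False
```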
At step 1606, the method shows uploading a set of executable instructions that are derived (or otherwise extracted) from the scanned label. Prior to taking action based on the instructions, the instructions may be isolated in a safe zone—i.e., a zone that is separated and secured from interacting with vulnerable portions of the scanner. Such a safe zone may preferably be a memory location within the scanner where the instructions can be analyzed. For example, if the instructions direct a scanner to a pre-determined website, the website can be reviewed to determine whether it is a trusted website. This is shown at step 1608, which states, generally, determining whether the instructions include malware. Such malware may direct the scanner to the afore-mentioned untrusted website, or the instructions themselves may include damaging information, such as, for example, a computer virus. At step 1610, the scanner can, in response to a determination that the instructions include malware, preferably terminate uploading of the code. This termination preferably occurs while the code remains isolated in the safe zone, and before the scanner takes actions based on executing the instructions.

FIG. 17 shows a series of Flowcode™ screens 1702, 1704 and 1706 in accordance with the principles of the disclosure. Screens 1702, 1704 and 1706 preferably indicate an exemplary set of scans according to an application in accordance with the disclosure set forth herein. Screen 1702 preferably shows that a Flowcode™ or another machine-readable optical label can be scanned with a customized scanner. Such a scan can preferably trigger a rewards page, such as the page shown on screen 1704. In addition, screen 1706 shows that codes, once selected or clicked, can be stored, organized and displayed. It should be noted that the codes, once selected or clicked, can preferably be immediately checked for malware which would otherwise be triggered by clicking or selecting the QR code.
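A minimal sketch of the safe-zone screening in steps 1606-1610 follows, assuming the uploaded instructions resolve to a target URL and that the scanner holds a list of trusted hosts; both the host list and the return strings are assumptions for illustration.

```python
from urllib.parse import urlparse

# Instructions are parked in an isolated buffer (the "safe zone") and
# inspected before the scanner acts on them. The trusted-host list is
# an invented example.
TRUSTED_HOSTS = {"example.com", "flowcode.com"}

def screen_in_safe_zone(instruction_url: str) -> str:
    host = urlparse(instruction_url).hostname or ""
    if host not in TRUSTED_HOSTS:
        return "terminated"          # step 1610: abort while still isolated
    return "released to scanner"     # safe: allow the upload to complete

print(screen_in_safe_zone("https://example.com/offer"))  # -> released to scanner
print(screen_in_safe_zone("https://evil.test/payload"))  # -> terminated
```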
In the menu shown at 1710, such a QR code can be represented by a picture or other associated visual indicator. In addition, at 1712, the screen shows that the library may have a "recent" access button which enables users to retrieve a system-set or user-defined number of the most recently retrieved codes.

FIG. 18 shows yet another illustrative flow diagram of a method in accordance with the disclosure. Similar to the methods shown in FIGS. 15 and 16, step 1802 shows using an optical scanner to scan an optical label. Step 1804 shows processing code derived from the optical label. At step 1806, the method shows uploading a set of instructions that are derived (or otherwise extracted) from the label into the scanner. Step 1808 shows storing the set of instructions in an instructions library. At step 1810, the method preferably derives a picture associated with the instructions from the instructions that are stored within the library. It should be noted that the picture may preferably be derived from the instructions either before or after the instructions are stored within the library. Such a library may be indexed to provide a user an easily accessible list of QR codes which the user has recently accessed.

Thus, a MULTIPLEXED QUICK RESPONSE ("QR") CODE EXPERIENCE DERIVATION is provided. Persons skilled in the art will appreciate that the present invention can be practiced by other than the described embodiments, which are presented for purposes of illustration rather than of limitation. The present invention is limited only by the claims that follow.
11861451 | DETAILED DESCRIPTION At present, codes of chips on a chip tray are collected mainly by manually controlling laser cameras. The number of chips on a chip tray is large, and if only one chip code can be collected at a time, detection consumes a great deal of time, which is not beneficial to the batch output of the chips. Further, manual operation is prone to missed collection, repeated collection, and collection in the wrong positional sequence.

An embodiment of the disclosure provides a method for chip collection, including the following operations. An image to be detected is obtained; the image to be detected includes chip code images, and each chip code image is configured to identify a respective one of the semiconductor chips. Chip position information of a comparison image with a highest matching degree with the image to be detected is obtained from a database; the chip position information is configured to indicate a position of each semiconductor chip in the comparison image. A position of each of the detection regions in the image to be detected is obtained based on the chip position information; each detection region is configured to indicate a position of a respective one of the chip code images. An image of the detection region is obtained based on the position of each detection region. It is determined whether the image of the detection region includes the chip code image; and when it is determined that the detection region includes the chip code image, a chip code corresponding to the chip code image is identified and the chip code is stored in the database.

To make the objectives, technical solutions and advantages of the embodiments of the disclosure clearer, the embodiments of the disclosure will be illustrated in detail below with reference to the drawings.
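The collection flow summarized above can be sketched as a runnable miniature. Trays are modeled as dicts mapping (row, col) cells to code strings, and "matching degree" is the count of overlapping occupied cells; all names and data structures here are illustrative assumptions, not APIs from the disclosure.

```python
# Miniature of the five operations: obtain image, match against database,
# derive detection regions from the best comparison image, then decode.

def matching_degree(image, comparison):
    return len(set(image) & set(comparison["occupied"]))

def collect_chip_codes(image, database):
    # Pick the comparison image with the highest matching degree.
    best = max(database, key=lambda comp: matching_degree(image, comp))
    collected = []
    # Its chip positions give the detection regions in the detected image.
    for pos in best["occupied"]:
        code = image.get(pos)        # is a chip code image present here?
        if code is not None:
            collected.append(code)   # identify and store the chip code
    return collected

database = [
    {"occupied": [(0, 0), (0, 1), (1, 0), (1, 1)]},   # 2x2 tray layout
    {"occupied": [(0, 0), (0, 1), (0, 2)]},           # 3x1 tray layout
]
tray = {(0, 0): "A1", (0, 1): "B2", (1, 1): "C3"}     # slot (1, 0) is empty
print(collect_chip_codes(tray, database))             # ['A1', 'B2', 'C3']
```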
It should be understood by those of ordinary skill in the art that numerous technical details are set forth in the embodiments of the disclosure for a better understanding of the disclosure. However, the technical solutions of the disclosure may still be implemented without these technical details, or with variations and modifications based on the following embodiments. The following embodiments are divided for convenience of description of the disclosure and are not intended to limit the specific implementations of the disclosure. The embodiments may be combined with and referred to each other on a non-conflict basis.

FIG. 1 is a schematic flowchart of a method for chip collection according to an embodiment, FIG. 2 is a schematic flowchart for determining whether the chip code image is included according to the embodiment, FIG. 3 is a schematic diagram of a signal vector according to the embodiment, and FIG. 4 is a diagram of the relationship map between the position of the code identifier and the signal strength of the code identifier according to the embodiment. The method for chip collection according to the embodiment of the disclosure will be illustrated in detail below with reference to the drawings. With reference to FIG. 1, the method for chip collection includes the following operations.

At block 101, an image to be detected is obtained.

The image to be detected is obtained; the image to be detected includes chip code images, and each chip code image is configured to identify a respective one of the semiconductor chips. Specifically, the image to be detected is an obtained image of a chip tray. The image to be detected includes multiple chip code images, and each of the chip code images includes a respective chip code configured to calibrate and identify a semiconductor chip. In an example, the chip codes include multiple general-purpose formats, such as Code-128, DataMatrix, QR-Code, etc.
At block 102, chip position information of a comparison image with a highest matching degree with the image to be detected is obtained from a database.

The chip position information of the comparison image with the highest matching degree with the image to be detected is obtained from the database; the chip position information is configured to indicate a position of each semiconductor chip in the comparison image. Specifically, the chip position information of multiple comparison images is stored in the database. First, a matching degree of the image to be detected with each comparison image is obtained, and each matching degree is configured to represent the similarity between the image to be detected and the corresponding comparison image. Then the chip position information of the comparison image with the highest matching degree with the image to be detected is obtained based on each matching degree; the chip position information is configured to indicate the position of each semiconductor chip in the comparison image.

In an example, the size of a chip tray is not fixed, but the sizes and directions of the chips placed on the chip tray are fixed. Because the number of chips in a longitudinal direction and the number of chips in a transverse direction on the chip tray are not fixed, the positions of the chips in the image to be detected, and the identification sequence of the image to be detected, need to be identified through a comparison object with the highest similarity with the image to be detected.

At block 103, a position of each of the detection regions in the image to be detected is obtained, and an image of the detection region is obtained based on the position of each detection region.
The position of the detection region in the image to be detected is obtained based on the chip position information; the detection region is configured to mark the position of the chip code image; and the image of the detection region is obtained based on the position of the detection region. Specifically, the chip position information of the comparison image with the highest similarity is mapped to the image to be detected, the position of the detection region indicated by the chip position information is obtained, and the image of the detection region in the image to be detected is obtained based on the position of the detection region and the image to be detected.

At block 104, it is determined whether a chip code image is included.

It should be noted that, according to other embodiments, before determining whether a chip code image is included in the image of the detection region, the method for chip collection further includes the following operations. It is determined whether a code identifier is included in the image of the detection region; and when it is determined that a code identifier is included in the image of the detection region, the operation of determining whether the chip code image is included in the image of the detection region is performed. In this case, the operation of determining whether a chip code image is included in the image of the detection region specifically includes determining whether the code identifier is the chip code image. By first determining whether a code identifier is included in the detection region, and only then determining whether the code identifier is the chip code image, the data processing load placed on the system when examining the image in the detection region is reduced, thereby speeding up the process of identifying the chips.

Specifically, the operation of determining whether a chip code image is included in the image of the detection region includes the following operations.
With reference to FIG. 2, the present embodiment will be described in detail with an example in which the image of the detection region includes a code identifier. Specifically, when the image of the detection region includes a code identifier, the method for chip collection according to the embodiment includes the following operations.

At block 111, the image of the detection region is obtained. The image of the detection region is obtained, and the image of the detection region includes the code identifier.

At block 112, a signal vector for analyzing the code identifier is set and the signal strength of the code identifier on the signal vector is obtained. Specifically, with reference to FIG. 3, a signal vector 120 for analyzing the code identifier is set in the image of the detection region and the signal strength of the code identifier on the signal vector 120 is obtained.

At block 113, a relationship map between a position of the code identifier and the signal strength of the code identifier is obtained. Specifically, with reference to FIG. 4, the relationship map between the position of the code identifier and the signal strength of the code identifier is generated based on the signal vector and the signal strength of the code identifier on the signal vector. It should be noted that according to the embodiment, the relationship map between the position of the code identifier and the signal strength of the code identifier is a bar graph, which is only for illustration. According to other embodiments, the relationship map may be a line graph or a table. It should be understood by those skilled in the art that any diagram illustrating the relationship between the position of the code identifier and the signal strength of the code identifier shall fall within the scope of the disclosure.
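Block 112 can be sketched as below, assuming the signal vector is a straight line sampled at evenly spaced points: gray values are read along the vector through the detection region, producing the position/strength pairs that the relationship map of FIG. 4 plots.

```python
# Minimal sketch: sample gray values along a straight signal vector from
# `start` to `end` (row, col) through a 2D detection-region image.
def sample_along_vector(image, start, end, samples):
    (r0, c0), (r1, c1) = start, end
    profile = []
    for k in range(samples):
        t = k / (samples - 1)                 # walk from start to end
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        profile.append(image[r][c])           # signal strength at position k
    return profile

region = [[0, 255, 0, 255],
          [0, 255, 0, 255]]                   # barcode-like light/dark bars
print(sample_along_vector(region, (0, 0), (0, 3), 4))   # [0, 255, 0, 255]
```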
With further reference to FIG. 2, at block 114, it is determined whether the code identifier is the chip code image. Specifically, whether the code identifier in the image of the detection region is the chip code image is determined based on a signal strength difference between the signal strength of the code identifier in the relationship map and the signal strength of the code identifier at a preset position. The operation of determining whether the code identifier in the image of the detection region is the chip code image includes the following operations.

Here, i is configured to represent a sequential position index value, j is configured to represent a current position index value, a_n is configured to represent the signal strength at code identifier position n, and d_n is configured to represent a determination value at code identifier position n. When a_i is greater than a_j, d_i=1, and when a_i is not greater than a_j, d_i=0.

A1: in an initial state, it is set that i=1 and j=1, and a value of d_1 is obtained based on a_0 and a_1.

A2: i is incremented by 1, and a value of d_i is obtained.

A3: it is determined whether the values of d_i and d_1 are equal; A4 is performed when d_i=d_1, and A5 is performed when d_i≠d_1.

A4: it is set that j=i, and A2 is performed.

A5: it is determined whether |i−j| is greater than t; when |i−j|>t, A6 is performed, and when |i−j|≤t, A2 is performed, where t is configured to represent a tolerance value.

A6: it is set that j=i, s is incremented by 1, d_i=1−d_i, and A2 is performed.

A7: A2 is performed when s<b, and it is determined that the image of the detection region includes the chip code image when s≥b, where b is configured to represent a required number of standard amplitudes, and s is configured to represent a counted number of amplitudes at the current position. When a large enough amplitude difference exists between the signal strengths, that amplitude difference is a standard amplitude.
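One plausible reading of steps A1-A7, offered as a hedged sketch: the published steps leave some control flow ambiguous (e.g. exactly when A7 is reached, and the initial reference position), so this implementation interprets them as counting sustained direction changes. A change of direction (rising vs. falling relative to the reference position j) that persists for more than t positions counts as one standard amplitude, and a chip code image is declared once at least b amplitudes are counted.

```python
# Hedged interpretation of A1-A7: count sustained rising/falling flips in
# the signal-strength profile; enough flips means a chip code image exists.
def has_chip_code(strength, t=1, b=4):
    j = 0                  # reference position (A1, taken here as 0)
    direction = None       # current determination value d
    s = 0                  # counted standard amplitudes
    for i in range(1, len(strength)):
        d_i = strength[i] > strength[j]       # d_i per the definition above
        if direction is None or d_i == direction:
            direction = d_i
            j = i                             # A4: advance the reference
        elif i - j > t:                       # A5/A6: sustained flip
            s += 1
            direction = d_i
            j = i
        if s >= b:                            # A7: enough standard amplitudes
            return True
    return False

bars = [10, 10, 200, 200, 10, 10, 200, 200, 10, 10, 200, 200, 10, 10]
noisy = [10, 11] * 5                          # ripple within tolerance
print(has_chip_code(bars), has_chip_code(noisy))   # True False
```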
When a sufficient number of standard amplitudes exist in the image of the detection region, a chip code image exists in the image of the detection region. When the image of the detection region includes a chip code image, the operation in block 105 is performed to identify a chip code corresponding to the chip code image and store the chip code in the database. When the image of the detection region includes no chip code image, it is determined whether an image of a next detection region includes a chip code image.

At block 105, a chip code corresponding to the chip code image is identified and the chip code is stored in the database.

It should be noted that some chip codes may be extremely small in size, such as 4 mm*4 mm, making them difficult to identify. Further, in gray level images, the average brightness difference between the codes and the background may be only 8 gray values, which also makes the chip codes difficult to identify. According to an embodiment, before identifying the chip code corresponding to the chip code image, the method for chip collection further includes the following operations. The image of the detection region corresponding to the chip code image is obtained, the image of the detection region is magnified to magnify the chip code image, and the magnified chip code image is obtained. In an example, the image of the detection region may be magnified by 1 time, 1.5 times, 2 times, etc. Specifically, the specific magnification may be manually set for a system. The chip code image is identified after magnification, thereby improving the success rate of identifying the code.

Compared with the related art, the image to be detected includes multiple chips; the chip position information of the image to be detected is obtained through the chip position information of the comparison image with the highest matching degree with the image to be detected in the database, and the position of the detection region in the image to be detected is obtained.
That is, the chip position information of the multiple chips in the image to be detected is obtained through the chip position information of the comparison image in the database, the positions of the multiple chips in the image to be detected are automatically obtained, and then it is determined whether the image of the detection region includes a chip code according to the image of the detection region. Therefore, multiple chip codes are identified simultaneously, and the multiple chips in the image to be detected are identified automatically, thereby saving chip detection time and contributing to the batch output of the chips.

The foregoing operations are divided only for clarity of description. Multiple operations may be combined into one operation, or one operation may be split into multiple operations when the method is performed; as long as the operations include the same logical relationship, they are all within the scope of the disclosure. Adding insignificant modifications to the process, or introducing insignificant designs, without changing the core design of the process is within the scope of the disclosure.

Another embodiment of the disclosure relates to a method for chip collection. Different from the foregoing embodiments, according to this embodiment, the chip code is binarized before the chip code is stored, thereby improving the efficiency of storing the chip code.

FIG. 5 is a schematic flowchart of the method for chip collection according to the embodiment. The method for chip collection according to the embodiment will be illustrated in detail below with reference to the drawings, and the parts that are the same as or correspond to those of the foregoing embodiments will not be illustrated in detail below. With reference to FIG. 5, the method for chip collection includes the following operations.

At block 201, an image to be detected is obtained.
At block 202, chip position information of a comparison image with a highest matching degree with the image to be detected is obtained from a database.

At block 203, a position of each of the detection regions in the image to be detected is obtained, and an image of the detection region is obtained based on the position of each detection region.

At block 204, it is determined whether a chip code image is included. When the image of the detection region includes a chip code image, the operation in block 205 is performed: the chip code image is binarized to obtain the chip code. When the image of the detection region includes no chip code image, it is determined whether an image of a next detection region includes a chip code image.

At block 205, the chip code image is binarized to obtain the chip code.

The chip code image is binarized to obtain the chip code corresponding to the chip code image. By finding an optimal threshold, the chip code image is transformed into a binarized image, so as to effectively remove the non-code region and effectively classify the 0 and 1 information in the code images, thereby improving the success rate of chip collection. Specifically, the operation of binarizing the chip code image includes the following operations.

The image of the detection region where the chip code image is located is obtained, to obtain x, configured to represent a total number of pixels in the image of the detection region where the chip code image is located. Then w and e are obtained, where e=w*x, e is configured to represent a binarization evaluation value, and w is configured to represent a preset binarization evaluation percentage. A first accumulated value of the signal strength of the chip code image from low to high is obtained; and when the first accumulated value is not less than e, k1, configured to represent a first position for obtaining a binarization threshold, is obtained.
A second accumulated value of the signal strength of the chip code image from high to low is obtained; and when the second accumulated value is not less than e, k2, configured to represent a second position for obtaining the binarization threshold, is obtained. Then m is obtained, where m=(k1+k2)/2, and m is configured to represent the binarization threshold. A code identifier with a signal strength greater than m is calibrated to be 1, and a code identifier with a signal strength not greater than m is calibrated to be 0.

At block 206, a code detection is performed on the chip code.

According to the embodiment, the method for chip collection further includes the following operations. After identifying the chip code corresponding to the chip code image and before storing the chip code in the database, a code detection is performed on the chip code, and the operation in block 207 is performed, that is, the chip code is stored in the database, when the chip code conforms to a code detection rule for the code detection. Specifically, the operation of performing the code detection on the chip code includes the following operation. It is determined whether a length of the chip code is a compliant length, and it is determined whether each character of the chip code is a compliant character.

In an example, when collecting the chip codes, the collection may be successful but the code may be misjudged; therefore it is necessary to perform the code detection on the obtained chip code. When the chip code conforms to the code detection rule of the database, the chip code is a compliant code; otherwise, the chip code is a non-compliant code. Specifically, a compliant length of a code string is defined, and the compliant length refers to a range, such as [13, 20, 22-25]: a length of "13", "20" or "between 22 and 25" falls within the compliant length.
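The binarization of block 205, described above, can be sketched as follows. The accumulation from both ends is approximated here by indexing into the sorted pixel values (a percentile), which is one way to realize "accumulate until the running total is not less than e"; the exact accumulation detail is an assumption.

```python
# Hedged sketch of block 205: derive k1 and k2 from the low and high ends
# of the pixel distribution, threshold at their midpoint m, and calibrate
# each pixel to 1 or 0.
def binarize(pixels, w=0.1):
    x = len(pixels)                   # total number of pixels
    e = w * x                         # binarization evaluation value
    ordered = sorted(pixels)
    k1 = ordered[min(int(e), x - 1)]          # first (low-side) position
    k2 = ordered[max(x - 1 - int(e), 0)]      # second (high-side) position
    m = (k1 + k2) / 2                         # binarization threshold
    return [1 if p > m else 0 for p in pixels]

patch = [12, 10, 250, 245, 11, 248, 9, 252]
print(binarize(patch))                        # [0, 0, 1, 1, 0, 1, 0, 1]
```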
Character forms are also defined, such as English letters, decimal digits, hexadecimal digits, etc. For example, [B-G, Z, h-k; 1-3, 7, 9; 0-F; 3-B] denotes a rule in which ";" is configured to separate the character rules, and the rule is only applicable when the chip code is a four-character code string: the first range of the compliant character includes English letters between uppercase letters B to G, or Z, or lowercase letters h to k; the second range of the compliant character includes decimal digits between 1 to 3, or 7, or 9; the third range of the compliant character includes hexadecimal digits between 0 to 9 or A to F; and the fourth range of the compliant character includes hexadecimal digits between 3 to 9 or A to B.

At block 207, the chip code is stored in the database.

The foregoing operations are divided only for clarity of description. Multiple operations may be combined into one operation, or one operation may be split into multiple operations when the method is performed; as long as the operations include the same logical relationship, they are all within the scope of the disclosure. Adding insignificant modifications to the process, or introducing insignificant designs, without changing the core design of the process is within the scope of the disclosure.

Because the above-mentioned embodiments correspond to the present embodiment, the present embodiment may be implemented in cooperation with the above-mentioned embodiments. The related technical details mentioned in the above-mentioned embodiments are further applicable to this embodiment, and the technical effects achieved in the above-mentioned embodiments may also be achieved in this embodiment, which will not be repeated herein. Accordingly, the related technical details mentioned in this embodiment may further be applicable to the above-mentioned embodiments.
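The length rule and per-character rule described above can be sketched as two checks. The rule encodings below (tuples of length ranges, and one regular-expression character class per position for the four-character example) are an illustrative simplification, not a format from the disclosure.

```python
# Sketch of the code detection rules: compliant length per the example
# [13, 20, 22-25], and per-position character classes per the example
# [B-G, Z, h-k; 1-3, 7, 9; 0-F; 3-B].
import re

def length_ok(code, ranges=((13, 13), (20, 20), (22, 25))):
    """Compliant length per the example range [13, 20, 22-25]."""
    return any(lo <= len(code) <= hi for lo, hi in ranges)

FOUR_CHAR_RULES = [r"[B-GZh-k]", r"[1-379]", r"[0-9A-F]", r"[3-9AB]"]

def chars_ok(code, rules=FOUR_CHAR_RULES):
    """Each character must match its position's class; length must fit."""
    return len(code) == len(rules) and all(
        re.fullmatch(rule, ch) for rule, ch in zip(rules, code))

print(length_ok("A" * 23), length_ok("A" * 5))   # True False
print(chars_ok("C7F9"), chars_ok("X7F9"))        # True False
```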
Another embodiment of the disclosure provides a method for chip positioning, including the following operations: obtaining an image to be positioned based on a positioning device; dividing the image to be positioned into regions to obtain multiple sub-positioning images, with a chip in each sub-positioning image; obtaining the position of the chip in each sub-positioning image; and obtaining chip position information of the image to be positioned based on the position of the chip in each sub-positioning image and the position of each sub-positioning image in the image to be positioned.

FIG. 6 is a schematic flowchart of the method for chip positioning according to the embodiment, and FIG. 7 is a schematic flowchart for obtaining sub-positioning images according to the embodiment. The method for chip positioning according to the embodiment of the disclosure will be illustrated in detail below with reference to the drawings. With reference to FIG. 6, the method for chip positioning includes the following operations.

At block 301, the image to be positioned is obtained by a positioning device.

At block 302, multiple sub-positioning images are obtained, and a position of the respective chip in each sub-positioning image is obtained. The image to be positioned is divided into regions to obtain multiple sub-positioning images, where each sub-positioning image includes a chip, and the position of the chip in each sub-positioning image is obtained. With reference to FIG. 7, the operation of dividing the image to be positioned into regions to obtain multiple sub-positioning images includes the following operations.

At block 311, a tray region is obtained. The tray region is obtained based on a tray outer frame in the image to be positioned. In an example, the tray chip image obtained by a current camera is obtained, and a user interface is presented to render the obtained tray chip image as the background.
The tray region is then divided into equal regions, and multiple sub-positioning images with the same size are obtained. The operation of dividing the tray region into equal regions includes the following operations.

At block 312, a number of chips in a longitudinal direction and a number of chips in a transverse direction in the image to be positioned are obtained. Specifically, the number of the longitudinal chips and the number of the transverse chips in the image to be positioned are obtained; the tray region is equally and longitudinally divided based on the number of the chips in the longitudinal direction, and the tray region is equally and transversely divided based on the number of the chips in the transverse direction.

At block 313, the sub-positioning images are obtained. In an example, the tray outer frame may be selected on the user interface with a mouse; the number of the chips in the longitudinal direction and the number of the chips in the transverse direction (v, c) on the tray are keyed in; the tray outer frame is longitudinally divided into v equal segments and transversely divided into c equal segments to obtain v*c sub-positioning images with the same area; and the sub-positioning images are sequentially numbered from 1 to v*c in the chip region, following a top-left to bottom-right rule.

With further reference to FIG. 6, the operation in block 303 is that the chip position information of the image to be positioned is obtained. The chip position information of the image to be positioned is obtained based on the position of the respective chip in each sub-positioning image and the position of each sub-positioning image in the image to be positioned.
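The equal division of blocks 311-313 can be sketched as below, under the assumption that the tray outer frame is an axis-aligned rectangle: the frame is divided into v equal longitudinal segments and c equal transverse segments, and the v*c sub-positioning regions are numbered 1..v*c from top-left to bottom-right.

```python
# Minimal sketch: divide the tray outer frame into v*c equal regions,
# numbered sequentially from top-left to bottom-right.
def divide_tray(x0, y0, width, height, v, c):
    """v: chips in the longitudinal direction, c: in the transverse."""
    cell_w, cell_h = width / c, height / v
    regions = {}
    for row in range(v):
        for col in range(c):
            number = row * c + col + 1        # top-left to bottom-right
            regions[number] = (x0 + col * cell_w, y0 + row * cell_h,
                               cell_w, cell_h)
    return regions

regions = divide_tray(0, 0, 300, 200, v=2, c=3)   # 6 equal regions
print(regions[1], regions[6])
```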
Compared with the related art, the image to be positioned is obtained by the positioning device, and the image to be positioned is divided into regions to obtain the chip position information of the image to be positioned. The chip position information of the image to be positioned is then stored in the database as chip position information of a comparison image. Subsequently, when the method for chip collection is performed, the chip position information of the image to be positioned is mapped to the image to be detected through the matching degree between the image to be detected and the comparison image. Therefore, the codes of the chips on the chip tray are automatically collected, thereby saving chip detection time and contributing to the batch output of the chips.

The foregoing operations are divided only for clarity of description. Multiple operations may be combined into one operation, or one operation may be split into multiple operations when the method is performed; as long as the operations include the same logical relationship, they are all within the scope of the disclosure. Adding insignificant modifications to the process, or introducing insignificant designs, without changing the core design of the process is within the scope of the disclosure.

Another embodiment of the disclosure provides a method for chip positioning.
Different from the previous embodiment, in this embodiment, in the process of obtaining the chip position information of the image to be positioned based on the sub-positioning images, a two-dimensional code image in each sub-positioning image is obtained based on the sub-positioning image, and then the chip position information of the image to be positioned is obtained through the multiple two-dimensional code images in the multiple sub-positioning images. This greatly reduces the image analysis range, effectively improves the processing speed, eliminates interference from background factors, and effectively improves the accuracy, thereby reducing the data processing load of obtaining the chip position information.

FIG. 8 is a schematic flowchart of the method for chip positioning according to the embodiment, FIG. 9 is a schematic diagram of an image to be positioned according to the embodiment, FIG. 10 is a schematic diagram of a sub-positioning image according to the embodiment, and FIG. 11 is a schematic diagram of a two-dimensional code image according to the embodiment. The method for chip positioning according to the embodiment will be illustrated in detail below with reference to the drawings, and the parts that are the same as or correspond to those of the foregoing embodiment will not be illustrated in detail below. With reference to FIG. 8, the method for chip positioning includes the following operations.

At block 401, an image to be positioned is obtained by a positioning device. With reference to FIG. 10, a sub-positioning image 502 is obtained based on the image to be positioned 501 as shown in FIG. 9.

With further reference to FIG. 8, the operation in block 402 is that multiple sub-positioning images are obtained, and a position of the respective chip in each sub-positioning image is obtained.

At block 403, multiple two-dimensional code images in the multiple sub-positioning images are obtained.
With reference to FIG. 11, the two-dimensional code image 503 is obtained based on the corresponding sub-positioning image 502 as shown in FIG. 10. Specifically, the two-dimensional code image in each sub-positioning image is obtained based on the position of the respective chip in each sub-positioning image, and each two-dimensional code image is configured to cover only the position of the respective chip. Specifically, the sub-positioning images are simultaneously adjusted in size and position to obtain adjusted images. When it is determined that no non-chip code image exists in a region covered by each adjusted image and each adjusted image covers a respective chip code image, the adjusted images are set to be the two-dimensional code images. More specifically, the multiple sub-positioning images are simultaneously adjusted in size based on at least one of a zoom-in instruction or a zoom-out instruction, so that no non-chip code image exists in the region covered by each adjusted image; and the multiple sub-positioning images are simultaneously adjusted in position based on a direction control instruction, so that each adjusted image covers the respective chip code image.

At block 404, the chip position information of the image to be positioned is obtained based on the multiple two-dimensional code images. The operation of obtaining the chip position information of the image to be positioned based on the position of the respective chip in each sub-positioning image and the position of each sub-positioning image in the image to be positioned includes the following operation: the chip position information of the image to be positioned is obtained based on the multiple two-dimensional code images. The chip position information includes the multiple two-dimensional code images and the positions of the multiple two-dimensional code images in the image to be positioned.
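The simultaneous adjustment described for block 403 can be modeled as below: each candidate region is a square (x, y, side), and a zoom-in/zoom-out or direction control instruction is applied to every region at once. The command names and step size are illustrative assumptions.

```python
# Hedged model: apply one zoom or direction instruction to all candidate
# two-dimensional code regions simultaneously.
def apply_command(regions, command, step=2):
    dx, dy, ds = {"+": (0, 0, step), "-": (0, 0, -step),
                  "left": (-step, 0, 0), "right": (step, 0, 0),
                  "up": (0, -step, 0), "down": (0, step, 0)}[command]
    return [(x + dx, y + dy, side + ds) for x, y, side in regions]

regions = [(10, 10, 20), (40, 10, 20)]
regions = apply_command(regions, "+")       # all regions grow together
regions = apply_command(regions, "right")   # all regions shift together
print(regions)                              # [(12, 10, 22), (42, 10, 22)]
```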
In an example, the region of a sub-positioning image is modified to be a square whose side length is the minimum side length of the sub-positioning image; the square region is then an image to be adjusted. The images are adjusted by controlling the "+" key or "−" key of a keyboard, so that all the images to be adjusted may be magnified or reduced simultaneously; each two-dimensional code region is controlled by a direction key of the keyboard, and all the two-dimensional code regions are simultaneously adjusted in position, until the images cover all the chip codes.

The foregoing operations are divided only for clarity of description. Multiple operations may be combined into one operation, or one operation may be split into multiple operations when the method is performed; as long as the operations include the same logical relationship, they are all within the scope of the disclosure. Adding insignificant modifications to the process, or introducing insignificant designs, without changing the core design of the process is within the scope of the disclosure.

It should be understood by those of ordinary skill in the art that the foregoing embodiments are specific embodiments for implementing the disclosure, and in practical application, variations in form and detail may be made thereto without departing from the spirit and scope of the disclosure.
11861452 | DETAILED DESCRIPTION Reference will now be made in detail to specific example embodiments for carrying out the inventive subject matter. Examples of these specific embodiments are illustrated in the accompanying drawings, and specific details are set forth in the following description in order to provide a thorough understanding of the subject matter. It will be understood that these examples are not intended to limit the scope of the claims to the illustrated embodiments. On the contrary, they are intended to cover such alternatives, modifications, and equivalents as may be included within the scope of the disclosure. Embodiments described herein generally relate to the technical field of signal processing, and in particular to processing circuits, systems, instructions, and methods for fixed-point quantized softmax layers for neural networks. In particular, embodiments describe the generation and use of a compact softmax lookup table structure generated with an index of the lookup table representing a distance between a current input and a maximum possible value of the softmax input. This enables improvements to a device by reducing memory resources for softmax operations and further reducing the associated processing resources for softmax operations when compared with similar operations using larger tables or deconstructed index entries. Softmax, also known as a normalized exponential function, is a function that takes a vector of input values and normalizes it into a probability distribution. In neural networks, softmax is used to map the non-normalized output of a network to a probability distribution for the output classes of the network. Neural networks and associated softmax layers of such networks are being developed and deployed in a wide range of markets, with increasing resource and responsiveness requirements.
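For reference, the normalized exponential function just described can be evaluated directly in floating point (an illustration only, distinct from the fixed-point table method of the embodiments below; subtracting the maximum before exponentiating is a standard numerical-stability step that does not change the result):

```python
import math

def softmax(x):
    """Normalized exponential: e^(x_j) / sum_i e^(x_i).

    Subtracting max(x) avoids floating-point overflow for large
    inputs while leaving the ratios, and hence the output, unchanged.
    """
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]
```

The outputs are nonnegative and sum to one, which is what makes them usable as a probability distribution over output classes.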
As part of such trends, computational hardware for neural network-focused computations is being pushed to the end device (e.g., phones, cameras, vehicles, etc.) rather than concentrated at remote networked servers. This enables faster response times for network decisions, as well as specialized computational systems focused on the particular networks at the end device. While energy-efficient circuits are able to deliver trillions of multiply accumulations (TMACs) for the computational layers of neural networks, the mathematical processes for computing softmax values remain excessively resource-intensive for the general processing resources at such end devices. Instead of calculating softmax values directly, lookup tables have traditionally been used to store them. The simplest lookup table structure, however, requires a separate lookup table for each input size, as described in more detail below. As input sizes increase, such tables can require many gigabytes of information, which again outstrips the available resources of many end devices. Previous attempts to simplify such tables have included decomposing inputs of exponential functions to multiple inputs with corresponding exponential functions in lookup table generation. This results in two large lookup tables with added computational costs. Even so, such systems result in both memory use and additional computation costs, which are significant for end devices. Embodiments described herein improve the operation of end devices with neural network functionality by decreasing the resources used in softmax layers. This is done using a quantized lookup table, which degrades the accuracy of softmax values while greatly reducing needed resources. In some embodiments, the accuracy of output softmax values is degraded by between 0.1 percent and 0.01 percent, while reducing memory resource usage from multiple gigabytes to less than one hundred kilobytes.
In various embodiments, the particular resources used will depend on the particular design of the neural network. In addition to reducing the memory resources used, computation resource use is also reduced by lowering the processing resources needed to fetch values from multiple large tables in memory. Some embodiments described herein generate such improvements via the use of a single lookup table. Instead of separate lookup tables representing the input value with a lookup table index based on the number of bits, embodiments described herein use a lookup table index based on a distance between a current input and a maximum possible value of the input. This enables a single softmax lookup table. Because this single lookup table is not decomposed, no additional computation costs are incurred. Additionally, in contrast to computationally expensive floating-point data types typically used in neural networks that provide a way to represent a wide range of numbers precisely, fixed-point data types are limited in the range of values that can be represented, but can provide options for relatively low computational costs compared to floating-point data types. For a softmax layer with a significant number of inputs and outputs, many of the table entries are zero. Embodiments described herein can further reduce the size of the single lookup table by removing all duplicate entries with a zero value. For a sixteen-bit input, embodiments described herein can use a table with a maximum size of 64 kilobytes (kb), but elimination of redundant zeros can reduce the size of such a table to approximately 20-30 kb. Other embodiments can use different input sizes, and the elimination of redundant zeros can result in different table sizes in different embodiments. Aspects of some embodiments thus involve fixed-point quantization of floating-point neural networks (e.g., neural networks represented using floating-point data types), although embodiments are not limited to such implementations.
For example, consistent with some embodiments, non-normalized output data from a neural network comprising floating-point representations of probabilities associated with network analysis are accessed and quantized into fixed point data. This fixed point data can be mapped to normalized probability data using a table to estimate softmax values for the non-normalized output data. Errors associated with such quantization can be configured to be less than 1% (less than 0.1 or 0.01 percent in various embodiments), while providing significant reductions in processing resources used by a softmax layer. Various embodiments for generating a table that can be used for such fixed point softmax operations, as well as embodiments for using such a table, are described in detail below. FIG.1is a diagram illustrating aspects of a neural network with a quantized softmax layer in accordance with some embodiments. With reference toFIG.1, a high-level image segmentation process100is illustrated, according to some example embodiments. As shown, the process100is divided into two phases: training and deployment. In both phases, a softmax layer in accordance with embodiments described herein can be used to normalize the output of a neural network.FIG.1particularly illustrates an embodiment directed to image segmentation, but other embodiments, such as embodiments directed to data classification, or segmentation of types of data other than image data, or any other such application of a neural network with a normalized output, can be used. The training phase may be performed once per database and is typically a very computationally intensive server-based operation. The deployment phase uses filter weights from the training phase, and is used by an application which can be operating on a server or on a client device, such as a phone. Embodiments described herein provide particular benefits to a resource-constrained device such as a phone.
In the training phase, a labeled data set (e.g., a set of images labeled according to class) is provided as input to a multi-layered function (e.g., an FCN) as training data. The multi-layered function iteratively derives a set of filter weights from the labeled data set (e.g., through stochastic gradient descent error minimization) for subsequent use in the deployment phase in estimating pixel labels in input images. Once the filter weights for the application are selected, a lookup table for the softmax layer of the deployment phase can be generated using operations described below. In the deployment phase, a neural network analyzes input data using the estimated filter weights, and then normalizes the output data using the quantized softmax layer with the lookup table. In other embodiments, various different combinations of training and deployment can be used for generating the lookup table and then the lookup table can be used for quantized softmax determinations as described below. FIG.2is a diagram illustrating aspects of a neural network200with a quantized softmax layer260in accordance with some embodiments.FIG.2shows layers of a neural network200, including intermediate layers220,240, and softmax layer260. Intermediate layer220includes a plurality of neurons222, which receive data from previous layers (not shown) in a neural network deployment. Weights for the neurons are set so that an input to an initial layer is processed using multiple layers. For a floating point neural network, a non-normalized floating point output230made up of a set of floating point values is communicated to layer240. Layer240is a quantization layer that determines a quantization level for quantizing the set of floating point values into fixed point values. For example, in 8-bit quantization, the fixed-point output250is an 8-bit number. Thus, with 8-bit quantization, the quantization level is 256, given that with 8 bits there are 256 possible bit patterns.
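An 8-bit quantization step of the kind performed by layer 240 can be sketched as follows (a hypothetical illustration: scaling against an empirical range f_max and the rounding and clamping conventions are assumptions, since the disclosure leaves these details open):

```python
def quantize_8bit(values, f_max):
    """Map floating-point layer outputs onto the 256 levels of an
    unsigned 8-bit fixed-point representation by scaling against an
    empirical range f_max and clamping to the representable interval."""
    levels = 2**8 - 1  # 255 is the largest code, giving 256 levels
    return [max(0, min(levels, round(v / f_max * levels)))
            for v in values]
```

Values outside the calibrated range simply saturate at 0 or 255, which is one common way to keep the fixed-point output well defined.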
Softmax layer260receives the non-normalized fixed point output250, and uses this data to generate a normalized fixed point output as described in more detail below. FIG.3is a diagram illustrating aspects of a neural network with a quantized softmax layer360in accordance with some embodiments. As mentioned above, softmax is a generalization of logistic regression, which involves computations of exponential functions: softmax(x_j) = e^(x_j) / Σ_{i=0}^{k−1} e^(x_i)   (1) where x is the input, k is the number of input components, and i and j are indices greater than or equal to zero and less than k. Rather than consuming processing resources to calculate such functions, the results of such functions can be stored in lookup tables to reduce real-time computation costs at the expense of memory resources. In conventional softmax implementations, multiple lookup tables are used, with the number of lookup tables identical to the range of input (e.g., k tables), and the entries for an example table are given by: e^(x_j − d − max_(m−1)) * scale * (2^N − 1)   (2) where m is the number of calibration inputs (e.g., calibration images), max is the maximum value of the softmax inputs (e.g., the maximum value output by the intermediate layers or the preceding layer to the softmax layer for m calibration images), N is the number of bits in the input values, d is an offset value with x + d used as an index of the lookup table, and scale is a scaling factor. For systems with a signed sixteen-bit input data type, 65536 lookup tables are needed for complete solution detail, with a relatively large size for each table so that the total amount of memory used for lookup tables can be greater than eight gigabytes (GB). Even for a smaller, eight-bit data type, the size of a single lookup table can be 512 bytes with 16 bits for each entry and a memory usage for 256 lookup tables of 128 kilobytes (kb), but such an input significantly limits neural network applications.
For example, for a network configured for 1000 classification classes, a sixteen-bit input is recommended. Instead of the above conventional system, embodiments described herein use a single small lookup table for quantized softmax. In the lookup table according to various embodiments, the index of the lookup table represents the distance between the current input and the maximum possible value of the input. This allows merging of multiple lookup tables into a single lookup table. Further, the size of the single lookup table can be reduced by removing duplicate entries with a content of zero. This reduces the size of the single table significantly in certain deployments. To achieve this, in some embodiments a reduced number of lookup table entries is used to index lookup table computations, which allows one fewer bit than the number of input bits to be used for the lookup index. The index for such a table can be considered: index = x_j + size(LUT) − max − 1   (3) where size(LUT) is the number of entries after the elimination of redundant zeros. Using such a table, the maximum possible table size for a sixteen-bit input is approximately 64 kb, with many applications having tables in the 20 kb to 30 kb range due to the elimination of redundant zero entries. Such table sizes will vary based on the application, but such lookup tables are able to fit in local data memory for many mobile platforms or devices or in tightly coupled memory of neural networking-focused digital signal processors (DSPs). Since the single table is not decomposed, there are no extra computation costs associated with decomposition. As illustrated byFIG.3then, the non-normalized inputs350as quantized are mapped to the inputs of softmax layer360. This mapping uses the index as a distance from the input value to the maximum possible value, thus merging all zeros (e.g., entries with the same distance to the maximum) into the same table entry.
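The merging effect of the distance-based index of equation (3) can be shown with a short sketch (the numbers are hypothetical; the point is that any two inputs at the same distance from their maximum resolve to the same entry of the single table):

```python
def lut_index(x, max_val, lut_size):
    """Equation (3): index = x_j + size(LUT) - max - 1. The index
    depends only on the distance (max_val - x), so inputs with equal
    distance to the maximum share a single table entry."""
    return x + lut_size - max_val - 1

# Two unrelated (input, maximum) pairs at distance 2 share an entry,
# and the maximum input always lands on the last entry of the table:
a = lut_index(10, 12, 8)    # distance 2
b = lut_index(100, 102, 8)  # distance 2, same index as a
c = lut_index(12, 12, 8)    # distance 0, last entry of an 8-entry table
```

This is what lets the k per-maximum tables of the conventional scheme collapse into one.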
The values from the lookup table of softmax layer360are accessed, and the normalized output values370are provided. As described, the entries for each element of the lookup table, and the associated normalized output values, are given by: e^(x_j − max) * scale * 2^N   (4) FIG.4is then a flow chart illustrating a method400of using a quantized softmax layer in accordance with some embodiments. In some embodiments, method400is implemented using circuitry of one or more integrated circuits specialized for neural networks. In some embodiments, method400is implemented as instructions in a storage memory that, when executed by processing circuitry of a device, cause the device to perform method400. Method400begins with operation402receiving, at an input to a softmax layer of a neural network from an intermediate layer of the neural network, a non-normalized output comprising a plurality of intermediate network decision values. Operation404involves calculating a difference between the intermediate network decision value and a maximum network decision value for each intermediate network decision value of the plurality of intermediate network decision values. A corresponding lookup table value is then requested from a lookup table in operation406using the difference between the intermediate network decision value and the maximum network decision value for each intermediate network decision value of the plurality of intermediate network decision values. The corresponding lookup table value is then selected as a corresponding decision value for each intermediate network decision value of the plurality of intermediate network decision values in operation408, and finally, operation410involves generating a normalized output comprising the corresponding lookup table value for said each intermediate network decision value of the plurality of intermediate network decision values.
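The lookup steps of method 400 can be sketched end to end (the table contents here are hypothetical, and the treatment of distances beyond the table, which fall in the eliminated all-zero region, is an assumption for illustration):

```python
def quantized_softmax(inputs, lut):
    """Method 400 sketch: for each fixed-point input, compute its
    difference to the maximum input (operation 404), index the single
    lookup table by that distance (operation 406), and collect the
    stored values as the normalized output (operations 408-410)."""
    m = max(inputs)
    out = []
    for x in inputs:
        idx = x + len(lut) - m - 1  # equation (3)
        # distances past the table were duplicate zero entries, so
        # any negative index simply reads as 0
        out.append(lut[idx] if idx >= 0 else 0)
    return out
```

Because x never exceeds the maximum, the index never runs off the high end of the table; only far-from-maximum inputs fall below index zero, and those are exactly the entries the zero elimination removed.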
Such a method enables the improvements described above with a single small lookup table for quantized softmax operations. For example, in some embodiments, the plurality of intermediate network decision values comprise a plurality of signed sixteen-bit values, and may operate where the lookup table comprises less than the maximum number of entry values due to duplicate entries at the same distance to the maximum (e.g., less than 63000, 30000, or even 20000 entry values in some embodiments with sixteen-bit data types). The normalized output, which is then used for further evaluation in the application, can involve a plurality of unsigned sixteen-bit values for a sixteen-bit input, and the corresponding lookup table value for said each intermediate network decision value comprises an unsigned fifteen-bit value. In other embodiments, matching bit widths for inputs and outputs to the softmax layer are used (e.g., eight bits, 24 bits, etc.). In other embodiments, with significant reduction in the number of table entry values, the number of output bits can be smaller than the number of input bits. In some embodiments, this output value type for the lookup table is dynamically selected based on a number of entry values of the lookup table having a non-zero value during a training phase. In some embodiments, the non-normalized input values are generated by converting a plurality of floating point intermediate network decision values from a non-normalized floating point output of a final add-accumulate layer of the neural network, the non-normalized output comprising the plurality of intermediate network decision values, wherein the plurality of intermediate network decision values comprise fixed point values. Method400can be used in a wide variety of deployments of neural networks, such as image classification, image segmentation, localization, or such analysis of other types of data.
Improvements to device operation due to the reduced processing resources are amplified in certain segmentation embodiments, where large numbers of analysis repetitions (e.g., for each pixel of an image or many different groupings of pixels) each involve a softmax operation. Even in larger processing environments with fewer resource constraints than a phone or other mobile or wearable device, the resource reductions from embodiments described herein and the associated improvement in device operation can be significant. Some such embodiments involve training the neural network using a plurality of training inputs and a plurality of associated target outputs and generating the normalized output from a first application input using the neural network, wherein the first application input comprises an image and wherein the normalized output represents a normalized probability associated with recognition of content of the image. Other embodiments can operate in any environment with neural network layers implemented in processing circuitry with memory, such as with neural network layers configured to be trained to set a plurality of weighting values for the plurality of neural network layers, wherein the normalized output represents a normalized probability associated with recognition of audio content of audio data input to the plurality of neural network layers.
Then in operation515, duplicate 0 values are removed from the lookup table, and the lookup table index with index values according to equation 3 above is finalized with the content of each lookup table entry set according to equation 4 above. FIG.6is a flow chart illustrating a method600of generating a single compact lookup table for a quantized softmax layer according to some example embodiments. In some embodiments, method600is embodied in a device with circuitry configured to perform the operations of method600. In some embodiments, method600is implemented as instructions in a storage medium that, when executed by one or more processors, cause generation of a lookup table as described by method600. Method600, for generating a lookup table for quantized softmax evaluation in a neural network, begins with operation602generating a lookup table entry index for a value type having a first number of bits. A range mapping from an intermediate neural network layer output to a corresponding softmax input for each entry value of the lookup table entry index is determined in operation604, and operation606then involves inputting a fixed point value from the range mapping to the lookup table entry index for each entry value of the lookup table entry index. Entry values of the lookup table entry index having a zero value are determined in operation608, and operation610then involves removing the entry values of the lookup table entry index having a zero value from the lookup table entry index to generate a lookup table. This lookup table as generated in operation610is then stored in a memory in operation612. As described above, this creates a compact table where indexes determined by a distance from the input value to the maximum input value are used in the softmax layer. In some embodiments, the index value for each entry of the lookup table is determined according to equation 3 above.
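The table-generation flow of methods 500 and 600, tabulation followed by zero elimination, can be sketched as follows (a simplified illustration: folding the scale factor into the exponent and the exact entry formula are assumptions layered on equations (3) and (4)):

```python
import math

def build_softmax_lut(n_bits, scale):
    """Tabulate one entry per possible distance d to the maximum
    input, entry(d) ~ round(e^(-d * scale) * (2**n_bits - 1)), ordered
    so the maximum input (d = 0) lands on the last slot per equation
    (3), then strip the run of duplicate zero entries at the
    large-distance end (operations 515/610)."""
    full = 2 ** (n_bits - 1)  # one fewer bit than the input width
    entries = [round(math.exp(-d * scale) * (2**n_bits - 1))
               for d in range(full)]
    entries.reverse()  # largest distance first, maximum last
    while entries and entries[0] == 0:  # eliminate redundant zeros
        entries.pop(0)
    return entries
```

For a sixteen-bit input the full table would have 2^15 two-byte entries (64 kb); in this sketch, as in the embodiments above, dropping the zero run is what shrinks the stored table toward the 20-30 kb range.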
The range mapping to the softmax input comprises quantizing a set of floating point values to the set of fixed point values using: Floating/fmax   (5) where fmax is a layer input empirical range and Floating is the set of floating point values from the intermediate neural network layer(s). In various embodiments, a quantization system or layer can be used to quantize floating point values to fixed point values, and can dynamically adjust (e.g., by increasing or decreasing) the quantization level. Such a quantization system may adjust the quantization level based on one or more design constraints (e.g., hardware cost, performance, and accuracy). It will be understood that while particular operations are described in a particular order above, various other embodiments can involve intervening and/or repeated operations, and that additional embodiments not specifically described are possible within the scope of the described innovations. FIG.7is a block diagram700illustrating an example of a software architecture702that may be operating on any machine described herein and associated with generating tables for or using a softmax layer of a neural network or a circuit for implementing a neural network with a quantized softmax layer as described herein. FIG.7is merely a non-limiting example of a software architecture702, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture702is implemented by hardware such as a machine800that includes processors810, memory830, and input/output (I/O) components850. In this example, the software architecture702can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software architecture702includes layers such as an operating system704, libraries706, software frameworks708, and applications710.
Operationally, the applications710invoke application programming interface (API) calls712through the software stack and receive messages714in response to the API calls712, consistent with some embodiments. In various embodiments, any client device, server computer of a server system, or any other device described herein may operate using elements of the software architecture702. A computing device described herein may additionally be implemented using aspects of the software architecture702, with the software architecture702adapted for generation and use of tables or softmax layers in accordance with embodiments described herein. In one embodiment, an application of the applications710performs operations described herein for generating a lookup table as described herein. In other embodiments, the application may be any application that uses a neural network with a softmax layer as described herein. In various other embodiments, rather than being implemented as neural networking modules of one or more applications710, some or all of the modules used for such neural networks can be implemented using elements of the libraries706or operating system704. In various implementations, the operating system704manages hardware resources and provides common services. The operating system704includes, for example, a kernel720, services722, and drivers724. The kernel720acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel720provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services722can provide other common services for the other software layers. The drivers724are responsible for controlling or interfacing with the underlying hardware, according to some embodiments.
For instance, the drivers724can include display drivers, signal processing drivers to optimize modelling computation, memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries706provide a low-level common infrastructure utilized by the applications710. The libraries706can include system libraries730, such as libraries of multi-instance blocks for use in an EDA environment or other libraries that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries706can include API libraries732such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and 3D in a graphic context on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries706may also include other libraries734. The software frameworks708provide a high-level common infrastructure that can be utilized by the applications710, according to some embodiments. For example, the software frameworks708provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The software frameworks708can provide a broad spectrum of other APIs that can be utilized by the applications710, some of which may be specific to a particular operating system704or platform. 
In various embodiments, the systems, methods, devices, and instructions described herein may use various files, macros, libraries, and other elements of an EDA or neural network environment to implement operations or various embodiments described herein. This includes analysis of input design files for an integrated circuit design, IP blocks and associated test patterns, functional information for implementing pattern migration from IP blocks to a system on a chip (SOC) or application-specific integrated circuit (ASIC) design boundary, or any other such information that may be used as part of or along with the embodiments described herein. While netlist files, library files, SDC files, and view definition files are examples that may operate within the software architecture702, it will be apparent that other files and structures may provide a similar function, in various embodiments. Certain embodiments are described herein as including logic or a number of components, modules, elements, or mechanisms. Such modules can constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A “hardware module” is a tangible unit capable of performing certain operations and can be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) are configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In some embodiments, a hardware module is implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware module can include dedicated circuitry or logic that is permanently configured to perform certain operations. 
For example, a hardware module can be a special-purpose processor, such as a field-programmable gate array (FPGA), an SOC, or an ASIC. A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware module can include software encompassed within a general-purpose processor or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) can be driven by cost and time considerations. Accordingly, the phrase “module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instant in time. For example, where a hardware module comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware modules) at different times. Software can accordingly configure a particular processor or processors, for example, to constitute a particular hardware module at one instant of time and to constitute a different a ware module at a different instant of time. Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules can be regarded as being communicatively coupled. 
Where multiple hardware modules exist contemporaneously, communications can be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module performs an operation and stores the output of that operation in a memory device to which it is communicatively coupled. A further hardware module can then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules can also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein can be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module implemented using one or more processors. Similarly, the methods described herein can be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method can be performed by one or more processors or processor-implemented modules. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
For example, at least some of the operations may be performed by a group of computers (as examples of machines 800 including processors 810), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). In certain embodiments, for example, a client device may relay or operate in communication with cloud computing systems, and may store media content such as images or videos generated by devices described herein in a cloud environment. The performance of certain of the operations may be distributed among the processors, not only residing within a single machine 800, but deployed across a number of machines 800. In some example embodiments, the processors 810 or processor-implemented modules are located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors 810 or processor-implemented modules are distributed across a number of geographic locations.

FIG. 8 is a diagrammatic representation of the machine 800 in the form of a computer system within which a set of instructions may be executed for causing the machine 800 to perform any one or more of the methodologies discussed herein, according to an example embodiment. FIG. 8 shows components of the machine 800, which is, according to some embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. In some embodiments, the machine 800 may operate with instructions 816 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 800 to perform any one or more of the methodologies discussed. In alternative embodiments, the machine 800 operates as a standalone device or can be coupled (e.g., networked) to other machines.
In a networked deployment, the machine 800 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 800 can comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), a media system, a cellular telephone, a smart phone, a mobile device, or any machine capable of executing the instructions 816, sequentially or otherwise, that specify actions to be taken by the machine 800. Further, while only a single machine 800 is illustrated, the term “machine” shall also be taken to include a collection of machines 800 that individually or jointly execute the instructions 816 to perform any one or more of the methodologies discussed herein. In various embodiments, the machine 800 comprises processors 810, memory 830, and I/O components 850, which can be configured to communicate with each other via a bus 802. In an example embodiment, the processors 810 (e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an ASIC, a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) include, for example, a processor 812 and a processor 814 that may execute the instructions 816. The term “processor” is intended to include multi-core processors 810 that may comprise two or more independent processors 812, 814 (also referred to as “cores”) that can execute the instructions 816 contemporaneously.
Although FIG. 8 shows multiple processors 810, the machine 800 may include a single processor 812 with a single core, a single processor 812 with multiple cores (e.g., a multi-core processor 812), multiple processors 810 with a single core, multiple processors 810 with multiple cores, or any combination thereof. The memory 830 comprises a main memory 832, a static memory 834, and a storage unit 836 accessible to the processors 810 via the bus 802, according to some embodiments. The storage unit 836 can include a machine-readable medium 838 on which are stored the instructions 816 embodying any one or more of the methodologies or functions described herein. The instructions 816 can also reside, completely or at least partially, within the main memory 832, within the static memory 834, within at least one of the processors 810 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 800. Accordingly, in various embodiments, the main memory 832, the static memory 834, and the processors 810 are considered machine-readable media 838. As used herein, the term “memory” refers to a machine-readable medium 838 able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 838 is shown, in an example embodiment, to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store the instructions 816.
The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., the instructions 816) for execution by a machine (e.g., the machine 800), such that the instructions 816, when executed by one or more processors of the machine 800 (e.g., the processors 810), cause the machine 800 to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory (e.g., flash memory), an optical medium, a magnetic medium, other non-volatile memory (e.g., erasable programmable read-only memory (EPROM)), or any suitable combination thereof. The term “machine-readable medium” specifically excludes non-statutory signals per se. The I/O components 850 include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. In general, it will be appreciated that the I/O components 850 can include many other components that are not shown in FIG. 8. The I/O components 850 are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components 850 include output components 852 and input components 854. The output components 852 include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor), other signal generators, and so forth.
The input components 854 include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In some embodiments, outputs from an EDA computing device may include design documents, files for additional steps in a process 100, or outputs for circuit fabrication. In various embodiments, outputs of a timing analysis are used to generate updates and changes to a circuit design, and after a final closure of timing with all associated timing thresholds and design requirements met, circuit design output files are used to generate masks and other physical outputs for generation of a circuit. As described herein, “requirements,” “design elements,” and other aspects of a circuit design refer to selectable values that are set as part of the design of a circuit. Such design requirements or elements may be adjusted by a system operator or circuit designer to suit the particular goals of a project or circuit that results from the operations described herein. Embodiments described herein then optimize and improve the operation of a device such as the machine 800 in implementing EDA operations by improving resource usage of the machine 800 or another associated machine as part of design, fabrication, and testing of a circuit device. Communication can be implemented using a wide variety of technologies. The I/O components 850 may include communication components 864 operable to couple the machine 800 to a network 880 or devices 870 via couplings 882.
For example, the communication components 864 include a network interface component or another suitable device to interface with the network 880. In further examples, the communication components 864 include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, BLUETOOTH® components (e.g., BLUETOOTH® Low Energy), WI-FI® components, and other communication components to provide communication via other modalities. The devices 870 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).

Language

Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The detailed description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The description above includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. 
In the description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.
11861453

Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

Cellular networks may suffer from an array of network issues (e.g., degrading hardware, misconfigurations between network elements, unreliable updates or upgrades to network equipment, etc.). The network issues may impact network performance and cause users of a cellular network (i.e., subscribers of a cellular network) to have a poor user experience with the cellular network. The poor user experience may result in user frustration and perhaps even a user switching network operators (i.e., network providers) as a means to resolve the network performance issues. Network providers (or operators) have an incentive to address these issues because network issues may affect their customer loyalty and may have a detrimental impact on their cellular services. Without resolving network issues, these issues could cost network operators business and potentially damage a network operator's goodwill and/or brand. Yet often network operators do not experience the network performance issues firsthand. In other words, users of a cellular network are the ones generally impacted by network performance issues. This means that network operators often may have to rely on the network users to report network issues when they occur. However, there are a few problems with user-reporting to address network issues. First off, network users not only need to recognize that the issues they are experiencing are likely due to their cellular network, but also to take their time to report the issue to the network operator in some manner. Clearly, this approach is not likely to work well for users who fail to recognize that they are experiencing less-than-ideal performance. For instance, a user becomes accustomed to below-average network performance or does not realize that the network performance should be better.
Here, this type of user may never inform the network operator that a network performance issue is present and may simply change cellular network providers, thinking that another provider might result in better performance. In other words, the original cellular provider may never have the opportunity to address the problem. Furthermore, when a user does report a network performance issue to a network operator, the network operator performs an investigation of the reported issue. These investigations may be a labor-intensive process that may leave some user issues unsolved due to a lack of available resources to investigate/address all reported problems. Particularly, network operators may often have to prioritize labor resources to operating the cellular network rather than investigating reported user issues. Another approach is that the network operator monitors the cellular network to detect anomalies that may indicate a network performance issue. An anomaly refers to a unique occurrence (or different behavior) during signaling for a cellular network. Here, an anomaly itself is agnostic as to whether the unique occurrence is an occurrence that indicates detrimental behavior (e.g., a network performance issue) or an occurrence that indicates non-detrimental behavior (e.g., not a network performance issue). Yet by identifying anomalies, a network operator may analyze an anomaly to determine whether the anomaly corresponds to a network performance issue. Detecting anomalies within a cellular network has traditionally had its drawbacks. For instance, depending on the cellular usage and traffic, cellular networks could have an immense amount of log data (e.g., network logs, inter-process logs, usage statistics, etc.). Sifting through the immense amounts of data to identify an anomaly may be resource-intensive.
Therefore, when an anomaly was detected that impacted network performance, an entity detecting the anomaly (e.g., the network operator) may develop a rule to more easily detect the same or similar anomaly in other instances. This traditional form of anomaly detection therefore generates one or more rules to identify a deviation from normal behavior. For instance, a rule defines that a certain message type typically occurs at a rate of five times a second. When that certain message type occurs more or fewer times per second, this rule would allow a system to detect this deviation as an anomaly. Unfortunately, the issue with this form of anomaly detection is that an entity must first specify what is considered normal behavior to identify anomalies with behavior outside of the specified normality. Here, this method only works for known anomalies dictated by known rules. In other words, a new anomaly that impacts network performance will be undetected until a rule specifically addresses the new anomaly (or the normal behavior that should be occurring instead of the new anomaly). This approach lacks any ability to be predictive for new anomalies that may cause performance issues. Thus, a predictive anomaly detector may more accurately use anomalies to detect network performance issues.

FIG. 1 illustrates a communication network 100 (also referred to as a cellular network), which may be a Long-Term Evolution (LTE) network, a 5G network, and/or a multiple access network supporting numerous access technologies specified by the 3rd Generation Partnership Project (3GPP), such as the General Packet Radio Service (GPRS), the Global System for Mobile Communications/Enhanced Data Rates for GSM Evolution (GSM/EDGE), the Universal Mobile Telecommunication System/High Speed Packet Access (UMTS/HSPA), LTE and LTE advanced network technologies.
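The rate-based rule in the example above can be sketched as a few lines of Python. This illustrates only the traditional rule-based approach the passage describes (not the predictive detector); the expected rate of five messages per second comes from the text, while the tolerance value and all names are assumptions for illustration:

```python
# Hedged sketch of traditional rule-based anomaly detection: a rule states
# that a certain message type normally occurs ~5 times per second, and any
# per-second count deviating beyond a tolerance is flagged as an anomaly.
from collections import Counter

EXPECTED_RATE = 5  # messages per second, per the example rule in the text
TOLERANCE = 2      # assumed allowed deviation before flagging

def detect_rate_anomalies(timestamps):
    """timestamps: arrival times (seconds) for one message type.
    Returns the seconds whose message counts deviate from the expected rate."""
    per_second = Counter(int(t) for t in timestamps)
    return sorted(
        sec for sec, count in per_second.items()
        if abs(count - EXPECTED_RATE) > TOLERANCE
    )

# Second 0 sees 5 messages (normal); second 1 sees 9 (flagged).
ts = [0] * 5 + [1] * 9
print(detect_rate_anomalies(ts))  # [1]
```

Note the limitation the passage points out: this detector can only flag deviations from behavior someone already codified as a rule; a new kind of anomaly that does not violate the rate rule passes through undetected.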
The cellular network 100 (e.g., LTE network) enables wireless communication of high-speed data packets between subscriber devices 102, 102a-b, such as mobile phones and data terminals, and a base station 104. The subscriber devices 102 may be interchangeably referred to as user equipment (UE) devices and/or mobile devices 102. For instance, LTE is a wireless communication standard that is based on the GSM/EDGE and UMTS/HSPA network technologies and configured to increase the capacity and speed of the telecommunication by using different radio interfaces in addition to core network improvements. Different types of cellular networks 100 may support different bands/frequencies at various bandwidths to allow UE devices 102 to communicate data (e.g., data packets). To illustrate, LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz, and supports both frequency division duplexing (FDD) and time-division duplexing (TDD), while 5G supports bandwidths ranging from 5 MHz to 100 MHz, where some bandwidths overlap with LTE. UE devices 102 may be any telecommunication device that is capable of transmitting and/or receiving voice/data over the network 100. UE devices 102 may include, but are not limited to, mobile computing devices, such as laptops, tablets, smart phones, and wearable computing devices (e.g., headsets and/or watches). UE devices 102 may also include other computing devices having other form factors, such as computing devices included in desktop computers, smart speakers/displays, vehicles, gaming devices, televisions, or other appliances (e.g., networked home automation devices and home appliances). UE devices 102 subscribe to network services provided by a network operator of the communication network 100. The network operator may also be referred to as a mobile network operator (MNO), a wireless service provider, wireless carrier, cellular company, or mobile network carrier.
The UE devices 102 may communicate with an external network 30, such as a packet data network (PDN), through the communication network 100 (or 5G/3G/2G network). Referring to FIG. 1, the communication network 100 is an LTE network that includes a first portion, an Evolved Universal Terrestrial Radio Access Network (e-UTRAN) portion 106, and a second portion, an Evolved Packet Core (EPC) portion 108. The first portion 106 includes an air interface 110 (i.e., Evolved Universal Terrestrial Radio Access (e-UTRA)) of 3GPP's LTE upgrade path for mobile networks, UE devices 102, and multiple base stations 104. The LTE air interface 110 uses orthogonal frequency-division multiple access (OFDMA) radio access for the downlink and Single-carrier FDMA (SC-FDMA) for the uplink. Accordingly, the first portion 106 provides a radio access network (RAN) that supports radio communication of data packets and/or other services from the external network 30 to the UE devices 102 over the air interface 110 via one or more base stations 104. Each base station 104 may include an evolved Node B (also referred to as eNode B or eNB). An eNB 104 includes hardware that connects to the air interface 110 (e.g., a mobile phone network) for communicating directly with the UE devices 102. For instance, the eNB 104 may transmit downlink LTE/3G/5G signals (e.g., communications) to the UE devices 102 and receive uplink LTE/3G/5G signals from the UE devices 102 over the air interface 110. A base station 104 may have an associated coverage area 104area that corresponds to an area where one or more UE devices 102 communicate with the network 100 by way of the base station 104. The eNBs 104 use an S1 interface for communicating with the EPC 108. The S1 interface may include an S1-MME interface for communicating with a Mobility Management Entity (MME) 112 and an S1-U interface for interfacing with a Serving Gateway (SGW) 116. Accordingly, the S1 interface is associated with a backhaul link for communicating with the EPC 108.
The EPC 108 provides a framework configured to converge voice and data on the LTE network 100. The EPC 108 unifies voice and data on an Internet Protocol (IP) service architecture, and voice is treated as just another IP application. The EPC 108 includes, without limitation, several network elements, such as the MME 112, a Serving GPRS Support Node (SGSN) 114, the SGW 116, a Policy and Charging Rules Function (PCRF) 118, a Home Subscriber Server (HSS) 120, and a Packet Data Node Gateway (PGW) 122. The PGW 122 may also be referred to as a network gateway device 122, and when the network corresponds to a 3G network, the network gateway device 122 includes a Gateway GPRS Support Node (GGSN) instead of the PGW 122. Optionally, when the network corresponds to a 5G or 5G+ network, the network gateway device 122 may include a gateway node with a naming convention as defined by the 5G and/or 5G+ network. The MME 112, the SGSN 114, the SGW 116, the PCRF 118, the HSS 120, and the PGW 122 may be standalone components, or at least two of the components may be integrated together. The EPC 108 communicates with the UE devices 102 and the external network 30 to route data packets therebetween. The network 100 includes interfaces that allow the UE devices 102, the base stations 104, and various network elements (e.g., the MME 112, the SGSN 114, the SGW 116, the PCRF 118, the HSS 120, and the PGW 122) to cooperate with each other during use of the network 100. Information flows along these interfaces throughout the network 100, and generally these interfaces may be divided into a user plane and a control plane. The user plane routes user plane traffic and includes a user plane protocol stack between the UE devices 102 and the base station 104 with sublayers, such as packet data convergence protocol (PDCP), radio link control (RLC), and medium access control (MAC).
Some interfaces specific to the user plane, shown in solid lines between the network elements, are as follows: an S1-U interface between the base station 104 and the SGW 116 for per-bearer user plane tunneling and inter-base-station path switching during handover; an S4 interface between a UE device 102 with 2G access or 3G access and the PGW 122 for control and mobility support and, in some cases, user plane tunneling; and an S12 interface (not shown) between the E-UTRAN portion 106 (e.g., UE device 102) and the SGW 116 for user plane tunneling as an operator configuration option. Other types of communication networks (e.g., 3G, 5G, etc.) may include other user plane interfaces besides the ones depicted in FIG. 1 for the network 100. The control plane is responsible for controlling and supporting user plane functions with control plane protocols. Particularly, the control plane controls E-UTRAN access connections (e.g., attaching and detaching from the E-UTRAN portion 106 of the network 100), controls attributes of an established network access connection (e.g., an activation of an IP address), controls routing paths of an established network connection (e.g., to support user mobility), and/or controls an assignment of network resources based on demands to the network 100 (e.g., by a user of a UE device 102).
Some interfaces specific to the control plane, shown in dotted lines between network elements, are as follows: an S1-MME interface between the base station 104 and the MME 112 that guarantees delivery of signaling messages; an S3 interface between the SGSN 114 and the MME 112 that enables user/bearer information exchange for inter-3GPP access network mobility in idle and/or active states; an S5/S8 interface between the SGW 116 and the PGW 122, where the S5 interface is used in a non-roaming scenario to serve relocation based on UE device 102 mobility and to connect to a non-collocated gateway of a PDN, while the S8 interface connects to public land mobile networks (PLMN); an S10 interface that coordinates handovers between MMEs 112; an S11 interface between the MME 112 and the SGW 116 for transferring signal messages; an S6a interface between the MME 112 and the HSS 120 that enables transfer of subscription and authentication data related to user access; an S6d interface between the HSS 120 and the SGSN 114 that also enables transfer of subscription and authentication data related to user access; and an S13 interface (not shown) that supports a UE device 102 identity check. Other types of communication networks (e.g., 3G, 5G, etc.) may include other control plane interfaces besides the ones depicted in FIG. 1 for the network 100. When a particular UE device 102 connects to the network 100, one or more control messages 128 are sent among the various network elements (e.g., between the network elements of the evolved packet core 108 and the E-UTRAN portion 106). For instance, as illustrated by FIG. 1, the base station 104 sends a control message 128 to the MME 112 indicating that a new UE device 102 is attempting to connect to the network 100. As another example, the SGW 116 sends a control message 128 to the MME 112 indicating that data from the external network 30 has arrived for a particular UE device 102 and that the UE device 102 needs to be awoken (or paged) to establish tunnels in order to accept the waiting data.
The control plane interfaces may transmit such control messages 128 using control plane protocols, such as a general packet radio service tunneling control (GTP-C) protocol or a Diameter protocol. The type of protocol used to transmit a control message 128 may depend on the interface. For instance, the S3, S5/S8, and S10 interfaces use the GTP-C protocol, while the S11, S6a, S6d, and S13 interfaces use the Diameter protocol. The MME 112 is a key control node for the LTE network 100. The MME 112 manages sessions and states and authenticates and tracks a UE device 102 across the network 100. For instance, the MME 112 may perform various functions such as, but not limited to, control of signaling and security for a Non-Access Stratum (NAS), authentication and mobility management of UE devices 102, selection of gateways for UE devices 102, and bearer management functions. The SGSN 114 may act in some ways similar to the MME 112. For instance, the SGSN 114 tracks the location of a UE device 102 and performs security and access control functions. In some examples, the SGSN 114 is responsible for mobility management (e.g., of a standby mode UE device 102), logical link management, authentication, charging functions, and/or handling overload situations. The SGW 116 performs various functions related to IP data transfer for user devices 102, such as data routing and forwarding, as well as mobility anchoring. The SGW 116 may perform functions such as buffering, routing, and forwarding of data packets for mobile devices 102. The PCRF 118 is a node responsible for real-time policy rules and charging in the EPC 108. In some examples, the PCRF 118 is configured to access subscriber databases (i.e., UE device users) to make policy decisions. Quality of service management may be controlled by dynamic policy interactions between the PCRF 118 and the network gateway device 122. Signaling by the PCRF 118 may establish or modify attributes of an EPS bearer (i.e., a virtual connection between the UE device 102 and the PGW 122).
In some configurations, such as voice over LTE (VoLTE), the PCRF 118 allocates network resources for establishing calls and distributing requested bandwidth to a call bearer with configured attributes. The HSS 120 refers to a database of all UE devices 102 that includes all UE device user data. Generally, the HSS 120 is responsible for authentication for call and session setup. In other words, the HSS 120 is configured to transfer subscription and authentication data for user access and UE context authentication. The HSS 120 interacts with the MME 112 to authenticate the UE device 102 and/or UE device user. The MME 112 communicates with the HSS 120 on the PLMN using the Diameter protocol (e.g., via the S6a interface). The PGW 122 (i.e., network gateway device) performs various functions such as, but not limited to, internet protocol (IP) address allocation, maintenance of data connectivity for UE devices 102, packet filtering for UE devices 102, service level gating control and rate enforcement, dynamic host configuration protocol (DHCP) functions for clients and servers, and gateway general packet radio service (GGSN) functionality. In some implementations, data processing hardware 124 of the network gateway device 122 (e.g., PGW or GGSN or a gateway node with another naming convention as defined by 5G and/or 5G+ networks) receives control messages 128 associated with at least one UE device 102. The data processing hardware 124 may receive the control messages 128 based on interaction(s) that at least one UE device 102 has with the network 100 within the coverage area 104area of the base station 104. Referring further to FIG. 1, the communication network 100 also includes an anomaly detector 200. In some examples, the anomaly detector 200 is part of the network gateway device 122 (e.g., PGW or GGSN or a gateway node with another naming convention as defined by 5G and/or 5G+ networks).
For instance, data processing hardware 124 and/or memory hardware 126 of the network gateway device 122 host the anomaly detector 200 and execute the functionality of the anomaly detector 200. In some implementations, the anomaly detector 200 communicates with the E-UTRAN portion 106 and the EPC 108, but resides on the external network 30 (e.g., data processing hardware corresponding to the external network 30). In other words, the external network 30 may be a distributed system (e.g., a cloud environment) with its own data processing hardware or shared data processing hardware (e.g., shared with the network gateway device 122). In other configurations, a network element other than the network gateway device 122 implements the anomaly detector 200. Additionally or alternatively, the anomaly detector 200 resides across more than one network element of the network 100. Generally, the anomaly detector 200 is configured to detect anomalies that occur within the network 100 based on one or more control messages 128. With a detected anomaly, the anomaly detector 200 analyzes whether the anomaly corresponds to a network performance issue 202 that impacts a performance of the network 100. In other words, the anomaly detector 200 identifies a unique occurrence (i.e., the anomaly) within the network 100 and determines whether the unique occurrence is detrimental to the performance of the network 100 (or negatively impacts a user experience). When the anomaly detector 200 identifies that the detected anomaly impacts network performance, the anomaly detector 200 is configured to inform a network entity 40 responsible for the network performance issue 202 or relay the network performance issue 202 to an entity that knows or communicates with the responsible entity. For instance, the anomaly detector 200 may signal or inform the network operator of the network performance issue 202 corresponding to the detected anomaly.
In some implementations, the anomaly detector200communicates the one or more control messages128that indicated the network anomaly to the network entity40. Here, the network entity40may further analyze one or more control messages128to help resolve the network issue202. Referring toFIGS.2A-2D, the anomaly detector200generally includes a collector210, an extractor220, a predictor230, and an analyzer240. The collector210is configured to receive at least one control message128from the network100. In some implementations, the collector210includes a datastore212to collect control messages128from the network100in order to function as a central database for logging data corresponding to the control messages128. With the collector210, the anomaly detector200may process the control messages128in a variety of ways to create training data (e.g., training control messages) that may be used to detect anomalies. For instance, the collector210groups together (e.g., within the datastore212) control messages128from a single session of a UE device102. In some examples, a session refers to a time period from when a user (via the UE device102) initiates a CreateSessionRequest or CreatePdpRequest message to when the user terminates the session with a DeleteSessionResponse or DeletePdpContextRequest message. As another example, the collector210groups control messages128together to indicate an amount of data129that was transferred (e.g., either in an uplink direction, a downlink direction, or both) within a certain time period (e.g., during a session). With these control messages128grouped together, the collector210forms a representation of a total amount of data129for a certain time period. In other configurations, the collector210collects the log data as a sequence such that the control messages128are strung together as a time series (e.g., t0-t3). 
Here, the string of control messages128may be aggregated by an entity (e.g., a particular user or UE device102) or by sessions of the entity. If these sequences become too long, the collector210may be configured to dissect these sequences into sub-sequences of a fixed length and associate any identifiers of the original sequence to each sub-sequence. Otherwise, a sequence may have a label (e.g., a particular entity or UE device102) that would fail to transfer to one or more sub-sequences when the collector210dissects the sequence. The extractor220is configured to extract information from one or more control messages128and/or log data corresponding to control messages128. The extractor220may extract one or more features222and/or one or more labels224from the one or more control messages128(or parts thereof). Each feature222and/or label224refers to a characteristic derived from a control message128. In some examples, a label224is a characteristic of a network element, a UE device102, a user of a UE device, or a base station104that is generally obfuscated due to 3GPP standardization of the network100. In other words, although the extractor220may generate an actual label224directly from a control message128(or log data relating to a control message128), it should not be possible to contextually determine the actual label224simply from one or more control messages128when the network100is 3GPP compliant. One such example of a label224is a type allocation code (TAC) that identifies a wireless device (e.g., a mobile phone type of a UE device102). Other examples of labels224may include, without limitation, identifiers corresponding to network elements of the network100(e.g., a MME identifier, a base station identity code (BSIC), an international mobile equipment identity (IMEI), E-UTRAN cell identity (ECI)/E-UTRAN cell global identifier (ECGI), etc.). 
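Returning to the collector210's sequence handling, the dissection of an over-long session sequence into fixed-length sub-sequences that keep the original identifier can be illustrated with a short sketch. This is a minimal, hypothetical example (the function name, the string representation of messages, and the fixed length are assumptions, not part of the disclosed implementation):

```python
from typing import List, Tuple

def dissect_sequence(messages: List[str], label: str,
                     max_len: int) -> List[Tuple[str, List[str]]]:
    """Split one session's control-message sequence into fixed-length
    sub-sequences, copying the original sequence's identifier (e.g., a
    UE device or entity label) onto every sub-sequence so the label is
    not lost when the sequence is dissected."""
    return [(label, messages[i:i + max_len])
            for i in range(0, len(messages), max_len)]
```

Because each sub-sequence carries the session's identifier, aggregation by entity or by session remains possible after dissection.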
On the other hand, a feature222corresponds to another characteristic derived from a control message128that is different than the characteristic forming the label224. Here, unlike for a label224, a feature222of a control message128may be discernable even when the network100is 3GPP compliant. Some examples of features222include a control message type (e.g., represented as an integer), a cause type for GTP-C messages, an amount of time elapsed between adjacent messages (e.g., when the collector210sequences the control messages128), etc. In some examples, the extractor220extracts different features222from different control message protocols. For instance, features222extracted from GTP-C messages would be different than features222extracted from Diameter messages. In some examples, features222extracted by the extractor220are crossed to create new features222. A cross of features222refers to a combination of a portion of two or more features222. For example, the extractor220crosses the message type feature222and the cause value feature222to generate a message type-cause value feature222. By crossing features222, the extractor220may provide additional training data sets potentially increasing the ability of the anomaly detector200to detect anomalies. Whether the extractor220extracts a feature222and/or a label224may depend on a stage of the anomaly detector200. In a first stage (e.g., training stage), the anomaly detector200trains to be able to predict network anomalies. In order to train the anomaly detector200, the extractor220extracts information from one or more control messages128at the collector210. The extracted information forms a training control message226that includes one or more features222and an actual label224. By including the actual label224as a ground truth with the training control message226, the anomaly detector200learns which features222may correspond to which label224. 
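The feature crossing performed by the extractor220 (e.g., combining the message type feature with the cause value feature into a message type-cause value feature) can be sketched as follows. The dictionary representation and the separator are illustrative assumptions:

```python
def cross_features(features: dict, keys: tuple) -> dict:
    """Return a copy of `features` with one extra crossed feature that
    combines the named features (e.g., message type x cause value),
    mirroring the extractor's feature crossing described above."""
    out = dict(features)
    out["_x_".join(keys)] = tuple(features[k] for k in keys)
    return out
```

The crossed feature preserves the original features while adding a combined view, which can enlarge the effective training data without discarding information.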
In a second stage (e.g., inference), after the anomaly detector200is trained, the extractor220no longer provides training control messages226with both features222and a label224. Instead, the extractor220extracts one or more features222from a control message128and relies on the trained anomaly detector200to predict the label224. In other words, as processing each control message128to extract an actual label224therefrom is time-intensive, and therefore not practical in real-time, the trained anomaly detector200may predict potential labels234using only the features222extracted from the control message128as feature inputs. The predictor230is configured to use a predictive model232to predict a potential label234for a control message128associated with the one or more features222extracted from the control message128by the extractor220. Ideally, because of the standardization of 3GPP, it should not be possible for the predictor230to generate a prediction P where the potential label234matches (i.e., correctly predicts) the actual label224for a given control message128. Thus, when the predictor230predicts a potential label234that matches the actual label224from at least one control message128(e.g., features222of a control message128), this match indicates a unique correlation (i.e., a detected anomaly) between the control message(s)128and the labels224,234. When the predictor230generates a correct prediction P, the analyzer240analyzes the related control message128and/or the log data corresponding to the control message128. Here, the analyzer240analyzes the control message128to determine whether the control message128corresponds to a network performance issue202impacting network performance of the network100. In other words, the analyzer240determines whether the detected anomaly is a unique correlation due to detrimental behavior or whether the detected anomaly is simply unique behavior with little to no impact on network performance or user experience. 
When the analyzer240determines that the detected anomaly of the control message128impacts network performance, the analyzer240flags this detrimental behavior to be fixed. To fix the behavior, the analyzer240may communicate the network performance issue202to the network entity40(e.g., a network operator or a UE device provider) responsible for the network performance issue202. In some configurations, the analyzer240performs clustering. Clustering may be beneficial where there are too many anomalies occurring within the network100to investigate. Instead of investigating each and every detected anomaly, the analyzer240clusters the detected anomalies into similar groups. By clustering into groups, the analyzer240may prioritize larger clusters that potentially may have more detrimental impact on the network100(e.g., ranking clusters by network impact or likelihood/probability of network impact). Furthermore, when the analyzer240relies on human analysis to determine whether or not the detected anomaly corresponds to a network performance issue202, the analyzer240may use an autoencoder to perform dimensionality reduction. Dimensionality reduction by an autoencoder is configured to reduce large data sets (i.e., a large number of anomalies) by correlating redundant features in the large data sets. Here, as a neural network trained according to gradient descent, an autoencoder performs dimensionality reduction by trying to identify new structures or uniqueness in a data set. In other words, the autoencoder may isolate more unique anomalies for the network100that may more likely correlate to network performance issues202that should be analyzed. By combining clustering and autoencoding, a large number of anomalies may be formed into smaller groups (clusters) and then further reduced to make efficient use of human and/or computational resources. The predictor230predicts the potential label234using the predictive model232. 
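The analyzer240's clustering and size-based prioritization of detected anomalies might be approximated as below. This is a simplified stand-in that groups by an exact similarity key rather than a learned clustering or an autoencoder; the names are hypothetical:

```python
from typing import Callable, Iterable, List

def prioritize_anomaly_clusters(anomalies: Iterable,
                                key: Callable) -> List[list]:
    """Group detected anomalies by a similarity key and rank the clusters
    by size, so the largest clusters (those more likely to reflect a
    widespread network performance issue) are investigated first."""
    clusters = {}
    for anomaly in anomalies:
        clusters.setdefault(key(anomaly), []).append(anomaly)
    return sorted(clusters.values(), key=len, reverse=True)
```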
In some examples, the predictive model232is a neural network (e.g., a deep neural network (DNN), a recurrent neural network (RNN), or a convolutional neural network (CNN)). To generate predictions P, the predictive model232undergoes model training. Here, training for the predictive model232occurs using examples (also referred to as training data or a training data set) that correspond to control messages128and/or their related log data. In some implementations, the extractor220generates a set228of training control messages226as examples to train the predictive model232(e.g., shown inFIG.2B). In some configurations, each training control message226corresponds to a control message128processed at the collector210. The extractor220may form each training control message226by extracting one or more features222from a control message128along with the actual label224for the control message128. In some examples, when more than one control message128has the same label224, the features222of these control messages128are combined into one example or set228of training control messages226. For example, the extractor220creates a message type vector summary to account for each type of control message128included in a combination. The message type vector summary may include one entry for each possible message type to represent a number of times that a particular control message128was encountered (e.g., within a single session). In order to train the predictive model232, the predictor230divides the set228of training control messages226into a training set226Tand validation set226V. In some examples, in addition to the training set226Tand validation set226V, the training control messages226are also split into a test set. The predictive model232trains on the training set226Twhile using the validation set226Vto determine when to stop training (e.g., to prevent over-fit). 
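The division of the set228of training control messages226into training, validation, and test sets can be sketched as follows. The 80/10/10 fractions and the fixed seed are illustrative assumptions, not taken from the source:

```python
import random

def split_examples(examples: list, train_frac: float = 0.8,
                   val_frac: float = 0.1, seed: int = 0):
    """Shuffle the set of training control messages and split it into
    training, validation, and test sets; the validation set governs
    early stopping and the test set evaluates final performance."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```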
The training may stop when a performance of the predictive model232reaches a particular threshold or when the performance of the predictive model232on the validation set226Vstops decreasing. In some examples, the test set evaluates the final performance for the predictive model232. In some implementations, the predictive model232is trained as a multiclass classification model. As a multiclass classification model, the predictive model232outputs a probability distribution PBdisrepresenting an opinion regarding the probability PBfor each class. For instance, when the predictive model232predicts TAC, each TAC will be a different class such that the predictive model232will output a probability distribution for each class of TAC. In some examples, the process of training and evaluating the predictive model232occurs continuously to provide early detection of new network issues202that may arise. Once the training is complete, predictions P from the training may be fed back into the predictive model232. These predictions P may correspond to the training sets226T, the validation sets226V, the test sets, or any combination thereof. In other words, the predictive model232is configured to evaluate its predictions P from training on the training data (e.g., the set228of training control messages226). This approach may ensure the predictive model232has completed training and is ready to predict potential labels234. With reference toFIGS.2B and2D, in some examples, the predictive model232of the predictor230generates a probability PBfor a prediction P of a potential label234. To evaluate the probability PBof the potential label234, the predictor230may apply a confidence threshold236. The confidence threshold236indicates a level of confidence that the probability PBof the potential label234corresponds to an anomaly that requires evaluation by the analyzer240for detrimental behavior. 
In other words, when the prediction probability PBof the potential label234satisfies the confidence threshold236, the predictor230communicates the control message128corresponding to the potential label234to the analyzer240. For instance, when the confidence threshold236is 90%, a probability PBfor a prediction P of a potential label234indicative of a TAC that is greater than 90% indicates a confident prediction P that should pass to the analyzer240to be further analyzed. In some configurations, the predictive model232outputs/predicts a probability distribution PBdisover potential labels234a-n. In these configurations, each potential label234a-nin the probability distribution PBdisincludes a corresponding probability PB. In some examples, the predictor230predicts the potential label234by selecting the potential label234a-nhaving the highest probability PBin the probability distribution PBdisover potential labels234a-n. In the example shown inFIGS.2B and2D, the potential label234ahas the highest probability PBof ninety-one percent (91%) in the probability distribution PBdisover potential labels234a-n, and therefore the predictor230selects the potential label234aand compares the probability PB(91%) to the confidence threshold (90%). Thus, in the example, the predictor230determines that the probability PBof the selected potential label234asatisfies the confidence threshold236and passes the corresponding control message128to the analyzer240to determine whether the control message128corresponds to a respective network performance issue202impacting network performance. In some scenarios, the predictor230communicates to the analyzer240each potential label234a-nin the probability distribution PBdisthat has a corresponding probability PBsatisfying the confidence threshold236. In some configurations, the predictive model232is an RNN model that is better suited (than a DNN model) for sequential data. 
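The confidence gating worked through above (a 91% probability against a 90% confidence threshold236) can be sketched in a few lines. The dictionary form of the probability distribution is an illustrative assumption:

```python
def select_confident_label(prob_dist: dict, threshold: float = 0.90):
    """Select the potential label with the highest probability from the
    model's probability distribution and return it only if it satisfies
    the confidence threshold; otherwise return None (nothing is passed
    to the analyzer)."""
    label, prob = max(prob_dist.items(), key=lambda kv: kv[1])
    return (label, prob) if prob >= threshold else None
```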
For an RNN model, the extractor220generates sequences for the features222. In other words, the extractor220may form the training control messages226from sequential control messages128(or sequential features222from sequential control messages128). With sequential features222, each sequence may be a training example such that sequential features222would be split into a training data set, a validation data set, and a test data set. Besides preferring sequential data, the RNN model operates relatively similarly to the previously described predictive model232. In some examples, the predictive model232has difficulty distinguishing different potential labels234that perform similarly. For instance, when predicting TAC, there may be several TACs (e.g., three TACs) that perform identically. This identical behavior results in the predictive model232confidently knowing that the TAC is one of the three TACs, but not being able to predict exactly which TAC. To overcome this issue, the predictor230may use principal component analysis (PCA) to identify groupings of labels234that perform similarly (e.g., like the three TACs). Using PCA, the prediction P of the potential label234may be a vector where PCA identifies which groupings of labels224are commonly predicted together. For example, the PCA will identify that the three TACs should be considered together because the principal component vectors of these three TACs will have strong peaks indicating that they should be grouped (or considered) together. Referring toFIGS.2C and2D, the anomaly detector200may also include a filter250. The filter250is configured to prevent redundant analysis of similar detected anomalies. In other words, the anomaly detector200generates a filter250when an anomaly has been detected. The filter250may be for an anomaly of detrimental behavior or for an anomaly of non-detrimental behavior (i.e., acceptable behavior). 
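The grouping of identically performing labels described above (e.g., three TACs that behave the same) might be approximated naively by bucketing identical prediction profiles, as below. This is a hypothetical stand-in for the PCA-based analysis, not an implementation of PCA itself:

```python
def group_indistinguishable_labels(profiles: dict, decimals: int = 3):
    """Group labels whose prediction profiles are (near-)identical after
    rounding, so a set of labels the model cannot tell apart can be
    treated as a single class; groups are returned largest first."""
    groups = {}
    for label, vector in profiles.items():
        key = tuple(round(v, decimals) for v in vector)
        groups.setdefault(key, []).append(label)
    return sorted(groups.values(), key=len, reverse=True)
```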
Once the analyzer240has determined whether or not a control message128corresponding to an anomaly affects network performance, performing this same analysis for a similar control message128or sequence of similar control messages128may divert anomaly detection resources from detecting new anomalies or anomalies that need to be analyzed. Accordingly, the filter250attempts to prevent repeat analysis. For instance, when the analyzer240determines a control message128corresponds to a respective network issue202that affects network performance, the respective network issue202and/or control message128is reported to the responsible network entity40. Here, it would be redundant to re-analyze and report similar control messages128to the network entity40because the respective network issue202has been reported and will be addressed by the responsible network entity40in due course. On the other hand, when the analyzer240determines a control message128does not affect network performance, the anomaly associated with the control message128is non-detrimental, and therefore acceptable. Accordingly, it would be pointless to re-analyze subsequent similar control messages128. The anomaly detector200may generally apply the filter250in two scenarios: (1) on features222extracted from control messages128prior to input to the predictive model232; or (2) on the set228of training control messages226used to train the predictive model232. In some examples (i.e., the first scenario), the anomaly detector200applies the filter250after the predictive model232has been trained, but before one or more features222extracted from a subsequent control message128are input to the trained predictive model232for prediction P of a subsequent potential label234. 
Here, the anomaly detector200identifies that at least one of the one or more corresponding features222extracted from the subsequent control message128match the one or more features222extracted from a previous control message128having a predicted potential label234indicative of a network anomaly (i.e., the predicted potential label234satisfies a confidence threshold236). Thereafter, prior to using the predictive model232to predict a corresponding potential label234for the subsequent control message128, the anomaly detector200applies the filter250to remove the identified at least one of the one or more corresponding features222extracted from the subsequent control message128from use as feature inputs to the predictive model232. Accordingly, any prediction P output by the predictive model232at the predictor230for a potential label234will not be based on features222extracted from previous control messages128having predicted potential labels234indicative of a network anomaly, regardless of whether the analyzer240determined the network anomaly was non-detrimental or impacted network performance. For example,FIG.2Cillustrates the filter250in grey blocking and/or removing one of the three features222being communicated to the predictor230to predict a potential label234for a subsequent control message128. In other examples (i.e., the second scenario), such as inFIG.2D, the anomaly detector200re-trains the predictive model232so that any features222extracted from control messages128previously identified as having a prediction P of a potential label234indicative of a network anomaly are removed from the set228of training control messages226. This approach may also be applicable whether or not the control message128corresponds to a network performance issue202. To re-train the predictive model232, the anomaly detector200first identifies the one or more features222extracted from a prior control message128having a potential label234indicative of the network anomaly. 
Then, prior to using the predictive model232to predict a corresponding potential label234for a subsequent control message128, the anomaly detector200modifies the set228of training control messages226by removing each training control message226that includes one or more corresponding features222that match any of the identified one or more features222extracted from the prior control message128. Thereafter, the anomaly detector200re-trains the predictive model232on the modified set228of training control messages226. For instance,FIG.2Ddepicts the filter250modifying the set228of training control messages226by removing one of the three training control messages226from a retraining set (i.e., modified set228) of training control messages226. Once the one or more training control messages226have been removed, the filter250retrains the predictive model232on the modified set228of training control messages226. In other words, if the predictive model232is not trained to detect which features222are indicative of an anomaly, the anomaly will subsequently be undetected, and thus ignored. Additionally or alternatively, when a detected anomaly indicates a respective network performance issue202and the network performance issue202has subsequently been resolved, the anomaly detector200may be configured to remove any filter250relating to the resolved network performance issue202. In configurations where the predictive model232is an RNN model, the anomaly detector200may selectively apply a filter250. In other words, rather than removing an entire sequence as a feature222, the filter250may remove part of a sequence of the feature222that corresponds to a particular control message(s)128of a detected anomaly. Advantageously, the filter250may remove this part of the sequence before the sequence splits into smaller sequences. 
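The two filter250scenarios described above can be sketched side by side. The set-of-strings feature representation and the dictionary message format are illustrative assumptions:

```python
def filter_features(features: set, known_anomaly_features: set) -> set:
    """Scenario 1: drop extracted features that match features from a
    previously detected anomaly before they reach the predictive model."""
    return features - known_anomaly_features

def filter_training_set(training_msgs: list,
                        known_anomaly_features: set) -> list:
    """Scenario 2: remove every training control message that shares a
    feature with a previously detected anomaly; the predictive model is
    then re-trained on the returned (modified) set."""
    return [msg for msg in training_msgs
            if not set(msg["features"]) & known_anomaly_features]
```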
For instance, when the filter250identifies that there are too many CreateSessionRequest messages within a small time period, these individual messages can be completely or partially removed. FIG.3illustrates a flow diagram of an example method300for detecting network anomalies. At operation302, the method300receives a control message128from a cellular network100. At operation304, the method300extracts one or more features222from the control message128. At operation306, the method300predicts a potential label234for the control message128using a predictive model232configured to receive the one or more extracted features222from the control message128as feature inputs. The predictive model232is trained on a set228of training control messages226where each training control message226includes one or more corresponding features222and an actual label224. At operation308, the method300determines that a probability PBof the potential label234satisfies a confidence threshold236. At operation310, the method300analyzes the control message128to determine whether the control message128corresponds to a respective network performance issue202impacting network performance of the cellular network100. At operation312, when the control message128corresponds to the respective network performance issue202impacting network performance, the method300communicates the network performance issue202to a network entity40responsible for the network performance issue202. In some examples, when the control message128fails to correspond to the respective network performance issue202, the method300receives a subsequent control message128from the cellular network100and extracts one or more corresponding features222from the subsequent control message128. In these examples, the method300also identifies that at least one of the one or more corresponding features222extracted from the subsequent control message128match the one or more features222extracted from the control message128. 
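Stepping back, the overall flow of operations302-312 can be sketched end-to-end. All callables below are hypothetical stand-ins for the extractor220, predictor230, analyzer240, and the reporting path to the network entity40; they are not part of the disclosed implementation:

```python
from typing import Callable, Optional

def detect_anomaly(control_msg, extract: Callable, predict: Callable,
                   analyze: Callable, report: Callable,
                   threshold: float = 0.90) -> Optional[str]:
    """End-to-end sketch of method300: extract features (304), predict a
    potential label (306), gate on the confidence threshold (308), analyze
    the message (310), and report a performance-impacting issue (312)."""
    features = extract(control_msg)
    label, prob = predict(features)
    if prob < threshold:
        return None                # no confident correlation: no anomaly
    issue = analyze(control_msg)   # detrimental, or merely unique behavior?
    if issue is not None:
        report(issue)              # inform the responsible network entity
    return label
```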
Here, prior to using the predictive model232to predict a corresponding potential label234for a subsequent control message, the method300removes the identified at least one of the one or more features222extracted from the subsequent control message128from use as feature inputs to the predictive model232. In some implementations, when the control message128fails to correspond to the respective network performance issue202, the method300identifies the one or more features222extracted from the control message128. Here, in addition to identifying the one or more features222, the method300, prior to using the predictive model232to predict a corresponding potential label234for a subsequent control message128, modifies the set228of training control messages226by removing each training control message226that includes one or more corresponding features222that match any of the identified one or more features222extracted from the control message128and re-trains the predictive model232with the modified set228of training control messages226. FIG.4is a schematic view of an example computing device400that may be used to implement the systems (e.g., the anomaly detector200) and methods (e.g., the method300) described in this document. The computing device400is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. 
The computing device400includes a processor410(i.e., data processing hardware), memory420(i.e., memory hardware), a storage device430, a high-speed interface/controller440connecting to the memory420and high-speed expansion ports450, and a low speed interface/controller460connecting to a low speed bus470and a storage device430. Each of the components410,420,430,440,450, and460, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor410can process instructions for execution within the computing device400, including instructions stored in the memory420or on the storage device430to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display480coupled to high speed interface440. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices400may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory420stores information non-transitorily within the computing device400. The memory420may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory420may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device400. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). 
Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes. The storage device430is capable of providing mass storage for the computing device400. In some implementations, the storage device430is a computer-readable medium. In various different implementations, the storage device430may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In additional implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory420, the storage device430, or memory on processor410. The high speed controller440manages bandwidth-intensive operations for the computing device400, while the low speed controller460manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only. In some implementations, the high-speed controller440is coupled to the memory420, the display480(e.g., through a graphics processor or accelerator), and to the high-speed expansion ports450, which may accept various expansion cards (not shown). In some implementations, the low-speed controller460is coupled to the storage device430and a low-speed expansion port490. The low-speed expansion port490, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. 
The computing device400may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server400aor multiple times in a group of such servers400a, as a laptop computer400b, or as part of a rack server system400c. Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. 
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. 
To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims. | 55,316 |
11861454 | DETAILED DESCRIPTION Hereinafter, specific embodiments will be described with reference to the accompanying drawings. The following detailed description is provided to assist in a comprehensive understanding of at least one of a method, a device, and a system to be described herein. However, the detailed description is merely exemplary, and the present disclosure is not limited thereto. In the description of embodiments, a detailed description of known technologies related to the present disclosure will be omitted where it would render the subject matter of the present disclosure unclear. Terms to be used hereinafter will be defined in consideration of functions thereof in embodiments of the present disclosure, but may vary depending on the intentions of users or operators, as well as practices. Therefore, the terms shall be defined on the basis of the descriptions throughout the specification. The terms used in the detailed description shall be interpreted as being illustrative, while not being limitative, of embodiments. Unless clearly used otherwise, a singular form includes a plural meaning. It shall be understood that expressions such as “comprise,” “include,” and “have” used herein are for indicating certain features, numbers, steps, operations, elements, a part or combinations thereof and are not to be interpreted as excluding the presence or possibility of one or more features, numbers, steps, operations, elements, a part or combinations thereof other than the above. In addition, terms such as first and second may be used to describe a variety of components, but the components are not limited by such terms. Such terms may be used to distinguish one component from other components. For example, a first component may be referred to as a second component and, in a similar manner, a second component may be referred to as a first component without departing from the scope of the present disclosure.
FIG.1illustrates a configuration of a device for detecting an abnormality in time series data according to an embodiment of the present disclosure. Referring toFIG.1, an abnormality detection device100may include a pretreatment module102and a first artificial neural network module104. The abnormality detection device100is a device for detecting an abnormality in time series data, and uses machine learning technology to detect an abnormality. The pretreatment module102may perform pretreatment to input time series data. For example, the time series data may include heart rate data, brain wave data, temperature data, humidity data, precipitation data, quarterly sales performance data, traffic volumes, and the like, but is not limited thereto. The pretreatment module102may include a quantization part102aand a masking part102b. The quantization part102amay scale the input time series data in a predetermined size range. For example, the quantization part102amay scale the input time series data with a value between −1 and 1. The quantization part102amay quantize the time series data, scaled with a value between −1 and 1, according to the value thereof. FIG.2illustrates a process of quantizing time series data according to an embodiment of the present disclosure. Referring toFIG.2, the quantization part102amay divide the time series data between −1 and 1 into a plurality of size intervals. The quantization part102amay quantize a time series data value matching each of the size intervals by mapping the time series data value with a predetermined integer value (e.g., 0, 1, 2, 3, . . . , or N). Here, the integer value may be equal to or greater than 0, but is not limited thereto. The masking part102bmay generate tokens in predetermined units by tokenizing the quantized time series data. For example, the masking part102bmay generate tokens by tokenizing each value (i.e., a mapped integer value) of the quantized time series data.
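The scaling, quantization, and tokenization steps above can be sketched as follows (a minimal NumPy sketch; the min-max scaling rule, the bin count, and the function name are illustrative assumptions rather than details fixed by the disclosure):

```python
import numpy as np

def quantize(series, n_levels=16):
    """Scale a 1-D series into [-1, 1], then map each value to an
    integer token in {0, ..., n_levels - 1} by uniform binning."""
    series = np.asarray(series, dtype=float)
    # Min-max scale into [-1, 1]; assumes the series is not constant.
    lo, hi = series.min(), series.max()
    scaled = 2.0 * (series - lo) / (hi - lo) - 1.0
    # Divide [-1, 1] into n_levels equal size intervals and map each
    # value to the integer index of the interval it falls into.
    edges = np.linspace(-1.0, 1.0, n_levels + 1)
    return np.clip(np.digitize(scaled, edges[1:-1]), 0, n_levels - 1)

tokens = quantize([0.0, 1.0, 2.0, 3.0], n_levels=4)  # -> [0, 1, 2, 3]
```

Each quantized integer then serves directly as one token of the input sequence.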
The masking part102bmay cover a portion of the tokenized time series data with a mask. In an example embodiment, the masking part102bmay perform a masking operation of covering a predetermined ratio of the tokenized time series data with the mask. Here, the masking part102bmay randomly cover a predetermined ratio of the tokenized time series data with a mask or a specific portion of the tokenized time series data according to the training process of the first artificial neural network module104. The first artificial neural network module104may receive the pretreated time series data from the pretreatment module102, and be trained to detect an abnormality in the input time series data. In an example embodiment, the first artificial neural network module104may include an artificial neural network based on a transformer. The transformer is an artificial neural network adopting self-attention while using an encoder-decoder architecture, i.e., a sequence-to-sequence architecture. The first artificial neural network module104may learn the context of an input sequence by calculating the concentration ratio of each of the tokens by multi-head self-attention. The first artificial neural network module104may include an embedding part104a, a generator104b, and a reverse embedding part104c. The embedding part104amay generate embedded data by receiving the tokenized time series data, a portion of which is covered with a mask, from the masking part102b, and embedding the input time series data. The embedding part104amay include a first embedding part104a-1and a second embedding part104a-2. FIG.3schematically illustrates a process of embedding time series data according to an embodiment of the present disclosure. Referring toFIG.3, the first embedding part104a-1may perform first embedding to the tokenized time series data, a portion of which is covered with a mask.
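The random masking operation can be sketched like so (the helper name and sentinel mask id are assumptions, and the 15% default ratio is borrowed from common masked-modeling practice rather than stated in the disclosure):

```python
import numpy as np

MASK = -1  # hypothetical sentinel id standing in for the mask token

def random_mask(tokens, ratio=0.15, seed=None):
    """Cover a predetermined ratio of token positions with the mask,
    returning the masked copy and the masked positions."""
    rng = np.random.default_rng(seed)
    tokens = np.asarray(tokens).copy()
    n_mask = max(1, int(round(len(tokens) * ratio)))
    # Choose distinct positions at random and overwrite them.
    positions = rng.choice(len(tokens), size=n_mask, replace=False)
    tokens[positions] = MASK
    return tokens, np.sort(positions)
```

For the later, targeted pass, the random choice would simply be replaced by the specific positions to be covered.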
In this case, the first embedding part104a-1may form a first embedding vector by performing the first embedding to each quantized value (i.e., an integer indicating the size of the time series data) of the time series data. Here, the first embedding vector may have a matrix form matching a vector dimension corresponding to the total number of integers for quantization (i.e., the total number of size intervals inFIG.2)×one quantized value. The second embedding part104a-2may perform second embedding to the first-embedded time series data. The second embedding part104a-2may generate an embedding vector by performing the second embedding to the time series order of the first-embedded time series data. Consequently, time-series position information may be imparted to the corresponding embedding vector. The generator104bmay be an artificial neural network trained to restore the original time series data using the embedding vector, generated by the embedding part104a, as an input. That is, the embedding vector is configured such that a portion of the time series data is covered with a mask. Here, the generator104bmay learn to restore the portion of the embedding vector covered with a mask. FIG.4schematically illustrates a situation in which the generator104brestores a portion of time series data covered with a mask, according to an embodiment of the present disclosure. When the embedding vector is input, the generator104bmay output a restored embedding vector on the basis of the embedding vector. The reverse embedding part104cmay perform reverse embedding to the restored embedding vector output from the generator104b. The reverse embedding part104cmay convert the restored embedding vector into an input data form, i.e., the form of the time series data input to the artificial neural network module104by the reverse embedding. Here, the input data form may be a data form obtained by quantizing the time series data. 
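A minimal sketch of the two embedding steps — a value-embedding table lookup followed by an additive position embedding. The random table initialization and names are assumptions for illustration; in a real model both tables would be trained parameters, and a dedicated table row would be reserved for the mask token:

```python
import numpy as np

def embed(tokens, n_levels, d_model, seed=0):
    """First embedding: look up one d_model-dim vector per quantized
    value from an (n_levels x d_model) table. Second embedding: add a
    vector per time-series position, imparting order information."""
    rng = np.random.default_rng(seed)
    value_table = rng.normal(size=(n_levels, d_model))   # N x D table
    pos_table = rng.normal(size=(len(tokens), d_model))  # S x D table
    return value_table[np.asarray(tokens)] + pos_table

emb = embed([0, 3, 1, 2], n_levels=4, d_model=8)  # shape (4, 8)
```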
FIG.5illustrates a process of converting a restored embedding vector into an input data form according to an embodiment of the present disclosure. Referring toFIG.5, the reverse embedding part104cmay calculate the similarity between a restored embedding vector V1output from the generator104band a first embedding vector V2produced by the embedding of the embedding part104a. Here, the restored embedding vector V1may have the shape of a matrix matching a vector dimension D corresponding to a product of the length S of the time series data×one quantized value. In addition, the first embedding vector V2may have the shape of a matrix matching a vector dimension D corresponding to a product of the total number N of integers for quantization×one quantized value. The reverse embedding part104cmay convert the restored embedding vector into the input data form by selecting, as a quantized value at each position of the time series data, the value having the maximum similarity between the restored embedding vector V1and the first embedding vector V2. The first artificial neural network module104may compare restored time series data output from the reverse embedding part104cwith an answer value (i.e., original time series data) so that the parameters of the generator104bare learned. FIG.6is a flowchart illustrating a method of detecting an abnormality in time series data according to an embodiment of the present disclosure. Although the method is illustrated as including a plurality of operations in the flowchart illustrated inFIG.6, at least some of the operations may be performed in different orders, be combined and performed with other operations, or be divided into sub-operations, or one or more operations (not shown) may be added. Referring toFIG.6, in S101, the pretreatment module102scales input time series data in a predetermined size range and then quantizes the scaled time series data.
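The similarity-and-argmax reverse embedding can be sketched as follows (cosine similarity is one plausible choice of similarity measure; the disclosure does not fix which one is used):

```python
import numpy as np

def reverse_embed(restored, value_table):
    """Map each restored embedding (S x D) back to a quantized value:
    per position, pick the value whose table row (N x D) is most
    similar under cosine similarity."""
    r = restored / (np.linalg.norm(restored, axis=1, keepdims=True) + 1e-12)
    t = value_table / (np.linalg.norm(value_table, axis=1, keepdims=True) + 1e-12)
    # S x N similarity matrix; argmax over the N candidates per position.
    return (r @ t.T).argmax(axis=1)
```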
That is, the pretreatment module102may scale the time series data in the predetermined size range, and then quantize the scaled time series data by dividing the scaled time series data into a plurality of size intervals and mapping time series data values matching the size intervals with predetermined integer values. Afterwards, in S103, the pretreatment module102performs first masking to cover a portion of the quantized time series data with a mask. Here, the pretreatment module102may randomly cover a predetermined ratio of the quantized time series data. Subsequently, in S105, the first artificial neural network module104generates an embedding vector by receiving the time series data, the portion of which is randomly covered with a mask, from the pretreatment module102and embedding the input time series data. Specifically, the first artificial neural network module104may generate the embedding vector by performing first embedding to each quantized value of the time series data randomly covered with a mask and then second embedding to each time series order of the first-embedded time series data. Afterwards, in S107, the first artificial neural network module104outputs first-restored time series data in which the portion randomly covered with the mask is restored by inputting the embedding vector to the generator104b. Here, the first artificial neural network module104outputting the first-restored time series data may include converting the restored embedding vector, output from the generator104b, into an input data form. Subsequently, in S109, the first artificial neural network module104extracts a portion to be second-masked from the time series data by comparing the first-restored time series data and the original time series data (i.e., the time series data, a portion of which is not covered with a mask, as an answer value).
Specifically, the first artificial neural network module104may calculate the difference between the first-restored time series data and the original time series data at each time series position. The first artificial neural network module104may line up differences between the first-restored time series data and the original time series data in descending order and extract any difference equal to or greater than a predetermined threshold value as a portion to be second-masked. Here, the first artificial neural network module104may be first trained so that the difference between the first-restored time series data and the original time series data is minimized. Afterwards, in S111, the pretreatment module102performs second masking to cover a portion of the quantized time series data, in which the difference between the first-restored time series data and the original time series data is equal to or greater than the predetermined threshold value, with a mask. Subsequently, in S113, the first artificial neural network module104receives the second-masked time series data from the pretreatment module102and outputs second-restored time series data, in which the second-masked portion is restored. Here, the first artificial neural network module104outputting the second-restored time series data may include generating the embedding vector by embedding the second-masked time series data, outputting the restored embedding vector by inputting the generated embedding vector to the generator104b, and converting the output restored embedding vector into an input data form. Here, the first artificial neural network module104may be second trained to compare the second-restored time series data and the original time series data so that the difference between the second-restored time series data and the original time series data is minimized.
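Selecting the portions to second-mask (S109) amounts to a sort-and-threshold over position-wise differences; a sketch follows (absolute difference is an assumption — the disclosure only says "difference"):

```python
import numpy as np

def second_mask_positions(first_restored, original, threshold):
    """Return positions whose restoration error meets the threshold,
    lined up from the largest difference downward."""
    diff = np.abs(np.asarray(first_restored) - np.asarray(original))
    order = np.argsort(diff)[::-1]  # indices sorted by descending difference
    return [int(i) for i in order if diff[i] >= threshold]
```

The returned positions are exactly those the pretreatment module would cover in the second masking pass (S111).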
Afterwards, the first artificial neural network module104calculates mean restored time series data by averaging the first-restored time series data and the second-restored time series data and trains the generator104bso that the difference between the mean restored time series data and the original time series data is minimized. That is, the first artificial neural network module104may be third trained so that the difference between the mean restored time series data and the original time series data is minimized. Here, in the training process of the first artificial neural network module104, normal data may only be used as the time series data. That is, the first artificial neural network module104may perform machine learning only using normal time series data. When the training of the first artificial neural network module104is finished, the time series data may be input to the first artificial neural network module104in an inference process in order to determine whether or not the time series data has an abnormality. According to the disclosed embodiment, an abnormality in time series data can be detected using a transformer-based artificial neural network. Thus, the abnormality in the time series data can be detected using a single artificial neural network without having to use a plurality of decoders. Due to the use of the deep learning model suitable for processing the time series data, normal distribution of the time series data can be properly learned, thereby improving abnormality detection performance. The term “module” used herein may refer to a functional and structural combination of hardware for realizing the technical principle of the present disclosure and software for driving the hardware. For example, the module may mean a logical unit of specific codes and a hardware resource by which the specific codes are to be performed. The module does not necessarily mean physically connected codes or a single type of hardware.
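The third training signal — averaging the two restorations and shrinking their distance to the original — can be sketched as follows (mean squared error stands in for the disclosure's unspecified difference measure):

```python
import numpy as np

def mean_restoration_loss(first_restored, second_restored, original):
    """Average the two restorations and score the mean against the
    original; training updates the generator to minimize this value."""
    mean_restored = (np.asarray(first_restored, dtype=float)
                     + np.asarray(second_restored, dtype=float)) / 2.0
    loss = float(np.mean((mean_restored - np.asarray(original, dtype=float)) ** 2))
    return mean_restored, loss
```

At inference time, the same restoration error evaluated on new data would serve as the anomaly score: series the model cannot restore well are flagged as abnormal.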
FIG.7illustrates a configuration of a device for detecting an abnormality in time series data according to another embodiment of the present disclosure. Here, features different from those of the embodiment illustrated inFIG.1will mainly be described. Referring toFIG.7, the abnormality detection device100may include a pretreatment module102, a first artificial neural network module104, and a second artificial neural network module106. Here, the pretreatment module102and the first artificial neural network module104are the same as or similar to those of the former embodiment illustrated inFIG.1, and thus detailed descriptions thereof will be omitted. In the illustrated embodiment, the second artificial neural network module106may include a transformer-based artificial neural network. The second artificial neural network module106and the first artificial neural network module104may constitute a generative adversarial model. In this generative adversarial model, the first artificial neural network module104may serve as a generator, whereas the second artificial neural network module106may serve as a discriminator. The second artificial neural network module106may receive the original time series data and the restored time series data output from the first artificial neural network module104. Here, a CLS token may be inserted into the head portion of each of the original time series data and the restored time series data input to the second artificial neural network module106. Here, the CLS token may indicate a vector token used in classification. The second artificial neural network module106may include a discriminator106a. The discriminator106amay be an artificial neural network trained to classify the original time series data as true and the restored time series data as false.
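The adversarial objectives implied by this generator/discriminator split can be sketched with standard GAN-style binary cross-entropy losses (the disclosure does not name a specific loss; these are the conventional choices):

```python
import numpy as np

def discriminator_loss(p_real, p_fake):
    """Discriminator objective: push p_real -> 1 (original series
    classified as true) and p_fake -> 0 (restored series as false)."""
    eps = 1e-9  # avoid log(0)
    return float(-np.mean(np.log(p_real + eps) + np.log(1.0 - p_fake + eps)))

def generator_loss(p_fake):
    """Generator objective: the first module is rewarded when its
    restored series is classified as true by the discriminator."""
    eps = 1e-9
    return float(-np.mean(np.log(p_fake + eps)))
```

Alternating the two updates, as the text goes on to describe, is the usual adversarial training schedule.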
Here, the first artificial neural network module104may be trained to generate the restored time series data so that the difference between the original time series data and the restored time series data, as classified by the discriminator106a, is minimized. In this manner, the first artificial neural network module104may generate the restored time series data to be more similar to the original time series data. The first artificial neural network module104and the second artificial neural network module106may be trained in an alternating manner. In addition, in the training process of the first artificial neural network module104, the second artificial neural network module106may also be trained. For example, the CLS token may be inserted into the head portion of each of the first-restored time series data, the second-restored time series data, the mean restored time series data, and the like, input to the second artificial neural network module106, and then classified by the second artificial neural network module106. FIG.8is a block diagram illustrating a computing environment10including a computing apparatus suitable to be used in example embodiments. In the illustrated embodiments, each component may have a function and capability different from those to be described below, and additional components not described below may be included. The illustrated computing environment10includes a computing device12. According to an embodiment, the computing device12may be the locking apparatus110. In addition, the computing device12may be the device100for detecting an abnormality in time series data. The computing device12includes at least one processor14, a computer readable storage medium16, and a communication bus18. The processor14may allow the computing device12to operate according to the example embodiments described above. For example, the processor14may execute one or more programs stored in the computer readable storage medium16.
The one or more programs may include one or more computer executable instructions. The computer executable instructions may be configured to allow the computing device12to perform the operations according to the example embodiments when executed by the processor14. The computer readable storage medium16may be configured to store computer executable instructions, program codes, program data, and/or other suitable forms of information. A program20stored in the computer readable storage medium16may include a set of instructions executable by the processor14. According to an embodiment, the computer readable storage medium16may be a memory (e.g., a volatile memory such as a random access memory (RAM), a non-volatile memory, or a combination thereof), one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, other types of storage media which can be accessed by the computing device12and store intended information, or combinations thereof. The communication bus18may interconnect various components of the computing device12, including the processor14and the computer readable storage medium16, to each other. The computing device12may further include one or more input/output (I/O) interfaces22providing an interface for one or more I/O devices24and one or more network communication interfaces26. The I/O interface22and the network communication interfaces26may be connected to the communication bus18. The I/O devices24may be connected to other components of the computing device12through the I/O interfaces22. The I/O devices24may include input devices, such as a pointing device (e.g., a mouse and a track pad), a keyboard, a touch input device (e.g., a touch pad and a touch screen), a voice or sound input device, various types of sensors, and/or a capturing device, and/or output devices, such as a display device, a printer, a speaker, and/or a network card. 
Each of the I/O devices24may be one component constituting the computing device12, may be included in the computing device12, or may be connected to the computing device12as a device separate from the computing device12. Although the exemplary embodiments of the present disclosure have been described in detail hereinabove, a person having ordinary knowledge in the technical field to which the present disclosure pertains will appreciate that various modifications are possible to the foregoing embodiments without departing from the scope of the present disclosure. Therefore, the scope of protection of the present disclosure shall not be limited to the foregoing embodiments but shall be defined by the appended Claims and equivalents thereof. | 21,821 |
11861455 | DETAILED DESCRIPTION In the following description, some specific details are included to provide a thorough understanding of various disclosed embodiments. One skilled in the relevant art, however, will recognize that embodiments may be practiced without one or more of these specific details, or with other methods, components, materials, etc. In other instances, well-known structures associated with quantum processors, such as quantum devices, couplers, and control systems including microprocessors and drive circuitry have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the embodiments of the present methods. Throughout this specification and the appended claims, the words “element” and “elements” are used to encompass, but are not limited to, all such structures, systems, and devices associated with quantum processors, as well as their related programmable parameters. Unless the context requires otherwise, throughout the specification and claims which follow, the word “comprise” and variations thereof, such as, “comprises” and “comprising” are to be construed in an open, inclusive sense, that is as “including, but not limited to.” Reference throughout this specification to “one embodiment” “an embodiment”, “another embodiment”, “one example”, “an example”, “another example”, “one implementation”, “another implementation”, or the like means that a particular referent feature, structure, or characteristic described in connection with the embodiment, example, or implementation is included in at least one embodiment, example, or implementation. Thus, the appearances of the phrases “in one embodiment”, “in an embodiment”, “another embodiment” or the like in various places throughout this specification are not necessarily all referring to the same embodiment, example, or implementation. 
Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments, examples, or implementations. It should be noted that, as used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the content clearly dictates otherwise. Thus, for example, reference to a problem-solving system including “a quantum processor” includes a single quantum processor, or two or more quantum processors. It should also be noted that the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. The headings provided herein are for convenience only and do not interpret the scope or meaning of the embodiments. Hybrid Computing System Comprising a Quantum Processor FIG.1illustrates a hybrid computing system100including a digital computer102coupled to an analog computer104. In some implementations, the analog computer104is a quantum computer and the digital computer102is a classical computer. The exemplary digital computer102includes a digital processor (such as one or more central processor units106) that may be used to perform classical digital processing tasks described in the present systems and methods. Those skilled in the relevant art will appreciate that the present systems and methods can be practiced with other digital computer configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, personal computers (“PCs”), network PCs, mini-computers, mainframe computers, and the like, when properly configured or programmed to form special purpose machines, and/or when communicatively coupled to control an analog computer, for instance a quantum computer. Digital computer102will at times be referred to in the singular herein, but this is not intended to limit the application to a single digital computer. 
The present systems and methods can also be practiced in distributed computing environments, where tasks or sets of instructions are performed or executed by remote processing devices, which are linked through a communications network. In a distributed computing environment computer- or processor-readable instructions (sometimes known as program modules), application programs and/or data, may be located in both local and remote memory storage devices (e.g., nontransitory computer- or processor-readable media). Digital computer102may include at least one or more digital processors (e.g., one or more central processor units106), one or more system memories108, and one or more system buses110that couples various system components, including system memory108to central processor unit106. The digital processor may be any logic processing unit, such as one or more central processing units (“CPUs”) with one or more cores, graphics processing units (“GPUs”), digital signal processors (“DSPs”), application-specific integrated circuits (“ASICs”), field-programmable gate arrays (“FPGAs”), programmable logic controllers (PLCs), etc. Digital computer102may include a user input/output subsystem112. In some implementations, the user input/output subsystem includes one or more user input/output components such as a display114, mouse116, and/or keyboard118. System bus110can employ any known bus structures or architectures, including a memory bus with a memory controller, a peripheral bus, and a local bus. System memory108may include non-volatile memory, for example one or more of read-only memory (“ROM”), static random access memory (“SRAM”), Flash NAND; and volatile memory, for example random access memory (“RAM”) (not shown), all of which are examples of nontransitory computer- or processor-readable media. 
A basic input/output system (“BIOS”)120, which can form part of the ROM, contains basic routines that help transfer information between elements within digital computer102, such as during startup. Digital computer102may also include other non-volatile memory122. Non-volatile memory122may take a variety of forms, including: a hard disk drive for reading from and writing to a hard disk, an optical disk drive for reading from and writing to removable optical disks, and/or a magnetic disk drive for reading from and writing to magnetic disks, all of which are examples of nontransitory computer- or processor-readable media. The optical disk can be a CD-ROM or DVD, while the magnetic disk can be a magnetic floppy disk or diskette. Non-volatile memory122may communicate with the digital processor via system bus110and may include appropriate interfaces or controllers124coupled to system bus110. Non-volatile memory122may serve as nontransitory long-term storage for computer- or processor-readable instructions, data structures, or other data (also called program modules) for digital computer102. Although digital computer102has been described as employing hard disks, optical disks and/or magnetic disks, those skilled in the relevant art will appreciate that other types of non-volatile computer-readable media may be employed, such as magnetic cassettes, flash memory cards, Flash, ROMs, smart cards, etc., all of which are further examples of nontransitory computer- or processor-readable media. Those skilled in the relevant art will appreciate that some computer architectures conflate volatile memory and non-volatile memory. For example, data in volatile memory can be cached to non-volatile memory, or a solid-state disk that employs integrated circuits to provide non-volatile memory. Some computers place data traditionally stored on disk in memory.
As well, some media that are traditionally regarded as volatile can have a non-volatile form, e.g., the Non-Volatile Dual In-line Memory Module ("NVDIMM") variation of the Dual In-line Memory Module. Various sets of computer- or processor-readable instructions (also called program modules), application programs and/or data can be stored in system memory 108. For example, system memory 108 may store an operating system 126 and a set of computer- or processor-readable server instructions (i.e., server modules) 128. In some implementations, server module 128 includes instructions for communicating with remote clients and scheduling use of resources, including resources on the digital computer 102 and analog computer 104. An example is a Web server application and/or Web client or browser application that permits digital computer 102 to exchange data with sources via the Internet, corporate Intranets, or other networks, as well as with other server applications executing on server computers. In some implementations, system memory 108 may store a set of computer- or processor-readable calculation instructions (i.e., calculation module 130) to perform pre-processing, co-processing, and post-processing for analog computer 104. In some implementations, system memory 108 may store post-processing instructions, or make use of the instructions in calculation instructions module 130. Execution of the post-processing instructions can cause a processor (such as CPU 106) to perform post-processing in digital computer 102. For example, digital computer 102 can perform post-processing of samples obtained from analog computer 104 based on post-processing instructions in calculation instructions module 130. Post-processing of samples from a physical quantum annealer, such as analog computer 104, is described in following sections of the present disclosure. Post-processing can include, for example, quantum Monte Carlo and/or annealed importance sampling.
In accordance with the present systems and methods, system memory 108 may store a set of analog computer interface modules 132 operable to interact with the analog computer 104. In some implementations, system memory 108 may store a set of Boltzmann machine instructions or a Boltzmann machine module 134 to provide procedures and parameters for the operation of the analog computer 104 as a Boltzmann machine. For example, the Boltzmann machine module 134 can implement a method (such as method 300 of FIG. 3) on digital computer 102 and analog computer 104. The hybrid computer 100, following instructions in the Boltzmann machine module 134, can implement graphical representations of portions of Boltzmann machines. In some implementations, system memory includes a set of training and validation instructions or a training and validation instructions module 136. A Boltzmann machine can be trained via supervised or unsupervised learning. The hybrid computer 100 may implement training methods defined in the training and validation instructions module 136. As well, a Boltzmann machine, once trained, may need validating. The hybrid computer 100 may validate a Boltzmann machine following methods defined in the training and validation instructions module 136. In some implementations, system memory 108 may store a set of runtime instructions or a runtime instructions module 138 to provide executable procedures and parameters to deploy and/or monitor a Boltzmann machine. While shown in FIG. 1 as being stored in system memory 108, the modules shown and other data can also be stored elsewhere, including in non-volatile memory 122 or one or more other non-transitory computer- or processor-readable media. The analog computer 104 can be provided in an isolated environment (not shown). For example, where the analog computer 104 is a quantum computer, the environment shields the internal elements of the quantum computer from heat, magnetic field, and the like. The analog computer 104 includes one or more analog processors 140.
Examples of analog processor 140 include quantum processors such as those described below in reference to FIG. 2. A quantum processor includes programmable elements such as qubits, couplers, and other devices. The qubits are read out via readout system 142. These results are fed to the various sets of computer- or processor-readable instructions for the digital computer 102, including server module 128, calculation module 130, analog computer interface modules 132, or other modules stored in non-volatile memory 122, or returned over a network or the like. The qubits are controlled via qubit control system 144. The couplers are controlled via coupler control system 146. In some embodiments, the qubit control system 144 and the coupler control system 146 are used to implement quantum annealing, as described herein, on analog processor 140. In some implementations, the digital computer 102 can operate in a networked environment using logical connections to at least one client computer system. In some implementations, the digital computer 102 is coupled via logical connections to at least one database system. These logical connections may be formed using any means of digital communication, for example, through a network such as a local area network ("LAN") or a wide area network ("WAN") including, for example, the Internet. The networked environment may include wired or wireless enterprise-wide computer networks, intranets, extranets, and/or the Internet. Other embodiments may include other types of communication networks such as telecommunications networks, cellular networks, paging networks, and other mobile networks. The information sent or received via the logical connections may or may not be encrypted. When used in a LAN networking environment, digital computer 102 may be connected to the LAN through an adapter or network interface card ("NIC") (communicatively linked to system bus 110).
When used in a WAN networked environment, digital computer 102 may include an interface and modem (not shown), or a device such as a NIC, for establishing communications over the WAN. Non-networked communications may additionally, or alternatively, be employed. In accordance with some embodiments of the present systems and devices, a quantum processor (such as quantum processor 140) may be designed to perform quantum annealing and/or adiabatic quantum computation. An evolution Hamiltonian is constructed that is proportional to the sum of a first term proportional to a problem Hamiltonian and a second term proportional to a delocalization Hamiltonian, as follows:

H_E ∝ A(t) H_P + B(t) H_D

where H_E is the evolution Hamiltonian, H_P is the problem Hamiltonian, H_D is the delocalization Hamiltonian, and A(t), B(t) are coefficients that can control the rate of evolution, and typically lie in the range [0, 1]. In some implementations, a time-varying envelope function is placed on the problem Hamiltonian. A suitable delocalization Hamiltonian is given by:

H_D ∝ −(1/2) Σ_{i=1}^{N} Δ_i σ_i^x

where N represents the number of qubits, σ_i^x is the Pauli x-matrix for the i-th qubit, and Δ_i is the single qubit tunnel splitting induced in the i-th qubit. Here, the σ_i^x terms are examples of "off-diagonal" terms. A common problem Hamiltonian includes a first component proportional to diagonal single qubit terms and a second component proportional to diagonal multi-qubit terms, and may be of the following form:

H_P ∝ −(ε/2) [ Σ_{i=1}^{N} h_i σ_i^z + Σ_{j>i}^{N} J_ij σ_i^z σ_j^z ]

where N represents the number of qubits, σ_i^z is the Pauli z-matrix for the i-th qubit, h_i and J_ij are dimensionless local fields for the qubits and couplings between qubits, respectively, and ε is a characteristic energy scale for H_P. The σ_i^z and σ_i^z σ_j^z terms are examples of "diagonal" terms. The former is a single qubit term and the latter a two qubit term.
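The diagonal part of the problem Hamiltonian above can be evaluated for a classical spin configuration on a digital computer. The following Python sketch illustrates this; the dictionary-based layout and the name ising_energy are illustrative assumptions, not part of the disclosure:

```python
def ising_energy(spins, h, J, epsilon=1.0):
    """Energy of a classical spin configuration under the diagonal
    problem Hamiltonian H_P = -(eps/2)[sum h_i s_i + sum_{j>i} J_ij s_i s_j].

    spins: dict mapping qubit index -> +1 or -1
    h:     dict mapping qubit index -> local field h_i
    J:     dict mapping (i, j) with j > i -> coupling J_ij
    """
    single = sum(h[i] * spins[i] for i in h)
    double = sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)
    return -0.5 * epsilon * (single + double)

# Two ferromagnetically coupled qubits: aligned spins have lower energy.
h = {0: 0.0, 1: 0.0}
J = {(0, 1): 1.0}
aligned = ising_energy({0: +1, 1: +1}, h, J)   # -0.5
anti = ising_energy({0: +1, 1: -1}, h, J)      # +0.5
```

The off-diagonal σ_i^x terms have no classical counterpart and are not represented here; they matter only while the transverse field is non-zero during the anneal.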
Throughout this specification, the terms "problem Hamiltonian" and "final Hamiltonian" are used interchangeably unless the context dictates otherwise. Certain states of the quantum processor are energetically preferred, or simply preferred, by the problem Hamiltonian. These include the ground states but may include excited states. Hamiltonians such as H_D and H_P in the above two equations, respectively, may be physically realized in a variety of different ways. A particular example is realized by an implementation of superconducting qubits.

Exemplary Superconducting Quantum Processor for Quantum Annealing

FIG. 2 is a schematic diagram of a portion of an exemplary superconducting quantum processor 200 designed for quantum annealing (and/or adiabatic quantum computing), components of which may be used to implement the present systems and devices. The portion of superconducting quantum processor 200 shown in FIG. 2 includes two superconducting qubits 202 and 204. Also shown is a tunable σ_i^z σ_j^z coupling (diagonal coupling) via coupler 206 between qubits 202 and 204 (i.e., providing 2-local interaction). While the portion of quantum processor 200 shown in FIG. 2 includes only two qubits 202, 204 and one coupler 206, those of skill in the art will appreciate that quantum processor 200 may include any number of qubits and any number of couplers coupling information therebetween. The portion of quantum processor 200 shown in FIG. 2 may be implemented to physically realize quantum annealing and/or adiabatic quantum computing. Quantum processor 200 includes a plurality of interfaces 208, 210, 212, 214, and 216 that are used to configure and control the state of quantum processor 200. Each of interfaces 208, 210, 212, 214, and 216 may be realized by a respective inductive coupling structure, as illustrated, as part of a programming subsystem and/or an evolution subsystem.
Such a programming subsystem and/or evolution subsystem may be separate from quantum processor 200, or it may be included locally (i.e., on-chip with quantum processor 200) as described in, for example, U.S. Pat. Nos. 7,876,248 and 8,035,540. In the operation of quantum processor 200, interfaces 208 and 214 may each be used to couple a flux signal into a respective compound Josephson junction 218 and 220 of qubits 202 and 204, thereby realizing a tunable tunneling term (the Δ_i term) in the system Hamiltonian. This coupling provides the off-diagonal σ^x terms of the Hamiltonian, and these flux signals are examples of "delocalization signals". In some implementations, the tunneling term is selected to make a first portion of the qubits on the quantum processor more classical relative to a second portion of the qubits. For example, qubit 202 may be a hidden unit in a Boltzmann machine and have a smaller tunneling term relative to qubit 204. Similarly, interfaces 210 and 212 may each be used to apply a flux signal into a respective qubit loop of qubits 202 and 204, thereby realizing the h_i terms in the system Hamiltonian. This coupling provides the diagonal σ^z terms in the system Hamiltonian. Furthermore, interface 216 may be used to couple a flux signal into coupler 206, thereby realizing the J_ij term(s) in the system Hamiltonian. This coupling provides the diagonal σ_i^z σ_j^z terms in the system Hamiltonian. In FIG. 2, the contribution of each of interfaces 208, 210, 212, 214, and 216 to the system Hamiltonian is indicated in boxes 208a, 210a, 212a, 214a, and 216a, respectively. As shown in the example of FIG. 2, the boxes 208a, 210a, 212a, 214a, and 216a are elements of time-varying Hamiltonians for quantum annealing and/or adiabatic quantum computing. Throughout this specification and the appended claims, the term "quantum processor" is used to generally describe a collection of physical qubits (e.g., qubits 202 and 204) and couplers (e.g., coupler 206).
The physical qubits 202 and 204 and the coupler 206 are referred to as the "programmable elements" of the quantum processor 200, and their corresponding parameters (e.g., the qubit h_i values and the coupler J_ij values) are referred to as the "programmable parameters" of the quantum processor. In the context of a quantum processor, the term "programming subsystem" is used to generally describe the interfaces (e.g., "programming interfaces" 210, 212, and 216) used to apply the programmable parameters (e.g., the h_i and J_ij terms) to the programmable elements of the quantum processor 200, and other associated control circuitry and/or instructions. As previously described, the programming interfaces of the programming subsystem may communicate with other subsystems which may be separate from the quantum processor or may be included locally on the processor. As described in more detail later, the programming subsystem may be configured to receive programming instructions in a machine language of the quantum processor and execute the programming instructions to program the programmable elements in accordance with the programming instructions. Similarly, in the context of a quantum processor, the term "evolution subsystem" generally includes the interfaces (e.g., "evolution interfaces" 208 and 214) used to evolve the programmable elements of the quantum processor 200, and other associated control circuitry and/or instructions. For example, the evolution subsystem may include annealing signal lines and their corresponding interfaces (208, 214) to the qubits (202, 204). Quantum processor 200 also includes readout devices 222 and 224, where readout device 222 is associated with qubit 202 and readout device 224 is associated with qubit 204. In some embodiments, such as shown in FIG. 2, each of readout devices 222 and 224 includes a DC-SQUID inductively coupled to the corresponding qubit.
In the context of quantum processor 200, the term "readout subsystem" is used to generally describe the readout devices 222, 224 used to read out the final states of the qubits (e.g., qubits 202 and 204) in the quantum processor to produce a bit string. The readout subsystem may also include other elements, such as routing circuitry (e.g., latching elements, a shift register, or a multiplexer circuit) and/or may be arranged in alternative configurations (e.g., an XY-addressable array, an XYZ-addressable array, etc.). Qubit readout may also be performed using alternative circuits, such as that described in PCT Patent Publication WO2012064974. While FIG. 2 illustrates only two physical qubits 202, 204, one coupler 206, and two readout devices 222, 224, a quantum processor (e.g., processor 200) may employ any number of qubits, couplers, and/or readout devices, including a larger number (e.g., hundreds, thousands or more) of qubits, couplers and/or readout devices. The application of the teachings herein to processors with a different (e.g., larger) number of computational components should be readily apparent to those of ordinary skill in the art. Examples of superconducting qubits include superconducting flux qubits, superconducting charge qubits, and the like. In a superconducting flux qubit the Josephson energy dominates or is equal to the charging energy. In a charge qubit it is the reverse. Examples of flux qubits that may be used include rf-SQUIDs, which include a superconducting loop interrupted by one Josephson junction, persistent current qubits, which include a superconducting loop interrupted by three Josephson junctions, and the like. See examples of rf-SQUID qubits in Bocko et al., 1997, IEEE Trans. on Appl. Supercond. 7, 3638; Friedman et al., 2000, Nature 406, 43; and Harris et al., 2010, Phys. Rev. B 81, 134510; and of persistent current qubits in Mooij et al., 1999, Science 285, 1036; and Orlando et al., 1999, Phys. Rev. B 60, 15398.
In addition, hybrid charge-phase qubits, where the energies are equal, may also be used. Further details of superconducting qubits may be found in Makhlin et al., 2001, Rev. Mod. Phys. 73, 357; Devoret et al., 2004, arXiv:cond-mat/0411174; Zagoskin and Blais, 2007, Physics in Canada 63, 215; Clarke and Wilhelm, 2008, Nature 453, 1031; Martinis, 2009, Quantum Inf. Process. 8, 81; and Devoret and Schoelkopf, 2013, Science 339, 1169. In some embodiments, the qubits and couplers are controlled by on-chip circuitry. Examples of on-chip control circuitry can be found in U.S. Pat. Nos. 7,876,248; 7,843,209; 8,018,244; 8,098,179; 8,169,231; and 8,786,476. Further details and implementations of exemplary quantum processors that may be used in conjunction with the present systems and devices are described in, for example, U.S. Pat. Nos. 7,533,068; 8,008,942; 8,195,596; 8,190,548; and 8,421,053.

Sampling Using a Physical Quantum Annealer

A physical quantum annealer (PQA) can be used to change the Hamiltonian of a quantum system, and can cause a change in a state of the quantum system. After annealing, the Hamiltonian of the quantum system can be similar, or the same, as the problem Hamiltonian H_P. The PQA can be an open system, i.e., a system that interacts with the environment. In the case of an open-system quantum annealer, the state can be at least an approximation to a thermal state of the quantum system. In the special case of an adiabatic quantum annealer (where the system is isolated from the environment), the state can be at least an approximation to the ground state of Hamiltonian H_P. The following paragraphs refer to an open-system quantum annealer. The state of a PQA at normalized time t (where t ∈ [0, 1]) can be described by a density matrix ρ_ij(t), where i, j denote eigenstates of a Hamiltonian H(t). The state can be modeled by an equation of the form ρ̇_ij = −i[H, ρ] + F(ρ), where F(ρ) is a linear matrix-valued function.
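A rough numerical illustration of an equation of this form: the Python sketch below Euler-integrates ρ̇ = −i[H, ρ] + F(ρ) for a single two-level system, with F(ρ) chosen, purely as an assumption for illustration, to damp the off-diagonal elements (simple dephasing). Nothing here models an actual PQA; it only shows the structure of the master equation:

```python
# 2x2 complex-matrix helpers (no external dependencies).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(H, rho):
    HR, RH = matmul(H, rho), matmul(rho, H)
    return [[HR[i][j] - RH[i][j] for j in range(2)] for i in range(2)]

def step(rho, H, gamma, dt):
    """One Euler step of d(rho)/dt = -i[H, rho] + F(rho), where F damps
    the off-diagonal elements at rate gamma (a simple dephasing choice)."""
    c = commutator(H, rho)
    new = [[0j, 0j], [0j, 0j]]
    for i in range(2):
        for j in range(2):
            damp = -gamma * rho[i][j] if i != j else 0j
            new[i][j] = rho[i][j] + dt * (-1j * c[i][j] + damp)
    return new

H = [[0.5, 0.0], [0.0, -0.5]]    # diagonal Hamiltonian (sigma^z / 2)
rho = [[0.5, 0.5], [0.5, 0.5]]   # initial equal-superposition state
for _ in range(1000):
    rho = step(rho, H, gamma=1.0, dt=0.01)
# Populations (diagonal) are preserved while coherences decay toward zero.
```

Under this F(ρ) the density matrix relaxes toward a diagonal (classical) mixture, loosely analogous to the freeze-out behavior described next.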
At the start of annealing, the density matrix is diagonal and the state of the PQA can be described by a quantum Boltzmann distribution. At an intermediate time during annealing t_1, the state of the quantum system can begin to deviate from the quantum Boltzmann distribution. One reason for the deviation can be a slowdown of open-system quantum dynamics. The point at which the state begins to deviate from the quantum Boltzmann distribution can be referred to as the freeze-out point. Past that point the state will deviate from the quantum Boltzmann distribution. There can be multiple freeze-out points t_1 < t_2 < . . . < t_n where the dynamics between progressively smaller subspaces of the state space slow down. If the points t_1, . . . , t_n are sufficiently close to each other, the state of the PQA in the region t ∈ [t_1, t_n] can be close to a quantum Boltzmann distribution. The time up to which the state of the PQA is close to a quantum Boltzmann distribution can be denoted as t̄. For normalized time t > t̄, the state can increasingly deviate from a quantum Boltzmann distribution, and its evolution can be described as "running downhill" in the quantum configuration space, reaching equilibrium locally in subspaces while not necessarily reaching equilibrium globally. The distribution corresponding to the state of the PQA at normalized time t can be denoted as p(t). For annealing parameters θ(t) at time t, the corresponding quantum Boltzmann distribution can be denoted as p_θ(t)^QB. Samples returned by the PQA correspond to samples from distribution p(1). As described above, the distribution p(t̄) can be close to p_θ(t̄)^QB. It can be impractical to obtain samples from the PQA at time t̄, so, in practice, samples are typically obtained from the PQA in its final state after annealing.
Sampling from Intermediate Quantum Boltzmann Distributions Using a Physical Quantum Annealer

A physical quantum annealer (PQA), such as a superconducting quantum processor described in reference to FIGS. 1 and 2, can return samples from the final distribution p(1). It can be beneficial to convert the samples from the final distribution p(1) to good-quality samples from an intermediate distribution p(t̄) ≈ p_θ(t̄)^QB. Good-quality samples are samples meeting a determined threshold for closeness to true samples from a distribution. The good-quality samples can be used in applications requiring samples from a quantum Boltzmann distribution. Furthermore, it can be beneficial to convert the good-quality samples from intermediate distribution p_θ(t̄)^QB to samples from another quantum Boltzmann distribution p_θ′^QB, and/or to samples from a classical Boltzmann distribution. In previous approaches, the samples from the final distribution p(1) obtained from a PQA were treated as though they came from an unknown distribution, and were post-processed (e.g., using a classical Markov Chain Monte Carlo method) to convert them to a classical Boltzmann distribution. A shortcoming of previous approaches is that little or no use is made of the intermediate quantum distribution p_θ(t̄)^QB, which contains global information about the final distribution p(1). Previous classical post-processing methods are local, and generally unable to affect global features of the distribution. Consequently, previous approaches to post-processing of samples obtained from a PQA can misrepresent global features of a classical Boltzmann distribution of interest.

Quantum Monte Carlo Post-Processing

The presently disclosed systems and methods can use Quantum Monte Carlo (QMC) post-processing to correct for local bias in samples returned by a PQA. QMC is a method that can be used to obtain samples from a quantum Boltzmann distribution on a classical computer. QMC post-processing can include taking the final samples x_a, a = 1 . . .
N from the PQA, and initializing MCMC chains with those samples x_a^(0). Here x denotes a quantum state that is represented, for example, as a path configuration of Path Integral QMC. MCMC chains can be evolved using a QMC transition operator corresponding to the distribution of interest, T_θ(t̄)(x^(i), x^(i+1)). The transition operator can satisfy the following detailed balance condition:

p_θ(t̄)^QB(x^(i)) T_θ(t̄)(x^(i), x^(i+1)) = p_θ(t̄)^QB(x^(i+1)) T_θ(t̄)(x^(i+1), x^(i))

Running a QMC chain for long enough can yield samples from distribution p_θ(t̄)^QB. The minimum time needed to obtain such samples starting from random states x_a^(0) can be referred to as an equilibration time. Starting the MCMC chains with PQA samples can reduce the equilibration time. One reason can be that the global features of the distribution p_θ(t̄)^QB are captured more correctly by PQA samples (which can provide relative probabilities of subspaces of the quantum state space). To convert samples x_a into equilibrium samples from p_θ(t̄)^QB, it can be sufficient to equilibrate locally (i.e., within subspaces). Local equilibration can be faster, and can be considered as a post-processing technique. As a result, applying QMC post-processing with M steps can produce good-quality samples x_a^(M) from quantum Boltzmann distribution p_θ(t̄)^QB for relatively small M. In general, the freeze-out point t̄ is unknown. One approach is to choose a freeze-out point, and halt the annealing for a determined time at the freeze-out point, before re-starting the annealing. This approach is referred to as "annealing with pause" or "mid-anneal pause", and is described in International PCT Patent Application Publication No. WO2017075246A1 and U.S. Patent Application Ser. No. 62/331,288 entitled "SYSTEMS AND METHODS FOR DEGENERACY MITIGATION IN A QUANTUM PROCESSOR".
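The post-processing loop above can be sketched in Python. The example below runs short Metropolis chains, a simplified classical stand-in for a QMC transition operator satisfying the same detailed balance condition, seeded with annealer samples rather than random states. The energy function and all names are illustrative assumptions:

```python
import math
import random

def metropolis_postprocess(samples, energy, n_steps, beta=1.0, rng=random):
    """Run a short Metropolis chain (a detailed-balance transition
    operator) from each initial sample, flipping one spin per step.
    Seeding chains with annealer samples, rather than random states, is
    intended to require only local equilibration (small n_steps)."""
    out = []
    for x in samples:
        x = list(x)
        e = energy(x)
        for _ in range(n_steps):
            i = rng.randrange(len(x))       # propose a single spin flip
            x[i] = -x[i]
            e_new = energy(x)
            # Metropolis acceptance preserves the Boltzmann distribution.
            if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
                e = e_new
            else:
                x[i] = -x[i]                # reject: undo the flip
        out.append(tuple(x))
    return out

# Example: two ferromagnetically coupled spins; post-processing pushes
# chains toward the aligned (lower-energy) configurations.
energy = lambda s: -s[0] * s[1]
processed = metropolis_postprocess([(1, -1), (-1, 1)], energy, n_steps=50)
```

A true QMC post-processor would operate on path configurations of Path Integral QMC rather than classical spin vectors, but the chain-seeding structure is the same.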
Another approach to determining t̄ is to compute certain statistics of a quantum Boltzmann distribution p_θ(t)^QB for various points t ∈ [0, 1], and define t̄ as the point where these statistics are closest to the ones computed from samples obtained from a physical quantum annealer and post-processed as described above. Such statistics can include spin and spin-spin expectations, average energy, variance of energy, and other suitable statistics. There can be several points where the statistics are close, and these points correspond to multiple freeze-out points. In one implementation, the first of these points is selected as t̄.

Annealed Importance Sampling to Convert Samples from a Quantum Boltzmann Distribution to Another Boltzmann Distribution

The presently disclosed systems and methods include the use of annealed importance sampling to convert good-quality samples of p_θ(t̄)^QB to samples from another quantum Boltzmann distribution p_θ′^QB. A sequence of intermediate quantum Boltzmann distributions can be generated as follows:

p_θk^QB, k = 1 . . . L, θ_1 = θ(t̄), θ_L = θ′

so that the distributions in every pair of consecutive distributions in the above sequence are sufficiently close to one another. Sufficiently close means that importance sampling of a first distribution from a pair of consecutive distributions in the above sequence can be performed efficiently using samples from the second distribution of the pair. One approach to selecting parameters θ_k is to linearly interpolate between θ(t̄) and θ′, and choose L to be large enough that consecutive distributions are sufficiently close. A sequence of states x⃗ = (x_1, x_2, . . . , x_L) can be sampled from the intermediate distributions with a probability as follows:

P(x_1, x_2, . . . , x_L) = T_θL(x_L, x_{L−1}) . . . T_θ3(x_3, x_2) T_θ2(x_2, x_1) p_θ1(x_1)

A weight can be assigned to each sample as follows:

w(x_1, x_2, . . . , x_L) = [p̃_θ2(x_1)/p̃_θ1(x_1)] · [p̃_θ3(x_2)/p̃_θ2(x_2)] · . . . · [p̃_θL(x_{L−1})/p̃_θ(L−1)(x_{L−1})]

where p̃_θk is an unnormalized probability.
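The sequence sampling and weight accumulation above, together with the effective-sample-size diagnostic discussed in the next passage, can be sketched classically in Python. The Metropolis transition, the toy two-spin energies, and all names are illustrative assumptions; in the disclosed method a QMC transition operator over path configurations would take the place of the classical one:

```python
import math
import random

def metropolis_step(x, E, rng):
    """Single-spin-flip Metropolis move: a detailed-balance transition
    operator for the unnormalized distribution p~(x) = exp(-E(x))."""
    x = list(x)
    i = rng.randrange(len(x))
    old = E(x)
    x[i] = -x[i]
    if E(x) <= old or rng.random() < math.exp(old - E(x)):
        return tuple(x)
    x[i] = -x[i]                      # reject: undo the flip
    return tuple(x)

def ais_pass(x0, energies, rng=random):
    """One annealed importance sampling pass through L distributions
    defined by energy functions E_1..E_L (p~_k = exp(-E_k)). Returns the
    final state x_L and the weight w = prod_k p~_{k+1}(x_k)/p~_k(x_k)."""
    x, log_w = x0, 0.0
    for k in range(len(energies) - 1):
        # log[p~_{k+1}(x_k) / p~_k(x_k)] = E_k(x_k) - E_{k+1}(x_k)
        log_w += energies[k](x) - energies[k + 1](x)
        x = metropolis_step(x, energies[k + 1], rng)
    return x, math.exp(log_w)

def effective_samples(weights):
    """N_eff = (sum w)^2 / sum w^2: equals the sample count for uniform
    weights and collapses toward 1 when one weight dominates."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Linearly interpolate a two-spin coupling from theta_1 = 0.2 to theta_L = 1.0.
L = 10
thetas = [0.2 + (1.0 - 0.2) * k / (L - 1) for k in range(L)]
energies = [(lambda J: (lambda s: -J * s[0] * s[1]))(J) for J in thetas]

results = [ais_pass((1, 1), energies) for _ in range(100)]
weights = [w for _, w in results]
n_eff = effective_samples(weights)    # between 1 and the sample count
```

Increasing L brings consecutive distributions closer together, which keeps the weights more uniform and N_eff higher, mirroring the trade-off described in the text.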
The samples x_L can be used to compute an expected value of a function F(x) as follows:

⟨F(x)⟩_{x∼p_θL} = Σ_{x⃗∼P(x⃗)} w(x⃗) F(x_L) / Σ_{x⃗∼P(x⃗)} w(x⃗)

Efficiency of the approach can be characterized by the number of effective samples, as follows:

N_eff = ( Σ_{x⃗∼P(x⃗)} w(x⃗) )² / Σ_{x⃗∼P(x⃗)} w(x⃗)²

If distributions p_θ(t̄)^QB and p_θ′^QB are sufficiently different from one another, the number of effective samples N_eff can be small enough that the estimator for F(x) has a relatively high variance. Increasing the number of intermediate distributions L can increase the number of effective samples N_eff, and reduce the variance of the estimator for F(x).

Training Quantum Boltzmann Machines and Restricted Boltzmann Machines

FIG. 3 is a flow-diagram that illustrates a method 300 for post-processing samples from a physical quantum annealer, in accordance with the present systems, devices, articles, and methods. One or more of the acts in method 300 may be performed by or via one or more circuits, for instance one or more hardware processors. In some examples, a device including a hybrid computer (such as hybrid computer 100 of FIG. 1) performs the acts in method 300. Method 300 starts at 302, for example in response to an invocation by an invoking program, procedure, routine or function. At 304, a computational system (e.g., hybrid computer 100 of FIG. 1) collects samples from a physical quantum annealer (PQA). At 306, the computational system applies QMC post-processing to the collected samples. If, at 308, the computational system determines the post-processed samples are not for input to a Quantum Boltzmann Machine, method 300 proceeds to 310. At 310, the computational system applies annealed importance sampling (AIS) post-processing to the post-processed samples output from the QMC. At 312, method 300 ends. If, at 308, the computational system determines the post-processed samples are for input to a Quantum Boltzmann Machine, method 300 proceeds to the end at 312.

FIG. 4A is a graph 400a of an evolution of an analog processor over time.
An analog processor may be a quantum processor comprising superconducting qubits and couplers. Vertical axis 402 represents the normalized evolution coefficient s and horizontal axis 404 represents the time of the evolution of the analog processor. The normalized evolution coefficient s may represent the normalized flux applied to a compound Josephson junction or the normalized persistent current I_P of a flux qubit. The normalized evolution coefficient s changes monotonically over time, increasing from 0 to a maximum value of 1. The normalized evolution coefficient can also be referred to as the anneal fraction. The normalized evolution coefficient (or anneal fraction) is a parameter that can vary with time between 0 and 1, and can be used to define an annealing schedule. A person skilled in the art will understand that the rate of change of the normalized evolution coefficient s over time is shown in FIG. 4A for illustration purposes only, and in other implementations the normalized evolution coefficient can increase at a slower or faster rate. In some implementations the normalized evolution coefficient s can change non-linearly. Examples of evolution schedules of analog processors are described in Patent Publication No. US 2015/0363708. Techniques described herein are used to operate a hybrid processor comprising an analog processor and a digital processor, where the normalized evolution coefficient s may increase and/or decrease over the course of the operation of the hybrid processor. For certain operations, it may be desirable to operate the hybrid processor such that the analog processor reaches a predetermined classical spin state at the end of a first or initial evolution. This technique may allow study of problem dynamics, or it may be used for obtaining samples from the analog processor.
FIG. 4B is a graph of an example evolution 400b of an analog processor over time, operating with a digital processor to form a hybrid processor according to the present systems, methods and apparatus. An analog processor may comprise a quantum processor. Vertical axis 402 represents the normalized evolution coefficient s and horizontal axis 404 the time of the evolution of the analog processor. Before the start of example evolution 400b, the hybrid processor may determine a classical spin state and apply one or more preparatory biases to the analog processor to target the evolution of the analog processor towards the classical spin state. Preparatory biases may be applied via the analog processor's circuitry components, for example via on-chip DACs or analog lines. Preparatory biases may influence the evolution of the analog processor towards a classical state. When the analog processor is a quantum processor with n qubits, there are 2^n classical states. In example evolution 400b the normalized evolution coefficient s increases from a value of 0 at time t=0 to a value of 1 at time t_1. A person skilled in the art will understand that the rate of the evolution from time t=0 to t_1 is shown in FIG. 4B for illustration purposes only, and in other implementations the rate of the evolution of the analog processor from 0 to t_1 may be faster or slower than illustrated. At t_1, the evolution is paused until time t_2. During the time interval between t_1 and t_2, shown in FIG. 4B as time interval 406, the digital processor may remove the preparatory biases applied before the start of example evolution 400b. A person skilled in the art will understand that time interval 406 can be dependent, at least in part, on the particular hardware and configuration of the analog processor and the digital processor comprising the hybrid processor. The time taken by the digital processor to reprogram the analog processor and remove the applied preparatory biases may be different than shown in FIG. 4B.
In some implementations, time interval 406 may range, for example, from 100 μs to 200 μs. When the analog processor is a quantum processor, the digital processor may pause the evolution and retain the target classical spin state by keeping the energy barrier of the qubits high. Additionally or alternatively, the hybrid processor may pause the evolution of the analog processor for a time interval longer than needed to reprogram the analog processor, thereby performing other operations, such as readout or post-processing, during time interval 406. After time interval 406, the evolution of the analog processor resumes in a direction opposite the direction before time interval 406, i.e., backwards (also referred to in the present application as in a reverse direction). During this phase, the normalized evolution coefficient s decreases from 1 to a value s* at time t_3. The digital processor may determine the value of s* before the start of example evolution 400b, or during time interval 406. Where the analog processor is a quantum processor, after time interval 406, the energy barriers of the qubits are lowered until an intermediate transverse field and/or tunneling energy is reached. The intermediate transverse field and/or tunneling energy may be determined by the digital processor. After time t_3, the evolution of the analog processor is paused for a time interval 408 (between times t_3 and t_4). Time interval 408 may be determined by the digital processor, either before the start of example evolution 400b or during time interval 406. In some implementations, time interval 408 may, for example, range from 1 μs to several milliseconds. A person skilled in the art will understand that the rate of change of the normalized evolution coefficient s between time t_2 and time t_3 may be the same as the rate of change between 0 and time t_1, or may be different. The digital processor may, for example, determine the rate of change of the normalized evolution coefficient.
After time interval408, the evolution of the analog processor resumes in the same direction as the evolution from 0 to time t1, i.e. the normalized evolution coefficient s increases from value s* to 1 until the analog processor reaches a classical spin state at time t5. Where the analog processor is a quantum processor, the digital processor may raise the energy barriers of the qubits to reach a classical spin state. The classical spin state reached at time t5may not be the same as the classical spin state reached at time t1, given that the preparatory biases have been removed during time interval406. After time t5, the digital processor may read out the classical spin state reached at t5, and may perform post-processing. In an alternative implementation, the hybrid processor performs post-processing on the obtained classical spin states during time interval406using classical methods. Therefore, the evolution of the analog processor is paused for a length of time necessary for the digital processor to perform the post-processing operations. An example of a classical post-processing method is Houdayer cluster moves, performed a predetermined number of times. Other classical post-processing methods can be used. Alternatively, or in addition, post-processing may be used to improve samples obtained by the analog processor at time t1. In an effort to improve the diversity of the samples obtained from the analog processor, the samples obtained at t1can be post-processed as described above and used as feedback to run the evolution of the analog processor one or more times.
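The schedule traced by example evolution400b(forward anneal to s=1, a pause over time interval406, a reverse anneal to s*, a dwell over time interval408, and a final forward anneal to s=1) can be expressed as piecewise-linear (time, s) waypoints. The sketch below is illustrative only: the timing values and the function names are assumptions, not part of the described apparatus.

```python
# Illustrative sketch (not the patent's implementation): the reverse-anneal
# schedule of FIG. 4B as (time, s) waypoints with linear interpolation.

def make_reverse_anneal_schedule(t1, pause1, s_star, t_ramp, pause2, t_final_ramp):
    """Build (time, s) waypoints: forward to s=1 at t1, pause (interval 406),
    reverse to s*, dwell (interval 408), then forward to s=1 (cf. t1..t5)."""
    t2 = t1 + pause1          # end of interval 406 (preparatory biases removed)
    t3 = t2 + t_ramp          # s lowered from 1 to s*
    t4 = t3 + pause2          # end of interval 408
    t5 = t4 + t_final_ramp    # s raised back to 1; classical state read out
    return [(0.0, 0.0), (t1, 1.0), (t2, 1.0), (t3, s_star), (t4, s_star), (t5, 1.0)]

def s_at(schedule, t):
    """Linearly interpolate the normalized evolution coefficient s at time t."""
    for (ta, sa), (tb, sb) in zip(schedule, schedule[1:]):
        if ta <= t <= tb:
            return sa if tb == ta else sa + (sb - sa) * (t - ta) / (tb - ta)
    raise ValueError("t outside schedule")

# Hypothetical timings (microseconds), loosely echoing the ranges in the text:
schedule = make_reverse_anneal_schedule(
    t1=10.0, pause1=150.0, s_star=0.6, t_ramp=5.0, pause2=100.0, t_final_ramp=5.0)
```

Interpolating s at any time then reproduces the pauses (constant-s segments) and ramps of FIG.4B.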
During the time interval406, after the digital processor has completed the post-processing operation, the digital processor can apply preparatory biases to the analog processor using the post-processed samples as input to influence the evolution of the analog processor towards obtaining a more diverse set of samples (e.g., obtaining samples from regions in the energy landscape that had not been previously explored by the analog processor). At time t2, the evolution of the processor resumes backwards (i.e., in reverse) as described above until the normalized evolution coefficient reaches value s* at t3. As noted above, the samples obtained at t5may not be the same as the samples obtained at t1or the post-processed samples at t1. After time t5the digital processor may read out the samples obtained by the analog processor. FIG.5is a graph of an example evolution500of an analog processor operating with a digital processor to form a hybrid processor according to the present systems, methods and apparatus, where the analog processor evolves backwards and forwards over time in the course of an annealing schedule. (Backwards evolution is also referred to as reverse annealing in the present application.) An analog processor may be a quantum processor comprising superconducting qubits and couplers. Vertical axis502represents the normalized evolution coefficient s and the horizontal axis504represents the time of the evolution of the analog processor. Before the start of example evolution500, the digital processor may determine a set of normalized evolution coefficients as follows: s*={s1*,s2*,s3*, . . . ,sn*} Example evolution500resembles example evolution400buntil time t5, as described above. Time interval506is a time interval between t1and t2, as described above with respect to time interval406of example evolution400b. After time interval506, the evolution of the analog processor resumes in a direction opposite the direction before time interval506between t2and t3.
Time interval508is a time interval between t3and t4, as described above with respect to time interval408of example evolution400b. After time interval508and between t4and t5, the evolution of the analog processor resumes in the same direction as the evolution from 0 to time t1. After time t5, the digital processor may read out the state of the analog processor and/or perform post-processing. At time t6, the evolution of the analog processor resumes, and the normalized evolution coefficient s decreases from 1 to value s2* at time t7. Time interval510is a time interval between t5and t6. The value of s2* may be different from the value of s1* and, similarly, the rate of change of the normalized evolution coefficient s between t6and t7may be different from the rate of change of the normalized evolution coefficient s at other times during example evolution500. After a time interval512, the evolution of the analog processor continues until the analog processor reaches a classical spin state at time t9and the normalized evolution coefficient s reaches a value of 1. As noted before with respect to example evolution400b, the classical spin state reached at t9may not be the same as the classical spin state reached at t1and/or t5. After time t9, the evolution of the analog processor is paused for a time interval514, where time interval514may be determined by the digital processor. During time interval514the digital processor may read out the state of the analog processor and/or perform post-processing. After time t10, the evolution of the analog processor resumes in a similar pattern, evolving to a predetermined value of the normalized evolution coefficient s, pausing for a time interval, and resuming until the analog processor reaches a classical spin state, at times t11, t12and t13, respectively. Example evolution500may be used to study particular problem dynamics, or to generate samples from the analog processor.
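The repeated reverse-anneal cycles of example evolution500can be sketched as piecewise-linear (time, s) waypoints generated from the set s* of target coefficients. The timings and function names below are illustrative assumptions, not part of the described apparatus.

```python
# Illustrative sketch (an assumption, not the patent's implementation): the
# multi-cycle schedule of FIG. 5 built from a set of target coefficients s*.

def multi_cycle_waypoints(s_targets, ramp=5.0, pause=50.0):
    """(time, s) waypoints: anneal forward to s=1, then for each s_i* in
    s_targets pause at s=1, reverse to s_i*, dwell, and anneal forward
    again (cf. times t1..t13 of FIG. 5). Timings are hypothetical."""
    t = ramp
    pts = [(0.0, 0.0), (t, 1.0)]
    for s_i in s_targets:
        t += pause
        pts.append((t, 1.0))      # readout / post-processing pause at s=1
        t += ramp
        pts.append((t, s_i))      # reverse anneal down to s_i*
        t += pause
        pts.append((t, s_i))      # dwell at s_i*
        t += ramp
        pts.append((t, 1.0))      # forward anneal back to a classical state
    return pts

pts = multi_cycle_waypoints([0.6, 0.4, 0.7])
```

Each (s_i*, dwell) pair adds one backward/forward cycle, mirroring the pattern at times t5through t13.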
FIG.6illustrates a flow diagram of a computational method600using a hybrid computing system for evolving an analog processor where the analog processor evolves backwards and forwards over time over the course of an annealing schedule. The hybrid computing system comprises a digital processor and a quantum processor. Computational method600comprises acts602to630; however, a person skilled in the art will understand that the number of acts is exemplary and, in some implementations, certain acts may be omitted, further acts may be added, and/or the order of the acts may be changed. Computational method600starts at602, for example in response to a call from another routine. At604, the digital processor determines a classical spin state for the analog processor. A classical spin state is a set of spin configurations as follows: Si∈{−1,+1} Computational method600will initially evolve the analog processor towards this classical spin state. At606, the digital processor receives an Ising problem to be solved via the analog processor. Such an Ising problem may be, for example, an optimization problem or a sampling problem. At608, the digital processor determines a set of preparatory biases that need to be applied to the elements of the analog processor so that the analog processor will evolve towards the classical spin state determined at604. Where the analog processor is a quantum processor, the preparatory biases may be the flux biases applied to some or all of the qubits in the quantum processor. Preparatory biases can influence the evolution of the quantum processor towards the classical spin state determined at604so that the classical spin state is achieved with high fidelity (i.e., the probability of achieving the classical spin state is close to unity, e.g., 0.9999). At610, the digital processor programs the analog processor with the Ising problem received at606. The digital processor will program the h and J values of the Ising problem.
Where the analog processor is a quantum processor, the digital processor will apply h and J values to the qubits and couplers of the quantum processor. At612, the digital processor programs the analog processor with the preparatory biases determined at608. Where the analog processor is a quantum processor, the digital processor may load one or more pulses to the most significant digit of the qubits' flux bias DACs, in addition to the Ising problem bias term h. For example, two or three steps of the most significant digit of the qubits' flux bias DACs may be applied, so that qubits can be biased in the desired direction corresponding to the classical spin state determined at604. At614, the analog processor evolves towards the classical spin state determined at604. The rate of evolution at614may not be constant so that the evolution may be non-linear, e.g., ramping up at a certain stage or pausing before resuming towards the classical spin state. At616, the digital processor latches the state of the analog processor for a first dwell time. Where the analog processor is a quantum processor, the qubits' energy barriers may be kept high for the first dwell time to retain the classical spin state. The digital processor may determine the first dwell time to be at least the time needed to reprogram the analog processor by removing the preparatory biases. In other implementations, the first dwell time may be longer. At618, the digital processor reprograms the analog processor to remove the preparatory biases, and the analog processor is programmed with the Ising problem received at606. The time taken by this operation may depend on the particular configuration of the analog processor and the digital processor.
Where the analog processor is a quantum processor, the digital processor can remove the one or more pulses to the most significant digit of the qubits' flux bias DACs that were applied at612, leaving the bias term h of the Ising problem received at606such that the quantum processor is now programmed with the Ising problem only. At620, the digital processor determines evolution parameters including an intermediate tunneling energy and a second dwell time. The second dwell time can be independent from the analog processor programming time and may be different from the first dwell time. At622, the analog processor evolves in a backward direction until the intermediate tunneling energy is reached. Where the analog processor is a quantum processor, the qubits' energy barriers can be lowered to achieve the intermediate tunneling energy so that qubits in the quantum processor may not be in a classical spin state at622. In some implementations, one or more variables can be clamped in a classical spin state. Where the analog processor is a quantum processor, the clamped variables can each be represented by a respective one or more qubits, and the qubits representing the clamped variables can form a first subset of qubits of the quantum processor. At622, the analog processor can evolve a second subset of qubits of the quantum processor, the second subset excluding qubits in the first subset of qubits (i.e., excluding qubits representing the clamped variables), in a backward direction until the intermediate tunneling energy is reached. The energy barriers of the second subset of qubits can be lowered to achieve the intermediate tunneling energy so that qubits in the second subset of qubits in the quantum processor may not be in a classical spin state at622. At624, the digital processor pauses the analog processor for the second dwell time determined at620. 
Where the analog processor is a quantum processor, the qubits' energy barriers may be kept at the intermediate tunneling energy level so that the evolution of the quantum processor is paused. At626, the analog processor evolves towards a classical spin state. Where the analog processor is a quantum processor, the digital processor raises the energy barrier of the qubits to evolve the quantum processor towards a classical spin state. The classical spin state reached at626may not be the same classical spin state reached at614, owing to the removal of the preparatory biases at618. At628, the digital processor reads out the state of the analog processor. During the read-out operation, the evolution of the analog processor is paused so that it maintains a classical spin state. Several methods may be employed for reading out the state of the analog processor; an example of a method and apparatus for reading out a state of a quantum processor is described in PCT Patent Application PCT/US2016/031885. At630, computational method600ends, for example, until invoked again. FIG.7shows a flow diagram of a computational method700using a hybrid computing system for evolving an analog processor over time, where the analog processor evolves backwards and forwards over the course of an annealing schedule. The hybrid computing system comprises a digital processor and a quantum processor. Computational method700may be implemented as an extension of computational method600ofFIG.6and comprises acts702to732; however, a person skilled in the art will understand that the number of acts is exemplary and, in some implementations, certain acts may be omitted, further acts may be added, and/or the order of the acts may be changed. Computational method700starts at702, for example in response to a call from another routine. At704, the digital processor determines a classical spin state configuration, as described above with reference to604of computational method600.
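The control flow of computational method600(acts604to628), described above, can be outlined in code. The sketch below uses a hypothetical AnalogProcessor interface whose method names are assumptions for illustration, not a vendor API; its evolve_forward() simply adopts the target state so that the example is self-contained.

```python
# Hypothetical sketch of acts 604-628 of computational method 600. The
# AnalogProcessor class is a stand-in for real hardware, not a vendor API.

class AnalogProcessor:
    def __init__(self):
        self.prep_biases = {}
        self.state = None

    def program_ising(self, h, J):            # act 610: program h and J values
        self.h, self.J = dict(h), dict(J)

    def apply_prep_biases(self, biases):      # act 612: flux-bias DAC pulses
        self.prep_biases = dict(biases)

    def remove_prep_biases(self):             # act 618: Ising problem only
        self.prep_biases = {}

    def evolve_forward(self, target):         # acts 614 / 626 (stand-in)
        self.state = dict(target)

    def evolve_backward(self, tunneling):     # act 622: lower energy barriers
        self.tunneling = tunneling

    def readout(self):                        # act 628
        return dict(self.state)

def method_600(proc, h, J, target, biases, tunneling):
    proc.program_ising(h, J)
    proc.apply_prep_biases(biases)
    proc.evolve_forward(target)       # evolve towards the classical state (614)
    proc.remove_prep_biases()         # first dwell time: reprogram (616-618)
    proc.evolve_backward(tunneling)   # reverse to intermediate energy (620-624)
    proc.evolve_forward(target)       # raise barriers to a classical state (626)
    return proc.readout()

result = method_600(AnalogProcessor(),
                    h={"q0": 0.1}, J={("q0", "q1"): -1.0},
                    target={"q0": +1, "q1": -1},
                    biases={"q0": 0.5, "q1": -0.5},
                    tunneling=0.35)
```

On real hardware the state reached after removing the preparatory biases need not equal the target, as the text notes for act 626.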
At706, the digital processor receives an Ising problem to be solved by the analog processor, as described above with reference to606of computational method600. At708, the digital processor determines a set of preparatory biases, as described above with reference to608of computational method600. At710, the digital processor programs the analog processor with the Ising problem, as described above with reference to610of computational method600. At712, the digital processor programs the analog processor with the preparatory biases, as described above with reference to612of computational method600. At714, the analog processor evolves towards the classical spin state, as described above with reference to614of computational method600. At716, the digital processor latches the state of the analog processor for a first dwell time, as described above with reference to616of computational method600. At718, the digital processor reprograms the analog processor to remove the preparatory biases, as described above with reference to618of computational method600. At720, the digital processor determines evolution parameters including an intermediate tunneling energy and a second dwell time, as described above with reference to620of computational method600. At722, the analog processor evolves in a backward direction until the intermediate tunneling energy is reached, as described above with reference to622of computational method600. At724, the digital processor pauses the analog processor for the second dwell time, as described above with reference to624of computational method600. At726, the analog processor evolves towards a classical spin state, as described above with reference to626of computational method600. At728, the digital processor reads out the state of the analog processor. At730, the digital processor determines whether to iterate based on an exit condition.
In response to an exit condition not being met, control proceeds to712, and the digital processor performs a further iteration of acts712to728. At712, the digital processor programs the analog processor with preparatory biases. In response to the exit condition being met, control proceeds to732. An exit condition may comprise iterating for a defined number of times. At732, computational method700terminates, for example, until invoked again. FIG.8is a flow diagram of a computational method800using a hybrid computing system for evolving an analog processor over time, where the analog processor iterates forwards and backwards over the course of an annealing schedule and where the digital processor does not reprogram the analog processor at each iteration. Computational method800comprises acts802to830; however, a person skilled in the art will understand that the number of acts is exemplary and, in some implementations, certain acts may be omitted, further acts may be added, and/or the order of the acts may be changed. Computational method800starts at802, for example in response to a call from another routine. At804, the digital processor determines a classical spin state for the analog processor, as described above with reference to604of computational method600and/or704of computational method700. At806, the digital processor receives an Ising problem to be solved by the analog processor. Such an Ising problem may be, for example, an optimization problem or a sampling problem. The digital processor also receives a set of normalized evolution coefficients: s*={s1*,s2*,s3*, . . . ,sn*} and a set of dwell times: t*={t1*,t2*,t3*, . . . ,tn*} At808, the digital processor determines a set of preparatory biases, as described above with reference to608of computational method600and/or708of method700.
At810, the digital processor programs the analog processor with the Ising problem as described above with reference to610of computational method600and/or710of computational method700. At812, the digital processor programs the analog processor with the preparatory biases as described above with reference to612of computational method600and/or712of computational method700. At814, the analog processor evolves towards the classical spin state, as described above with reference to614of computational method600and/or714of computational method700. At816, the digital processor latches the state of the analog processor for a first dwell time, as described above with reference to616of computational method600and/or716of computational method700. At818, the digital processor reprograms the analog processor to remove the preparatory biases, as described above with reference to618of computational method600and/or718of method700. At820, the analog processor evolves backwards until the normalized evolution coefficient s reaches value s1*, the first value of s in the set s* received by the digital processor at806. Where the analog processor is a quantum processor, the energy barrier is lowered until the normalized evolution coefficient s reaches value s1*. When s has value s1*, qubits in the quantum processor may not be in a classical spin state. At822, the digital processor pauses the quantum processor for a dwell time t1*, where t1* is the first value in the set of dwell times t* received by the digital processor at806. At824, the analog processor evolves towards a classical spin state. Where the analog processor is a quantum processor, the digital processor raises the energy barrier of the qubits to evolve the quantum processor towards a classical spin state. The classical spin state reached at824may not be the same classical spin state reached at814. At826, the digital processor reads out the state of the analog processor.
At828, the digital processor determines whether to iterate based on an exit condition. In response to an exit condition not being met, control proceeds to820, and the digital processor iterates through acts820to826of method800. At820, the analog processor evolves backwards until the next value s* is reached (e.g., if in the previous iteration the backward anneal was paused at s3*, in the current iteration the backward anneal will pause at s4*). Similarly, at822, the digital processor latches the state of the analog processor for a dwell time corresponding to the next value of t* (e.g., if in the previous iteration the state of the analog computer was latched for dwell time t3*, in the current iteration the state of the analog computer will be latched for a dwell time t4*). In response to the exit condition being met, control proceeds to830. An exit condition may comprise iterating for a defined number of times. For example, computational method800may iterate n times, where n is the size of the sets s* and t*. At830, computational method800terminates, for example, until invoked again. Compared to computational method700, the time taken to reprogram the processor with the Ising problem and the preparatory biases can be saved, or at least reduced, in computational method800. Computational method800can have a faster execution cycle than that of computational method700. Computational methods600,700and/or800may be used to operate a hybrid computer for studying problem dynamics, where an initial classical spin state is determined and a study of the escape rate out of the classical spin state at different points during an annealing schedule is desired. Alternatively, or additionally, computational methods700and/or800may be used for sampling. By repeating backward and forward annealing, the analog processor can explore neighborhoods close to the initial classical spin configuration.
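The inner loop of computational method800(acts820to826) amounts to one backward/dwell/forward cycle per pair of values from the sets s* and t*, with no reprogramming of the Ising problem between cycles. The sketch below is an illustrative assumption; the anneal_cycle callback stands in for the analog hardware.

```python
# Illustrative sketch of method 800's iteration over paired sets s* and t*.

def method_800_inner(s_targets, dwell_times, anneal_cycle):
    """Run one backward/dwell/forward cycle per (s_i*, t_i*) pair
    (acts 820-826) and collect the readout after each cycle."""
    assert len(s_targets) == len(dwell_times)
    readouts = []
    for s_i, t_i in zip(s_targets, dwell_times):
        readouts.append(anneal_cycle(s_i, t_i))
    return readouts

# A stand-in cycle that records its parameters in place of real hardware:
log = method_800_inner([0.5, 0.6, 0.7], [10.0, 20.0, 30.0],
                       lambda s, t: {"s": s, "dwell": t})
```

In a real system anneal_cycle would lower the energy barriers to s_i*, hold for t_i*, raise the barriers, and return the classical spin state read out at826.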
For example, computational methods700and/or800may be used for approximating Boltzmann sampling distributions. Read out may be performed at each iteration of computational methods700and/or800(acts728and826, respectively), or the state of the analog processor may be temporarily stored in a non-transitory memory storage device until the end of the iterations, and the digital processor may read out all the classical spin states at the end of computational methods700and/or800, respectively. Where the analog processor is a quantum processor, a quantum flux parametron (QFP) shift register, with n QFPs per qubit, where n is the number of iterations of computational methods700and/or800, may store all the classical spin states obtained during the execution of computational methods700and/or800. A person skilled in the art may understand that the hybrid computing system may operate computational methods700and800incrementally, wherein computational method700constitutes an outer loop and acts820-826of computational method800constitute an inner loop. FIG.9is a graph of an example evolution900of an analog processor, operating with a digital processor to form a hybrid processor according to the present systems, methods and apparatus, where the analog processor evolves forwards and backwards over several intervals. An analog processor may comprise a quantum processor. Vertical axis902represents the normalized evolution coefficient s and the horizontal axis904represents the time of the evolution of the analog processor. Before the start of the evolution the digital processor may program a problem onto the analog processor. Where the analog processor is a quantum processor, the digital processor may, for example, assign bias and coupling strengths to some, or all, of the qubits and couplers of the quantum processor. The digital processor determines an annealing schedule for the analog processor (e.g., the digital processor may determine the rate of the anneal).
In example evolution900, the normalized evolution coefficient s increases from 0 to a value s1in time t1. A person skilled in the art will understand that the rate of the evolution from 0 to t1is shown inFIG.9for illustration purposes only and in other implementations the rate of the evolution of the analog processor from 0 to s1may be faster or slower than illustrated. In addition, where the analog processor is a quantum processor, some of the qubits in the quantum processor may have a different annealing rate than other qubits or they may start annealing at a later time. At time t1the digital processor programs the analog processor with a first candidate annealing schedule. The first candidate annealing schedule may be the same as the initial annealing schedule determined by the digital processor before the start of the evolution. At time t1the evolution of the analog processor may be paused for a time necessary to program the candidate annealing schedule (not shown inFIG.9) or for other purposes, before resuming until the normalized evolution coefficient s reaches value s2at time t2, where s2>s1. The values s1and s2, and/or t1and t2, may be determined by the digital processor before the start of the evolution and may be determined, at least in part, by the class of problem that is to be programmed into the analog processor. At time t2the evolution of the analog processor proceeds in an opposite direction (i.e., backwards) with respect to the direction of the evolution up to time t2. At time t3the normalized evolution coefficient s decreases to value s1. At time t3the digital processor programs the analog processor with a second candidate annealing schedule that may be different from the first candidate annealing schedule. The evolution of the analog processor may be paused for the time needed to program the second candidate annealing schedule into the analog processor.
After time t3the evolution of the analog processor proceeds in the first direction (i.e., forward) until a time t4when the normalized evolution coefficient s reaches value s2again, before proceeding in the opposite direction (i.e., backwards) until the normalized evolution coefficient reaches value s1at time t5. At times t2and t4, the digital processor may read out the spin configurations of the analog processor. While inFIG.9the evolution of the analog processor is shown to move forward and backwards between the values s1and s2two times, a person skilled in the art will understand that the analog processor may evolve between s1and s2more than two times, or only once. Similarly, the evolution of the analog processor proceeds forwards and backwards between s2and s3, and successively between values of the normalized evolution coefficient s3and 1. Although inFIG.9the evolution of the analog processor is shown to proceed forwards and backwards between three values s1-s3of the normalized evolution coefficient s, a person skilled in the art will understand that the evolution of the analog processor may proceed as described between fewer than three or more than three values of the normalized evolution coefficient s. At time t14, after intervening times t6through t13, the analog processor can have tried a number of candidate annealing schedules between intervals of the normalized evolution coefficient s to attempt to find an optimal annealing schedule for each interval. Depending on the problem, or problem class, to be solved by the analog processor, a specific annealing schedule may be more suited than others to find, for example, a more diverse set of samples, a solution with a lower energy, or a solution that requires less post-processing.
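The interval-by-interval trial of candidate annealing schedules illustrated by example evolution900can be sketched as a nested loop over intervals of s and candidate schedules. The names below are illustrative assumptions; run_interval stands in for programming the analog processor, evolving forward over the interval, and reading out a spin configuration.

```python
# Illustrative sketch (not the patent's implementation) of trying candidate
# annealing schedules on successive intervals of s, as in FIG. 9.

def trial_schedules(boundaries, candidates, run_interval):
    """boundaries: normalized-evolution-coefficient values [0, s1, ..., 1].
    candidates[i]: candidate schedules for [boundaries[i], boundaries[i+1]].
    run_interval(i, schedule): evolve forward over interval i under the
    schedule and return a readout; between candidates the processor would
    evolve backwards to boundaries[i] before the next trial."""
    readouts = []
    for i in range(len(boundaries) - 1):
        for schedule in candidates[i]:
            readouts.append((i, schedule, run_interval(i, schedule)))
    return readouts

out = trial_schedules([0.0, 0.3, 0.7, 1.0],
                      [["fast", "slow"], ["pause-mid"], ["ramp"]],
                      lambda i, sched: "readout-%d-%s" % (i, sched))
```

The collected readouts could then be compared per interval to select the schedule yielding, for example, the most diverse samples or the lowest-energy solution.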
FIG.10is a flow diagram illustrating a computational method1000using a hybrid computing system for evolving an analog processor over time, where the analog processor evolves forwards and backwards over intervals of the normalized evolution coefficient s in an attempt to determine a more suitable annealing schedule for each interval. Computational method1000comprises acts1002to1020; however, a person skilled in the art will understand that the number of acts is exemplary and, in some implementations, certain acts may be omitted, further acts may be added, and/or the order of the acts may be changed. Computational method1000starts at1002, for example in response to a call from another routine. At1004the digital processor initializes a counter i to an initial value i=0 and determines the number of intervals of the normalized evolution coefficient s for which method1000should run. The number of intervals can determine the value of the counter i. The digital processor may also determine a set of candidate annealing schedules for each interval of s. Alternatively, a set of candidate schedules may be determined by a separate process or routine, and passed to the digital processor as an input to computational method1000, or the set of candidate schedules for each interval i+1 can be determined after the candidate schedule for interval i has been computed in an iteration of computational method1000. At1006the digital processor programs the analog processor with one of the candidate annealing schedules for the interval [si, si+1]. At1008the analog processor starts an evolution in a first direction following the candidate annealing schedule programmed at1006until the normalized evolution coefficient reaches value si+1. Typically, the first direction is a forward direction (i.e. towards s=1). At1010the digital processor reads out the state of the analog processor. The digital processor may store this information in a memory element for future calculation and/or comparison.
In some implementations, the analog processor may need to evolve until s=1 before the digital processor can read out the state of the analog processor. In evolving until s=1, the analog processor may evolve faster than the evolution at1008. In some cases, the analog processor may follow a ramp to s=1. At1012the digital processor determines whether an exit condition has been met. In one implementation, the exit condition is completion of a defined number of iterations, the number of iterations corresponding to the number of candidate annealing schedules for the interval [si, si+1]. In this implementation, the exit condition ensures that all the candidate annealing schedules have been tried. If the exit condition has been met, control passes to1016, otherwise to1014. At1014, evolution of the analog processor proceeds in the opposite direction (typically in the backwards or reverse direction, i.e., away from s=1) until the normalized evolution coefficient decreases to value siagain. After1014, control returns to1006, where the digital processor programs the analog processor with a different one of the candidate schedules determined at1004. At1016, the digital processor determines whether for the current value of the counter i the condition si+1=1 is met. If the condition is met, evolution of the analog processor has reached the end, computational method1000has iterated over the previous intervals of s, and method1000proceeds to1020. Computational method1000terminates at1020, until it is invoked again, for example. Alternatively, before terminating execution of computational method1000, the digital processor may determine a more suitable annealing schedule for each interval, based at least in part on the readout information collected at1010, and program the analog processor to evolve according to the more suitable annealing schedule for each interval. If the condition is not met, control passes to1018.
At1018, the digital processor increments the value of the counter i to i+1, progressing execution of computational method1000to the next interval of s. Control then passes to1006, where the digital processor programs the analog processor with one of the candidate annealing schedules for the interval [si, si+1] determined at1004. An analog processor may solve a computational problem involving more than one variable. Where the analog processor is a quantum processor, the variables of the problem may be represented by qubits. Depending on the problem and the topology of the quantum processor, one or more variables in the problem may be represented in the quantum processor by more than one qubit. For example, one variable may be represented by a group of two or more qubits that are influenced to behave as a single qubit by programmable couplers. Such groups of qubits are commonly referred to as chains. These chains are distinct from the QMC chains and MCMC chains described above. The qubits in a chain can take the same spin as each other at the end of the evolution of the quantum processor (either spin up or spin down). The digital processor may assign a coupling strength to the programmable couplers so that the qubits in the chain behave as a single qubit. The stronger the chain, the more likely it is that the qubits in the chain can behave as a single qubit. The coupling strength, or chain strength, may vary over the evolution of the quantum processor. For example, the chain strength may be defined as a ratio of the coupling strength of qubits in the chain to the strongest logical graph coupling. FIG.11is a graph of an exemplary variation of chain strength over the course of an evolution of a hybrid computing system comprising a digital processor in communication with an analog processor.
In particular,FIG.11shows an exemplary variation of chain strengths1102a,1102b,1102c, and1102d(collectively1102) of four variables over the course of an evolution of an analog processor. The vertical axis1104represents the chain strengths of the four variables and the horizontal axis1106represents the time of the evolution of the analog processor. One or more variables may be represented in a quantum processor by more than one qubit, so that the chain strength of one variable may be the combination of the coupler strengths of the couplers connecting the qubits in the chain. InFIG.11, chain strength1102aremains approximately constant over the course of the evolution of the analog processor, while chain strengths1102b,1102c, and1102dvary. A person skilled in the art will understand that the analog processor may be representing a computational problem with more than four variables, or with fewer than four variables, and the chain strength of each variable may be approximately constant, or vary over the course of the evolution, or be approximately constant for a phase of the evolution and vary over another phase of the evolution. The digital processor in the hybrid computing system programs the chain strength of each chain at the beginning of the evolution, thus setting the chain strength to an initial value. However, during the evolution, some of the chain strengths may vary, depending on, for example, external influence or noise in the analog processor. Therefore, the initial value of the chain strength may not be an optimal value, or a preferred value, throughout the evolution. To compensate for variation in chain strength, the digital processor may pause the evolution of the analog processor at predetermined times to read out the chain strength of the variables. InFIG.11, the digital processor may read out the chain strength of the four variables at time intervals of Δt, at times t1to t8.
While in FIG. 11 the digital processor reads out the values of the chain strengths at regular time intervals, a person skilled in the art will understand that in a different implementation, the digital processor may read out the values of the chain strengths at irregular (or unequal) intervals. FIG. 12 is a flow diagram of a computational method 1200 using a hybrid computing system for evolving an analog processor, where the chain strengths of the variables in the analog processor change over the course of the evolution and where the digital processor determines a more suitable, or preferred, chain strength for each problem variable, and for each interval, based on reading out the chain strengths during the evolution. The analog processor may be a quantum processor. Computational method 1200 comprises acts 1202 to 1220; however, a person skilled in the art will understand that the number of acts is exemplary and, in some implementations, certain acts may be omitted, further acts may be added, and/or the order of the acts may be changed. Computational method 1200 starts at 1202, for example in response to a call from another routine. At 1204, the digital processor initializes a counter i to an initial value i=0. The digital processor determines a number of intervals of the normalized evolution coefficient for which method 1200 should run. The number of intervals determines the maximum value of the counter i. At the same time, the digital processor may program other elements of the analog processor in order to prepare the analog processor for solving a computational problem. The digital processor also determines, based at least in part on the problem to be solved, a set of candidate chain strengths for each interval.
Alternatively, the set of candidate chain strengths may be determined by a separate routine and passed as an input to computational method 1200, or the set of candidate chain strengths for each interval i+1 can be determined after the candidate chain strengths for interval i have been evaluated via an iteration of computational method 1200. The chain strength for each chain can depend, at least in part, on the problem to be solved by the analog processor and/or, in the case of a quantum processor, on the number of qubits comprising each variable and/or other characteristics of the quantum processor. At 1206, the digital processor programs the analog processor with one of the candidate chain strengths from the set of candidate chain strengths. The digital processor may also program other parameters of the analog processor, as necessary. At 1208, the evolution of the analog processor starts and continues until the normalized evolution coefficient reaches a value si+1. At 1210, the digital processor reads out the values of the chain strengths of the variables represented by chains in the analog processor. At the same time, the digital processor may read out values of other elements of the analog processor. To do so, the evolution of the analog processor may pause for the time necessary to carry out the readout operation. Additionally, or in the alternative, the analog processor may evolve at a faster rate, or ramp up the evolution, until the normalized evolution coefficient reaches a value of 1 before carrying out the readout operation. At 1212, the digital processor determines whether an exit condition has been met. If the exit condition has been met, control passes to 1216; otherwise, control passes to 1214. An exit condition may be, for example, the completion of a number of iterations, where the number of iterations corresponds to the number of candidate chain strengths for each interval of the normalized evolution coefficient.
An alternative exit condition may be reaching a threshold of a performance measure. At 1214, the evolution of the analog processor proceeds in the opposite direction (i.e., in a backwards or reverse direction; the reverse evolution is also referred to in the present application as a reverse anneal) until the value of the normalized evolution coefficient s decreases to si. Control then passes to 1206, where the digital processor programs the analog processor with another one of the candidate chain strengths from the set of candidate chain strengths. The digital processor may also program other parameters of the analog processor, as desired. Alternatively, or in addition, the digital processor may program the chain strengths based on a combination of another one of the candidate chain strengths and the readout at 1210. At 1216, the digital processor determines whether the value si+1=1, in which case the analog processor will have tried all the candidate chain strengths for all the intervals of the normalized evolution coefficient. Upon meeting this condition, control passes to 1220; otherwise, control passes to 1218. At 1218, the digital processor increments the value of the counter i to i+1, progressing the execution of computational method 1200 to the next interval of s. Control then passes to 1206, where the digital processor programs a candidate chain strength for the next interval of s. At 1220, computational method 1200 terminates, until it is invoked again. Alternatively, before terminating execution of computational method 1200, the digital processor may determine a more suitable chain strength for each interval, based at least in part on the readout information collected at 1210, and program the analog processor to evolve according to the more suitable chain strength for each interval of s.
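The control flow of method 1200 can be sketched as a nested loop over intervals and candidate chain strengths, with a reverse anneal between candidates. This is a hedged sketch: `program_chain_strength`, `anneal_forward`, `anneal_reverse`, and `read_chain_strengths` are hypothetical method names standing in for whatever interface a given hybrid computing system exposes.

```python
class StubProcessor:
    """Stand-in for an analog processor; records the operations applied."""
    def __init__(self):
        self.events = []
    def program_chain_strength(self, strength):
        self.events.append(("program", strength))
    def anneal_forward(self, s):
        self.events.append(("forward", s))
    def anneal_reverse(self, s):
        self.events.append(("reverse", s))
    def read_chain_strengths(self):
        return {}

def run_method_1200(intervals, candidate_sets, proc):
    """For each interval [s_i, s_i+1], try each candidate chain strength:
    program it (1206), anneal forward to s_i+1 (1208), read out (1210),
    then reverse anneal back to s_i (1214) before the next candidate."""
    readouts = []
    for i, (s_start, s_end) in enumerate(intervals):
        candidates = candidate_sets[i]
        for k, strength in enumerate(candidates):
            proc.program_chain_strength(strength)
            proc.anneal_forward(s_end)
            readouts.append((i, strength, proc.read_chain_strengths()))
            if k < len(candidates) - 1:   # exit condition (1212) not yet met
                proc.anneal_reverse(s_start)
    return readouts

proc = StubProcessor()
out = run_method_1200([(0.0, 0.5), (0.5, 1.0)], [[0.5, 1.0], [1.0]], proc)
print(len(out))  # one readout per candidate per interval → 3
```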
FIG. 13 is a flow diagram of a computational method 1300 using a hybrid computing system for evolving an analog processor, where the analog processor evolves backwards and forwards to mitigate the effect of broken chains. When a problem requires chains to be programmed into the specific topology of the quantum processor, it may occur that at the end of the evolution of the quantum processor one or more chains contain qubits whose spins do not agree, e.g., some qubits in a single chain have spin up while other qubits in the same chain have spin down. In this case the chain is said to be broken. Solutions containing broken chains may not be meaningful and may require post-processing. Additionally or in the alternative, repeatedly solving the same problem with the quantum processor may help mitigate the effect of the broken chains. Computational method 1300 attempts to mitigate the effect of broken chains in a solution by evolving only the broken chains in a quantum processor backwards and then evolving them forward. Computational method 1300 comprises acts 1302 to 1318; however, a person skilled in the art will understand that the number of acts is exemplary and, in some implementations, certain acts may be omitted, further acts may be added, and/or the order of the acts may be changed. Computational method 1300 starts at 1302, for example in response to a call from another routine. At 1304, the analog processor solves a problem by evolving and reaching a solution. The digital processor may program the problem into the analog processor by programming the parameters of the analog processor, including chain strength. At 1306, the digital processor reads out the state of the analog processor. Where the analog processor is a quantum processor, the digital processor reads out the spin configuration of the qubits. At 1308, the digital processor determines if there are any broken chains in the solution read out at 1306.
Where the analog processor is a quantum processor, the digital processor checks if the spin configurations of the qubits within a chain agree with one another. The digital processor can check one or more of the chains. If there are no broken chains in the solution read out at 1306, control passes to 1310; otherwise, control passes to 1312. At 1312, the digital processor sets or programs the state of the analog processor so that unbroken chains are held fixed. Where the analog processor is a quantum processor, the digital processor can program the quantum processor by setting the spin configurations of qubits not belonging to a broken chain as fixed. At 1314, the evolution of the analog processor proceeds backwards so that the value of the normalized evolution coefficient s decreases. Given that the state of the unbroken chains has been set at 1312, only a portion of the analog processor will evolve backwards. The digital processor determines when to pause the backwards anneal of the analog processor, aiming to pause it at a time when the chains were unbroken. In order to do so, the digital processor may have to read out the state of the analog processor one or more times at various points during the backwards (reverse) anneal. At 1316, evolution of the analog processor proceeds forward, until the end of the evolution, when the normalized evolution coefficient s reaches a value of 1. Given that unbroken chains were held fixed at 1312, only the portion of the processor that had broken chains at 1308 evolves forward. At 1318, the digital processor reads out the state of the analog processor. After 1318, control passes back to 1308, where the digital processor determines if there are broken chains in the solution read out at 1318. Method 1300 iterates until there are no broken chains in the solution. At 1310, computational method 1300 terminates until, for example, it is invoked again. Alternatively, computational method 1300 may be implemented without evolving the analog processor backwards.
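The broken-chain check at 1308 can be sketched as follows; the `embedding` map and the qubit-indexed spin map are illustrative data structures for this example, not a fixed readout format of any particular processor.

```python
def broken_chains(spins, embedding):
    """Return the logical variables whose chains are broken, i.e. whose
    qubits do not all take the same spin in the readout."""
    return [v for v, qubits in embedding.items()
            if len({spins[q] for q in qubits}) > 1]

# Variable "a" is chained on qubits 0-1, "b" on qubits 2-4.
embedding = {"a": [0, 1], "b": [2, 3, 4]}
spins = {0: 1, 1: 1, 2: 1, 3: -1, 4: 1}   # qubit 3 disagrees: chain "b" broken
print(broken_chains(spins, embedding))  # → ['b']
```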
At 1314, the analog processor, instead of evolving backwards, may start a new evolution cycle for the broken chains only, where the unbroken chains are held fixed as set at 1312. As described above, the digital processor can, at each iteration, check if there are broken chains in a solution obtained by the analog processor, and repeat evolving the broken chains until a result is produced with no broken chains. Alternatively, the digital processor may determine to stop the iterations based on the completion of a defined number of iterations. Other methods exist for post-processing samples obtained with an analog processor. Some post-processing techniques are described in US Patent Publication No. US2015363708, U.S. Pat. Nos. 7,307,275, 8,244,650, and 8,494,993, and US Patent Publication No. 20160071021. Some approaches to post-processing are classical, i.e., they use a digital processor to post-process samples from an analog processor. As described in the present application, other approaches use a hybrid computing system, the hybrid computing system comprising an analog processor and a digital processor, where the post-processing operation is performed on the analog processor. Where the analog processor is a quantum processor that produces a sample s for a problem Hamiltonian (h, J), where h is a bias applied to the qubits and J is a coupling strength, embedded on the quantum processor hardware with Hamiltonian (h′, J′), some of the chains in the sample s may be broken. Suppose that variables b1, . . . , bn have broken chains (i.e., not all the qubits in bi take the same spin) and the variables a1, . . . , am do not have broken chains. The digital processor may construct a post-processing Hamiltonian (h(s), J(s)) to be solved by the quantum processor as follows. For the unbroken chain corresponding to variable ai, spin s(ai) can be defined as the unique spin value obtained by the qubits in the chain.
Given that ai represents an unbroken chain, qubits in ai take the same spin value as each other at the end of an evolution cycle. For each variable bi corresponding to a broken chain, it is possible to define bi+ and bi− as the sets of qubits within bi with up and down spins, respectively. Accordingly, it is possible to define ci(j) as the connected components of the chain corresponding to bi. That is, each ci(j) is a maximal subset of the chain corresponding to bi such that every qubit in ci(j) has the same spin, and if there is a coupler between ci(j) and ci(k), then the spins in ci(j) and ci(k) have opposite values. Similarly, it is possible to define s(ci(j)) to be the unique spin taken by all the qubits in the connected component ci(j). The post-processing Hamiltonian (h(s), J(s)) is defined with variables v1, . . . , vN corresponding to the connected components ci(j) of broken chains. The digital processor, after analyzing the sample s and determining that it contains broken chains, programs into the quantum processor qubit biases hx(s) and interactions between variables Jxy(s) as follows. The qubit biases hx(s), where vx is the variable corresponding to the chain component ci(j), are defined as

hx(s) = Σ_{q ∈ ci(j)} h′q + Σ_{k=1}^{m} Σ_{p ∈ ak} Σ_{q ∈ ci(j)} J′pq s(ak)

In hx(s) the qubit biases are collected along the chain component and the contributions from the unbroken chains are moved into the spin biases. The two-term interactions between variables vx and vy, corresponding to chain components ci(j) and ck(l), are given by the problem interactions between the two chain components in the embedded Hamiltonian (h′, J′):

Jxy(s) = Σ_{p ∈ ci(j)} Σ_{q ∈ ck(l)} J′pq

The broken chains can then be post-processed by the quantum processor via the post-processing Hamiltonian (h(s), J(s)). The approach may be repeated for each of the samples s obtained by the quantum processor.
An implementation of the above described technique to produce a post-processing Hamiltonian is shown below:

input : embedding, h, J, embeddedh, embeddedJ, sample
output: sampleh, sampleJ

N ← 0
for each variable v do
    if all qubits in embedding(v) have the same spin in sample then
        s(v) ← sample(v)
    else
        s(v) ← 0
        for each connected component c of embedding(v) do
            component(N) ← c; variable(N) ← v; N ← N + 1
        end
    end
end
for i ← 1, ..., N do
    sampleh(i) ← 0
    for each qubit q in component(i) do
        sampleh(i) ← sampleh(i) + embeddedh(q)
        for each variable v do
            for each qubit p in embedding(v) do
                sampleh(i) ← sampleh(i) + embeddedJ(p, q) * s(v)
            end
        end
    end
    for j ← 1, ..., N, j ≠ i do
        sampleJ(i, j) ← 0
        for each qubit p in component(i) do
            for each qubit q in component(j) do
                sampleJ(i, j) ← sampleJ(i, j) + embeddedJ(p, q)
            end
        end
    end
end
return sampleh, sampleJ

The above described method(s), process(es), or technique(s) could be implemented by a series of processor readable instructions stored on one or more nontransitory processor-readable media. Some examples of the above described method(s), process(es), or technique(s) are performed in part by a specialized device such as an adiabatic quantum computer or a quantum annealer, or a system to program or otherwise control operation of an adiabatic quantum computer or a quantum annealer, for instance a computer that includes at least one digital processor. The above described method(s), process(es), or technique(s) may include various acts, though those of skill in the art will appreciate that in alternative examples certain acts may be omitted and/or additional acts may be added. Those of skill in the art will appreciate that the illustrated order of the acts is shown for exemplary purposes only and may change in alternative examples.
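A runnable rendering of this listing in Python is sketched below. The data layout (dicts keyed by variable, qubit, and qubit pair) is an assumption for illustration. Connected components are found among same-spin qubits joined by chain couplers, and the per-variable spin s(v) is zero for broken chains, so their terms drop out of the folded biases as in the equations above.

```python
def _J(J, p, q):
    # Symmetric lookup of a coupling, defaulting to 0 when absent.
    return J.get((p, q), J.get((q, p), 0.0))

def _same_spin_components(qubits, J, sample):
    # Connected components of a chain: qubits joined by a coupler and
    # sharing the same spin belong to the same component.
    adj = {q: [] for q in qubits}
    for a in qubits:
        for b in qubits:
            if a < b and _J(J, a, b) != 0 and sample[a] == sample[b]:
                adj[a].append(b)
                adj[b].append(a)
    seen, comps = set(), []
    for q in qubits:
        if q in seen:
            continue
        stack, comp = [q], []
        seen.add(q)
        while stack:
            x = stack.pop()
            comp.append(x)
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        comps.append(comp)
    return comps

def post_process_hamiltonian(embedding, embeddedh, embeddedJ, sample):
    """Build (sampleh, sampleJ) over the connected components of broken chains."""
    s, components = {}, []
    for v, qubits in embedding.items():
        spins = {sample[q] for q in qubits}
        if len(spins) == 1:
            s[v] = spins.pop()   # unbroken chain: its unique spin
        else:
            s[v] = 0             # broken chain: contributes components
            components.extend(_same_spin_components(qubits, embeddedJ, sample))
    sampleh, sampleJ = {}, {}
    for i, comp in enumerate(components):
        # Collect biases along the component and fold couplings to
        # unbroken chains into the bias (s(v) = 0 removes broken terms).
        sampleh[i] = sum(embeddedh.get(q, 0.0) for q in comp)
        for v, qubits in embedding.items():
            for p in qubits:
                for q in comp:
                    sampleh[i] += _J(embeddedJ, p, q) * s[v]
        for j, other in enumerate(components):
            if j != i:
                sampleJ[(i, j)] = sum(_J(embeddedJ, p, q)
                                      for p in comp for q in other)
    return sampleh, sampleJ

# Chain "a" (qubits 0, 1) is unbroken; chain "b" (qubits 2, 3) is broken.
embedding = {"a": [0, 1], "b": [2, 3]}
embeddedh = {2: 0.25, 3: -0.25}
embeddedJ = {(0, 1): -1.0, (2, 3): -1.0, (1, 2): 0.5}
sample = {0: 1, 1: 1, 2: 1, 3: -1}
print(post_process_hamiltonian(embedding, embeddedh, embeddedJ, sample))
```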
Some of the exemplary acts or operations of the above described method(s), process(es), or technique(s) are performed iteratively. Some acts of the above described method(s), process(es), or technique(s) can be performed during each iteration, after a plurality of iterations, or at the end of all the iterations. The above description of illustrated embodiments, including what is described in the Abstract, is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Although specific embodiments of and examples are described herein for illustrative purposes, various equivalent modifications can be made without departing from the spirit and scope of the disclosure, as will be recognized by those skilled in the relevant art. The teachings provided herein of the various embodiments can be applied to other analog processors, not necessarily the exemplary quantum processors generally described above. The various embodiments described above can be combined to provide further embodiments. To the extent that they are not inconsistent with the specific teachings and definitions herein, all of the US patents, US patent application publications, US patent applications, referred to in this specification and/or listed in the Application Data Sheet, including U.S. Provisional Patent Application No. 62/347,421, filed Jun. 8, 2016; U.S. Provisional Patent Application No. 62/364,169, filed Jul. 19, 2016; and U.S. Provisional Patent Application No. 62/417,940, filed Nov. 4, 2016 are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary, to employ systems, circuits and concepts of the various patents, applications and publications to provide yet further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. 
In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
11861456

DETAILED DESCRIPTION

Embodiments of the present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the disclosure are shown. Indeed, embodiments of the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout.

Overview

Quantum computers manipulate qubits for use in various computing actions. In one such context, quantum computers utilize qubits as inputs to one or more logic gates configured for performing logical operations based on the states of the input qubits. Configurations of logic gates may be combined in a myriad of ways in a quantum program, for example where the quantum program is specifically configured for reaching a desired result. One such environment in which qubits are manipulated is a one-dimensional quantum computing environment. In a one-dimensional quantum computing environment, qubit movement is constrained to a single dimension, meaning that qubits can move either forward or backward along this dimension. One example of a one-dimensional quantum computing environment is where qubits are linearly arranged within a confinement apparatus of a quantum computer, for example at one or more predefined regions of an ion trap while maintaining ion positions in a well-defined order. In some such contexts, the quantum computer is configured for performing logical operations (e.g., logic gates) utilizing input qubits that are adjacent to one another. Accordingly, in the one-dimensional arrangement of qubits, only those that are positioned at adjacent positions within the qubit ordering may be input to the same logic gate.
Additionally, in some such implementations, each region may be configured to enable one or more qubits to be located at that region while maintaining well-defined positions within the qubit ordering, such that no two qubits may share a single position within the ordering. Non-limiting examples of such implementations include transportable trapped ion quantum computers, such as quantum charge coupled device(s) that require physically adjacent pairs of qubits within the same gating zone (e.g., adjacent positions within the computing environment) for gating at the appropriate time slice. The present disclosure includes embodiments for all one-dimensional quantum computing environments, regardless of various factors associated with such a quantum computing environment. For example, the present disclosure includes embodiments for all one-dimensional quantum computing environments regardless of qubit type and regardless of the particular swap operation available so long as the swap operation occurs between nearest neighbor sites (e.g., adjacent qubit position(s) in the qubit ordering) in the one-dimensional quantum computing environment. Non-limiting examples include one-dimensional arrays of superconducting qubits, quantum dot qubits, neutral atom qubits, photonic qubits, any qubit type where the adjacent swaps are quantum swap gates, one-dimensional arrays of logical qubits formed from a collection of any physical qubit type where the swap operation is a logical swap gate or swaps of batches of physical qubits between nearest neighbor blocks of qubits, one-dimensional arrays of qubits within a larger and/or higher-dimensional arrangement of qubits regardless of qubit type and swap-operation implementation, and/or the like. 
Thus, it should be appreciated that the disclosure should not be limited to any particular quantum computing environment, any particular one-dimensional quantum computing environment implementation, qubit implementation, swap operation implementation, and/or any combination thereof. Additionally, in some implementations, the physical structure of the quantum computer minimizes the ability for qubits to be readily moved across long distances. For example, in the one-dimensional arrangement, qubits may be limited to movements only to regions adjacent to the qubit's current region or to the next occupied region (e.g., the next region having another qubit located therein) or the next unoccupied region (e.g., the next region not having a qubit located therein). In this regard, qubits may switch positions in a well-defined order with a qubit one position higher or one position lower in the order, such that regardless of the distance between qubits the qubits nevertheless remain in the desired, well-defined order within the one-dimensional environment. It should be appreciated that such contexts may involve a plurality of qubit swaps to facilitate movement of each qubit from a starting position to a desired position within the well-defined order of qubits. Such movements similarly may need to be performed for a plurality of qubits, for example to organize the qubits for use in executing a desired set of logic gates within a particular time slice. For example, a quantum program includes a plurality of logic gates to be performed within one or more time slices. However, the management and movement of qubits is a resource intensive task. Each repositioning action requires both time and energy to effectuate a desired movement. Further still, the precise nature of qubit management increases the likelihood of errors resulting from qubit movement, for example qubit memory error which accumulates in increasing fashion the longer the movement takes. 
Accordingly, to conserve computing resources and minimize errors, it is desirable to efficiently perform qubit repositioning within the quantum computing environment. Further, conventional non-quantum computing system implementations configured for manipulating qubit positions are often significantly resource intensive, difficult to maintain, and/or otherwise not suitable for use based on restrictions in the operation of a quantum computer (e.g., to only manipulate qubits to move one region in either direction), such that improved methodologies performed by a non-quantum computer for generating a reduced number of basic operations improve the operation of non-quantum computers as well as provide corresponding advantages to the operation of the quantum computer. For example, such improved methodologies reduce the use of conventional computing resources (e.g., processing resources, memory resources, networking resources, and/or the like), as compared to conventional implementations known in the art for generating such instructions for controlling the quantum computer to reach a desired result. Accordingly, effecting efficient qubit repositioning that meets the restrictions of modern day quantum computing implementations is desirable. Embodiments of the present disclosure provide for instruction compilation for at least one time slice in a one-dimensional quantum computing environment, for example by automatically generating an algorithm swap command set. The algorithm swap command set may be efficiently generated using a sorting algorithm with data movement constrained to swapping and/or otherwise manipulating data elements between adjacent data storage sites within a one-dimensional array, and storing a manipulation indicator, such as a swap indicator, for each manipulation determined to be performed. Sorting algorithms that generate swap command sets where swaps can be performed in parallel reduce the total time for qubit repositioning.
One such sorting algorithm is even-odd transposition sort. For one-dimensional quantum computing environments limited to repositioning only to adjacent regions, the algorithm swap command set represents movements that can be effectuated by the quantum computer to reposition qubits to desired positions within the constraints of the quantum computer. The algorithm swap command set contains a set of parallel swap commands, each parallel swap command representing a set of swaps that can be applied at the same time in parallel to sets of adjacent qubit pairs, where the qubits in the adjacent qubit pair sets for a single parallel swap command are unique (i.e., one qubit is included in at most one swap within the parallel swap command). Further, in some contexts, a target qubit position set may be identified in a particular manner, for example based on a near-midpoint qubit open index pair for each qubit pair, to reduce the number of resulting parallel swap commands. Accordingly, the even-odd transposition sort may be executed to reduce the number of required parallel swap commands to at most N, where N represents the number of qubits of a qubit set that may require repositioning in the quantum computing environment. Additionally, by assigning target qubit positions near a near-midpoint qubit open index pair for each qubit pair, and/or for a single qubit gated as a single input (which may not require repositioning, and/or otherwise may be manipulated to any position), the number of parallel swap commands can be further reduced to less than N in many cases. Additionally or alternatively, in some embodiments, slot-based target assignment is performed, resulting in a target qubit position set with a further reduced worst-case scenario, as described herein.
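The even-odd transposition sort described above can be sketched as follows: alternating rounds compare even-indexed then odd-indexed adjacent position pairs, and each round's swaps form one parallel swap command since they touch disjoint qubit pairs. Names and data layout are illustrative assumptions; at most N rounds are needed for N positions, matching the bound stated in the text.

```python
def parallel_swap_commands(current, target):
    """current: left-to-right ordering of qubit labels.
    target: map from qubit label to its desired final position index.
    Returns a list of parallel swap commands; each command is a list of
    (pos, pos + 1) adjacent swaps that can be applied simultaneously."""
    order = list(current)
    commands = []
    for round_no in range(len(order)):
        parity = round_no % 2          # even round: pairs (0,1),(2,3),...
        layer = []
        for i in range(parity, len(order) - 1, 2):
            if target[order[i]] > target[order[i + 1]]:
                order[i], order[i + 1] = order[i + 1], order[i]
                layer.append((i, i + 1))
        if layer:
            commands.append(layer)
    return commands

cmds = parallel_swap_commands(["q2", "q0", "q3", "q1"],
                              {"q0": 0, "q1": 1, "q2": 2, "q3": 3})
print(cmds)  # → [[(0, 1), (2, 3)], [(1, 2)]]
```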
By generating the algorithm swap command set, and/or any corresponding instruction sets therefrom, using the even-odd transposition sort or another sorting algorithm satisfying the constraints, complexity is reduced due to the simplified nature of generating such instructions for effecting such repositioning. Additionally or alternatively, by reducing the number of parallel swap commands to be performed, some example embodiments of the present disclosure conserve computing resources and/or reduce prospective errors that may occur in the quantum computing environment. Furthermore, in some example embodiments, forward-looking processing of subsequent time slices may enable further reduction of the number of operations to be performed, for example by identifying when qubit pairs at a first (e.g., a current) time slice become adjacent while repositioning. In this regard, gating may take place for said adjacent qubits in an earlier operation, prior to other qubits at the first time slice becoming adjacent. Additionally or alternatively, in some embodiments, forward-looking processing optimizations may select new target positions for said qubits that were gated early based on qubits to be gated at a second time slice or other subsequent time slice, such as to minimize the number of parallel swap commands at the second time slice. It should be appreciated that such embodiments enable repositioning of any number of qubits.

Definitions

In some embodiments, some of the operations above may be modified or further amplified. Furthermore, in some embodiments, additional optional operations may be included. Modifications, amplifications, or additions to the operations above may be performed in any order and in any combination.
Many modifications and other embodiments of the disclosure set forth herein will come to mind to one skilled in the art to which this disclosure pertains having the benefit of the teachings presented in the foregoing description and the associated drawings. Therefore, it is to be understood that the embodiments are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. The term “one-dimensional quantum computing environment” refers to quantum computing hardware, software, middleware, firmware, and/or a combination thereof, wherein qubits are positioned in a one-dimensional space. In some example embodiments, a one-dimensional space may be a linear arrangement of locations. In some example embodiments, a one-dimensional quantum computing environment includes an ion trap comprising a number of one-dimensionally arranged regions, such as parallel to the longitudinal axis of the ion trap, for trapping qubits in various regions along the linear arrangement of regions within the ion trap. 
For example, regions may be defined such that a qubit at a particular index is located within a particular space associated with one or more other qubits (e.g., the first qubit is positioned ahead of all other qubits, the second qubit is positioned anywhere between a first qubit and a third qubit, and so on). The positions of qubits within a one-dimensional quantum computing environment may be controlled using one or more electrodes positioned along the length of the ion trap, for example by adjusting voltage(s) applied to one or more of the electrode(s) to move the ions along the one-dimensional space. In some example embodiments, a one-dimensional quantum computing environment includes a one-dimensional arrangement of logical qubits formed from a collection of physical qubits. The term “qubit set” refers to a representation of a number of qubits within a one-dimensional quantum computing environment. In at least one example embodiment, the qubit set corresponds to a plurality of qubits trapped within the ion trap, and/or other quantum computing hardware for storing qubits, in a one-dimensional quantum computing environment. In one example embodiment, the qubit set corresponds to a plurality of logical qubits formed from a collection of physical qubits. The term “time slice” refers to a particular instance or interval of time within which one or more computational operations is to occur within a computing environment. In at least one example embodiment, a time slice represents a time interval within which a set of quantum gate(s) is/are to be executed, which may be performed in series, in parallel, or substantially in parallel. The term “circuit depth” refers to the number of time slices that embody execution of a particular quantum circuit. It should be appreciated that a quantum circuit embodying a quantum program may include any number of time slices depending on the operations to be performed. 
The term “depth-1 circuit” refers to a quantum circuit having a single time slice. In such contexts, the terms “time slice” and “depth-1 circuit” may be used interchangeably as synonyms. The terms “position index” and “position” refer to electronically managed data representing a position of a qubit within a one-dimensional ordering of qubits within a one-dimensional quantum computing environment at a particular time slice. In this regard, a position index “p” represents a qubit's position in the range [0, Q), where Q represents the number of positions in the one-dimensional quantum computing environment. The term “positions vector” refers to any vector having a vector length “L” of unique positions, where the vector length L is less than or equal to the number of positions in the one-dimensional quantum computing environment. For example, given a particular positions vector “v” of length “L” and a vector index “i” in the range [0, L), v[i] represents a unique position in the range of positions [0, Q). In some contexts, a positions vector comprises the same number of qubit positions Q (e.g., L=Q) for processing all positions in the one-dimensional quantum computing environment, and in some other contexts a positions vector comprises a smaller number of qubit positions (e.g., L<Q) for processing a subset of all the positions in the one-dimensional quantum computing environment. It should be appreciated that a one-dimensional quantum computing environment may include any number of positions for processing qubits of a qubit set. The terms “complete set of positions” and “all positions” refer to one or more data structures including representations of all positions in a one-dimensional quantum computing environment.
For example, in an example context having 12 positions, “all positions” would refer to a list, set, or other data structure including data values corresponding to each of said 12 positions (e.g., a list of indexes 0 through 11 corresponding to positions 1 through 12). The term “initial position index” refers to a particular position index for a qubit at a beginning of a time slice. The term “start positions pairing set” refers to electronically managed data representing the set of all start position pairs for a particular time slice. In some embodiments, the start positions pairing set includes pairs of initial pairing indices. The term “target position index” refers to a particular position index for a qubit located at a corresponding position index at an end of a time slice. The term “initial qubit position set” refers to electronically managed data representing an initial position index set for all, or a subset of, qubits embodying a qubit set. In some embodiments, an initial qubit position set is embodied by a vector having a length equal to the number of qubits in the qubit set, wherein the value at each index of the vector represents an initial position index for the qubit represented by the index of the vector. The term “target qubit position set” refers to electronically managed data representing the target position of each current position of qubits indexed by current position. In some embodiments, the target qubit position set is embodied by a vector indexed by current position (“targets”). For example, given the current positions set (“current”) and a particular indexed qubit (“q”), p1=current[q] where p1 represents the current position index of the qubit q, and further t1=targets[p1] where t1 represents the target position of the current position p1 containing qubit q for a particular time slice. The target qubit position set includes a target qubit position index for all qubits, or a subset of qubits, in a qubit set. 
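The current/targets indexing described above can be sketched in Python with hypothetical values (the vector contents below are illustrative only, not taken from any particular embodiment):

```python
# Hypothetical 4-qubit example: "current" is indexed by qubit and holds
# position indices; "targets" is indexed by position and holds the target
# position assigned to each position for one time slice.
current = [2, 0, 3, 1]   # current[q] -> current position index of qubit q
targets = [1, 0, 3, 2]   # targets[p] -> target position index for position p

q = 0
p1 = current[q]    # current position of qubit q
t1 = targets[p1]   # target position of the position containing qubit q

# Here p1 == 2 and t1 == 3: qubit 0 sits at position 2, and position 2 is
# assigned target position 3 for this time slice.
```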
In some embodiments, a target qubit position set is embodied by a vector having a length equal to the number of qubits in the qubit set, wherein the value at each index of the vector represents a target position index for the qubit represented by the index of the vector. In at least some embodiments, the target qubit position set includes a target position index for each qubit corresponding to an initial position index in an initial qubit position set. The terms “bipartition,” “bipartitioning,” “bi-partition,” and “bi-partitioning” with respect to a particular source set refer to identifying two subsets of the source set based on one or more criteria, where the two subsets are disjoint and the union of the two subsets is equivalent to the source set. The term “equally-sized subsets of a bipartition” refers to the two subsets of a bipartition where each subset comprises an equal number of elements from the source set. Non-limiting examples of equally-sized subsets of a bipartition of a complete set of positions, for example, include a first subset of positions with all even positions and a second subset of positions with all odd positions, a first subset containing the elements {0, 3, 4, 7, 8, 11, 12, . . . , N−4, N−1} and a second subset containing the elements {1, 2, 5, 6, 9, 10, . . . , N−3, N−2} out of N positions, a first subset containing the first positions of all start position pairs and the second subset containing the second positions of all start position pairs, and/or the like. The term “gating” refers to the physical positioning of one or more qubits, and/or pairing of two qubits within a defined proximity, to enable performance of a logical gate based on the state of the one or more qubits. In this regard, a plurality of qubits may be gated for purposes of performing a number of logical gates in parallel, and/or a quantum circuit based on various gating operations performed over a series of time slices. 
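Bipartitioning a complete set of positions into two equally-sized subsets, as described above, can be sketched as follows (using the even-versus-odd example; the values are illustrative):

```python
# A minimal sketch: bipartition a complete set of N positions into two
# disjoint, equally-sized subsets whose union is the source set.
N = 12
all_positions = list(range(N))            # positions 0 through 11

evens = [p for p in all_positions if p % 2 == 0]
odds = [p for p in all_positions if p % 2 == 1]

# Properties of a bipartition: the subsets are disjoint and their union
# is the source set; here they are also equally sized.
assert set(evens).isdisjoint(odds)
assert sorted(evens + odds) == all_positions
assert len(evens) == len(odds)
```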
The term “qubit pair” refers to an unordered pair of unique qubit indices comprising a first qubit “q1” and a second qubit “q2,” where the first qubit q1 and the second qubit q2 are each in the range [0, Q), and where q1 is not equal to q2 and Q represents the total number of qubits in the one-dimensional quantum computing environment. Unless specified otherwise elsewhere within this description, the order of qubits in a qubit pair does not matter, such that the qubit pair (q1, q2) is determinable as equivalent to the qubit pair (q2, q1) for purposes of processing. In some embodiments, the first qubit and the second qubit of a qubit pair are associated with one another for gating at a defined time slice. In some embodiments, a logical gate within a designed circuit identifies a qubit pair to be used in performing the logical gate. Any consistency in the storage and/or display of qubit pairs and/or qubit position indices generally throughout is provided for convenience and to enhance the understandability of the disclosure and is not meant to limit the scope or spirit of the disclosure. The terms “qubit pairing set” and “positions pairing set” refer to an electronically managed data object comprising a set of qubit and/or position pairs that represent qubit pairs marked for gating at a particular time slice, or a plurality of time slices. For example, in at least one example embodiment, a qubit pairing set includes position pairs corresponding to qubit pairs for each of a plurality of time slices based on a designed quantum computing circuit of one or more logical gates to be executed. The term “start” when used with respect to a particular time slice refers to a Q-length positions vector, where Q is the number of qubits in a qubit set maintained in a one-dimensional quantum computing environment, indexed by qubit representing the starting positions of all qubits at the beginning of the particular time slice.
For example, for a vector “start” and a particular index “q,” start[q] represents the starting position of qubit q for a particular time-slice, where q falls in the range [0, Q) and Q represents the number of positions in the one-dimensional quantum computing environment. The term “start position” for a particular qubit refers to the position index of the particular qubit identified from a start vector for a particular time slice, the start position index p for a qubit q being defined by start[q]. The term “current” refers to a Q-length vector, where Q is the number of qubits in a qubit set maintained in a one-dimensional quantum computing environment, indexed by qubit, that represents the current position of each qubit in the qubit set at a particular step in one or more algorithm(s) for assigning and/or manipulating qubit position(s) as described herein. In one example context, “current” refers to a vector of the current positions of all qubits at a particular step in the described process(es) for assigning a target qubit position set. For example, in some embodiments, current[q] defines the current position of qubit q from a particular qubit set. At the beginning of a particular time slice, “current” is initialized to be equivalent to the “start” vector. Throughout the permutation sequence for assigning the target qubit position set, “current” may be updated to reflect the current positions of qubits as they swap through positions of the one-dimensional quantum computing environment on their way to assigned target positions for the particular time slice. In some such contexts, “current” values are unique in the index range [0, N). The term “position pair” refers to electronically managed data representing a pair of unique positions comprising a first position p1 and a second position p2, where the first position and the second position fall within the range [0, N), where N represents the number of positions and p1 is different than p2.
Unless otherwise specified herein, a position pair is unordered, such that the position pair (p1, p2) is equivalent to the position pair (p2, p1). The term “start position pair” when used with respect to a particular time slice refers to electronically managed data representing a position pair (p1, p2) corresponding to a particular qubit pair (q1, q2) that identifies the starting position index for each of the qubits q1 and q2. The position pair (p1, p2) comprises a first position p1 representing a first start position corresponding to a first qubit q1 of the particular qubit pair in a qubit pairing set and the position pair comprises a second position p2 representing a second start position corresponding to a second qubit q2 of the particular qubit pair. A start position pair (p1, p2) corresponding to a particular qubit pair (q1, q2) is determinable from a start vector (“start”) such that p1 is defined by start[q1] and p2 is defined by start[q2]. Unless otherwise specified herein, a start position pair is unordered, such that the start position pair (p1, p2) is equivalent to the start position pair (p2, p1). The terms “even-even pair” and “ee-pair” refer to a position pair (p1, p2) where both p1 and p2 represent even position indices. In some contexts, an even-even pair represents a start position pair representing two even position indices, which may be referred to as an “even-even start position pair.” The terms “odd-odd pair” and “oo-pair” refer to a position pair (p1, p2) where both p1 and p2 represent odd position indices. In some contexts, an odd-odd pair represents a start position pair representing two odd position indices, which may be referred to as an “odd-odd start position pair.” The terms “even-odd pair,” “odd-even pair,” “eo-pair,” and “oe-pair” refer to a position pair (p1, p2) where either p1 is odd and p2 is even, or p1 is even and p2 is odd.
In some contexts, an even-odd pair represents a start position pair representing one even position index and one odd position index in any order, which may be referred to as an “even-odd start position pair.” The term “even-odd start position pairing set” refers to a start positions pairing set comprising any number of start position pairs, where each position pair comprises an even-odd start position pair. The term “slot” refers to an indexed representation corresponding to two position indices in a one-dimensional quantum computing environment. The correlation between a position and a slot is determinable based on a particular algorithm. In some embodiments, the two positions of a slot are adjacent, and two qubits occupying the same slot satisfy the adjacency requirement. For example, a slot (“s”) for a particular position (“p”) is determined by performing integer division on the particular position by a determinable division factor, such as p//2==s where “//” represents integer division discarding the remainder. In this regard, a particular slot may map to two different positions (e.g., p1==(s*2), and p2==(s*2)+1). The term “near-midpoint open index pair” refers to one or more data objects representing unassigned target position indices of a target qubit position set to be assigned for a first qubit and a second qubit of a qubit pair. In some embodiments, the near-midpoint open index pair represents two target position indices located at, and/or nearest to, the midpoint indices based on the initial position indices for the first qubit and the second qubit of the qubit pair. The term “even-odd transposition sort” refers to a computing algorithm for sorting data values within a vector, the vector having any number of indices, by performing swaps of the data values located at adjacent indices at alternating even and odd cycles, where swaps in an even cycle act on adjacent index pairs whose lower index is even and swaps in an odd cycle act on adjacent index pairs whose lower index is odd, for a number of iterations.
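The pair-parity classification and the slot mapping described above can be sketched as follows; the helper names are illustrative rather than taken from any embodiment:

```python
# Classify an unordered position pair as even-even ("ee"), odd-odd ("oo"),
# or even-odd ("eo"), and map a position to its slot via integer division.
def pair_parity(p1, p2):
    if p1 % 2 == 0 and p2 % 2 == 0:
        return "ee"
    if p1 % 2 == 1 and p2 % 2 == 1:
        return "oo"
    return "eo"  # one even and one odd position, in either order

def slot(p):
    return p // 2  # "//" is integer division discarding the remainder

# A slot s maps back to two adjacent positions: s*2 and s*2 + 1, so two
# qubits occupying the same slot satisfy the adjacency requirement.
assert slot(6) == slot(7) == 3
assert pair_parity(2, 5) == pair_parity(5, 2) == "eo"  # pairs are unordered
```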
It should be appreciated that an even-odd transposition sort relies on comparing data values at adjacent indices of the vector and swapping the data values if the order of the data values does not meet a desired value ordering (e.g., if the data values are to be ascending but are not, swap). Even-odd transposition sort is also known as: even-odd sort, odd-even sort, odd-even transposition sort, brick sort, parity sort, or parallel bubble sort. The term “even swap” refers to a swap determined to be performed between data values residing at adjacent locations in a data vector where the lower index is even as part of an even-odd transposition sort. In some embodiments, one or more even swaps are performed during an even cycle of an even-odd transposition sort. The term “odd swap” refers to a swap determined to be performed between data values residing at adjacent locations in a data vector where the lower index is odd as part of an even-odd transposition sort. In some embodiments, one or more odd swaps are performed during an odd cycle of an even-odd transposition sort. The term “swap indicator” refers to an electronically managed data value representing determination of a swap to be performed. In some embodiments, a swap indicator represents a first data value to indicate determination of an even swap to be performed, and a second value to indicate determination of an odd swap to be performed. In other embodiments, a swap indicator represents a first value to indicate determination of a swap to be performed, and a second value to indicate determination of no swap to be performed, for example in a circumstance where the determination as to whether the swap is an even swap or an odd swap can be derived through another manner. The term “algorithm swap command set” refers to one or more electronically managed data objects representing all swaps determined to be performed based on an even-odd transposition sort or one or more steps thereof. 
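An even-odd transposition sort that records its swaps along the lines described above can be sketched as follows; the function and variable names are illustrative, not drawn from any embodiment:

```python
# Sort a vector by alternating even and odd cycles of adjacent-index swaps,
# recording each cycle's swap indicators as one parallel swap command.
def even_odd_transposition_sort(values):
    values = list(values)
    n = len(values)
    command_set = []                        # one parallel swap command per cycle
    for cycle in range(n):
        lower = 0 if cycle % 2 == 0 else 1  # even cycle: lower index is even
        swaps = []
        for i in range(lower, n - 1, 2):
            if values[i] > values[i + 1]:   # out of ascending order: swap
                values[i], values[i + 1] = values[i + 1], values[i]
                swaps.append((i, i + 1))    # swap indicator for this pair
        command_set.append(swaps)
    return values, command_set

sorted_values, commands = even_odd_transposition_sort([3, 0, 2, 1])
# sorted_values == [0, 1, 2, 3]; commands[0] == [(0, 1), (2, 3)]
```

Within each cycle the recorded swaps act on disjoint adjacent index pairs, which is why a cycle's swaps can be performed in parallel.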
In some embodiments, for example, the algorithm swap command set includes any number of swap indicators determined to be performed to sort an initial qubit position set to reflect a target qubit position set. The term “parallel swap command” refers to one or more electronically managed data objects representing a set of swaps to be performed in parallel at the same time. In some embodiments, a parallel swap command may refer to the set of swap indicators generated from an even cycle of the even-odd transposition sort. In some embodiments, a parallel swap command may refer to the set of swap indicators generated from an odd cycle of the even-odd transposition sort. The term “qubit manipulation instruction set” refers to one or more electronically managed data objects representing instructions for manipulating qubits based on an associated algorithm swap command set. In this regard, in some embodiments, the qubit manipulation instruction set represents one or more qubit-level operations to be performed to represent the actions embodied in a corresponding algorithm swap command set. The term “qubit swap instructions” refers to electronically managed data representing a swap of at least a first qubit and second qubit. In this regard, a pair of qubits which are swapped have their positions interchanged such that the first qubit's final position after the swap is in the second qubit's initial position before the swap, and the second qubit's final position after the swap is in the first qubit's initial position before the swap. 
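Applying one parallel swap command to a “current” vector (indexed by qubit, holding position indices) can be sketched as follows; the helper name is illustrative, and the position pairs within one parallel command are assumed to be disjoint:

```python
# Interchange the qubits occupying each pair of positions in a parallel
# swap command, per the qubit swap instructions semantics described above.
def apply_parallel_swap(current, swap_command):
    current = list(current)
    for p1, p2 in swap_command:
        q1 = current.index(p1)  # qubit currently occupying position p1
        q2 = current.index(p2)  # qubit currently occupying position p2
        current[q1], current[q2] = p2, p1
    return current

# Hypothetical values: qubits 0..3 at positions 2, 0, 3, 1; swap the qubits
# at positions (0, 1) and at positions (2, 3) in parallel.
new_current = apply_parallel_swap([2, 0, 3, 1], [(0, 1), (2, 3)])
# new_current == [3, 1, 2, 0]
```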
The term “qubit split instructions” refers to electronically managed data representing a split of at least a first qubit and a second qubit associated with a well-defined order and initially residing in a common region into two separate adjacent regions where after the split the first qubit resides in a first region of the two separate regions and the second qubit resides in a second region of the two separate regions, and the order of the qubits is maintained in its initial well-defined order. The term “qubit join instructions” refers to electronically managed data representing a join of at least a first qubit initially residing in a first region and a second qubit initially residing in a second, separate adjacent region into finally a common region while maintaining an initial well-defined order for the two qubits. The term “qubit join instructions” may also be referred to as “qubit merge instructions” and “qubit recombine instructions.” The term “qubit shift instructions” refers to electronically managed data representing a shift of one or more qubits residing initially in a first region to finally a second region without changing a well-defined order of the qubits in the one-dimensional quantum computing environment. The term “hardware instruction set” refers to one or more hardware instructions performed to physically manipulate the qubits for performing the actions represented by the qubit manipulation instruction set. A non-limiting example of a hardware instruction set includes applying time-varying electric potentials to control electrodes of an ion trap and/or associated hardware, to cause the hardware to perform a desired action represented by a qubit manipulation instruction. 
The term “qubit manipulation hardware” refers to computing hardware, firmware, and/or software for configuring the hardware, configured to execute one or more hardware instructions for manipulating a position of or region containing one or more qubits in a quantum computing environment. Non-limiting examples of qubit manipulation hardware include ion traps; electrodes connected for interacting with one or more qubits positioned at one or more regions along an ion trap of a quantum computer; voltage and/or waveform generators for providing voltage signals to the electrodes; lasers and corresponding beam delivery paths used to enact gates, cooling functions, and/or measurement functions on the one or more qubits; and/or the like. The term “adjacent” when used with respect to position indices and/or qubits located at position indices refers to two positions which differ by 1 in index. For example, for two positions p1 and p2, p1 is adjacent to p2 if and only if abs(p1−p2)==1, where abs(x) of some value x is defined as the absolute value of x, representing x in a circumstance where x is greater than or equal to zero (0) and representing −x (negative x) in a circumstance where x is less than zero. Qubits located in adjacent positions at a particular time slice may be gated.
Example Computing System and Environment
FIG.1illustrates a block diagram of a system that may be specially configured within which embodiments of the present disclosure may operate, in accordance with at least one example embodiment of the present disclosure. The block diagram provides a schematic diagram of an example quantum computer system100comprising an ion trap apparatus and/or package150, in accordance with an example embodiment. In various embodiments, the quantum computer system100comprises a computing entity10and a quantum computer102.
In various embodiments, the quantum computer102comprises a controller30, a cryostat and/or vacuum chamber40enclosing an ion trap apparatus and/or package150, one or more manipulation sources60, and one or more voltage sources50. In an example embodiment, the one or more manipulation sources60may comprise one or more lasers (e.g., optical lasers, microwave sources, and/or the like). In various embodiments, the one or more manipulation sources60are configured to manipulate and/or cause a controlled quantum state evolution of one or more ions within the ion trap of the ion trap apparatus and/or package150. For example, in an example embodiment, wherein the one or more manipulation sources60comprise one or more lasers, the lasers may provide one or more laser beams to the ion trap within the cryostat and/or vacuum chamber40via corresponding beam paths66(e.g.,66A,66B,66C). In various embodiments, the quantum computer102comprises one or more voltage sources50. For example, the voltage sources50may comprise a plurality of transport and/or trapping (TT) voltage drivers and/or voltage sources and/or at least one radio frequency (RF) driver and/or voltage source. The voltage sources50may be electrically coupled to the corresponding TT electrodes and/or RF rails for generation of the trapping potential configured to trap one or more ions within the ion trap of the ion trap apparatus and/or package150via the corresponding electrical leads. In some such embodiments, the ion trap includes and/or otherwise defines one or more physical regions at which a qubit may be located, or otherwise positioned, within the ion trap. In various embodiments, the regions are arranged in a linear and/or one-dimensional arrangement along the longitudinal axis of the ion trap (e.g., defined by the RF rails of the ion trap). 
In at least one example context, each physical region is configured to enable storage of one or more qubits at that region while maintaining a well-defined order of the qubit positions across all regions in the ion trap. Alternatively or additionally, in some embodiments, the ion trap is configured to enable movement of regions and movement of the qubits within those regions along the ion trap. In some such embodiments, adjacent qubits within the qubit ordering may have their positions swapped to enable repositioning of qubits in the one-dimensional quantum computing environment. In this regard, a qubit located at a first position (e.g., position 0), may be swapped only with a second position located along the ion trap in a first direction (e.g., position 1). A qubit located at a second position (e.g., position 1, between positions 0 and 2) may be swappable with position 2 (in a first direction) or position 0 (in another direction, for example opposite the first direction). In some embodiments, for example, voltage signals may be applied to the electrodes (e.g., TT electrodes) of the ion trap to cause a potential experienced by the ions within the ion trap that causes one or more actions for swapping adjacent qubits to allow for such repositioning (e.g., in a swap between position 1 and position 2 in the qubit ordering, a qubit initially at position 1 becomes positioned at position 2, and the qubit at position 2 becomes positioned at position 1 in parallel and/or otherwise substantially at the same time). In various embodiments, a computing entity10is configured to allow a user to provide input to the quantum computer102(e.g., via a user interface of the computing entity10) and receive, view, and/or the like output from the quantum computer102. The computing entity10may be in communication with the controller30of the quantum computer102via one or more wired or wireless networks20and/or via direct wired and/or wireless communications. 
In an example embodiment, the computing entity10may translate, configure, format, and/or the like information/data, quantum computing algorithms, and/or the like into a computing language, executable instructions, command sets, and/or the like that the controller30can understand and/or implement. In various embodiments, the controller30is configured to control the voltage sources50, cryogenic system and/or vacuum system controlling the temperature and pressure within the cryogenic and/or vacuum chamber40, manipulation sources60, and/or other systems controlling various environmental conditions (e.g., temperature, pressure, and/or the like) within the cryogenic and/or vacuum chamber40and/or configured to manipulate and/or cause a controlled evolution of quantum states of one or more ions within the ion trap. In various embodiments, some or all of the ions trapped within the ion trap are used as qubits of the quantum computer102. In some embodiments, the controller30is embodied by a non-quantum computer configured for performing an even-odd transposition sort, as described herein. Additionally or alternatively, in some embodiments the controller30is configured for generating a swap command set, qubit manipulation instruction set, and/or hardware instruction set associated with the performed even-odd transposition sort. In this regard, the controller30may be configured to generate appropriate instructions for controlling the quantum computer102in the manner desired (e.g., to reposition various qubits to desired positions). Alternatively or additionally, in some embodiments, the computing entity10is configured for performing the even-odd transposition sort, and/or for generating the swap command set, qubit manipulation instruction set, and/or hardware instruction set associated with the performed even-odd transposition sort. 
In some such embodiments, the computing entity10is configured to communicate some or all of the generated data (such as the swap command set, qubit manipulation instruction set, and/or hardware instruction set). In this regard, the data generated via the computing entity10may be utilized by the controller30for controlling the quantum computer102in the manner desired (e.g., to reposition various qubits to desired positions), such that the computing entity10indirectly controls at least one aspect of the quantum computer102.
Example Apparatus of the Present Disclosure
The methods, apparatuses, systems, and computer program products of the present disclosure may be embodied by any variety of devices. For example, a method, apparatus, system, and computer program product of an example embodiment may be embodied by a fixed computing device, such as a personal computer, computing server, computing workstation, or a combination thereof. Further, an example embodiment may be embodied by any of a variety of mobile terminals, mobile telephones, smartphones, laptop computers, tablet computers, or any combination of the aforementioned devices. In at least one example embodiment, the controller30is embodied by one or more computing systems, such as the apparatus200as shown inFIG.2. In other embodiments, the computing entity10is embodied by the apparatus200as shown inFIG.2. The apparatus200may include a processor202, memory204, input/output module206, communications module208, and/or qubit instruction processing module210. In some embodiments, the qubit instruction processing module210is optional, may be embodied by one or more of the other modules, and/or may be embodied by another system associated with the apparatus200. Although the components are described with respect to functional limitations, it should be understood that the particular implementations necessarily include the use of particular hardware.
It should also be understood that certain of the components described herein may include similar or common hardware. For example, two modules may both leverage use of the same processor, network interface, storage medium, or the like to perform their associated functions, such that duplicate hardware is not required for each module. The use of the term “module” and/or the term “circuitry” as used herein with respect to components of the apparatus200should therefore be understood to include particular hardware configured to perform the functions associated with the particular module as described herein. Additionally or alternatively, the terms “module” and “circuitry” should be understood broadly to include hardware and, in some embodiments, software and/or firmware for configuring the hardware. For example, in some embodiments, “module” and “circuitry” may include processing circuitry, storage media, network interfaces, input/output devices, and the like. In some embodiments, other elements of the apparatus200may provide or supplement the functionality of the particular module. For example, the processor202may provide processing functionality, the memory204may provide storage functionality, the communications module208may provide network interface functionality, and the like, to one or more of the other modules. In some embodiments, the processor202(and/or co-processor or any other processing circuitry assisting or otherwise associated with the processor) may be in communication with the memory204via a bus for passing information among components of the apparatus. The memory204may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory may be an electronic storage device (e.g., a computer readable storage medium). 
The memory204may be configured to store information, data, content, applications, instructions, or the like, for enabling the apparatus200to carry out various functions in accordance with example embodiments of the present disclosure. The processor202may be embodied in any one of a myriad of ways and may, for example, include one or more processing devices configured to perform independently. Additionally or alternatively, the processor202may include one or more processors configured in tandem via a bus to enable independent execution of instructions, pipelining, and/or multithreading. The use of the terms “processor,” “processing module,” and “processing circuitry” may be understood to include a single-core processor, a multi-core processor, multiple processors internal to the apparatus, other central processing unit (“CPU”), microprocessor, integrated circuit, and/or remote or “cloud” processors. In an example embodiment, the processor202may be configured to execute computer-coded instructions stored in the memory204or otherwise accessible to the processor. Alternatively, or additionally, the processor202may be configured to execute hard-coded functionality. As such, whether configured by hardware or software means, or by a combination thereof, the processor202may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. Alternatively, as another example, when the processor is embodied as an executor of software instructions, the instructions may specifically configure the processor to perform the algorithms and/or operations described herein when the instructions are executed. As one example context, the processor202may be configured to provide functionality for instruction compilation for at least one time slice in a one-dimensional quantum computing environment. 
In this regard, the processor202may be specially configured to support functionality of the controller30. In some such embodiments, the processor202includes hardware, software, firmware, and/or the like, configured for generating an algorithm swap command set for gating qubits of a qubit set in appropriate positions at a given time slice. The algorithm swap command set may further be used to generate one or more intermediate instruction sets, and/or repositioning the qubits of the qubit set according to the algorithm swap command set, such as using qubit manipulation hardware (e.g., one or more electrodes). It should be appreciated that the processor202may be configured to perform such functionality for any number of time slices. The apparatus200further includes input/output module206. The input/output module206may, in turn, be in communication with processor202to provide output to the user and, in some embodiments, to receive an indication of a user input. The input/output module206may comprise one or more user interfaces, and may include a display to which user interface(s) may be rendered. In some embodiments, the input/output module206may comprise a web user interface, a mobile application, a desktop application, a linked or networked client device, and/or the like. In some embodiments, the input/output module206may also include a keyboard, a mouse, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input/output mechanisms. In some such embodiments, the input/output mechanisms are configured to enable a user to provide data representing one or more user interaction(s) for processing by the apparatus200. 
The processor and/or user interface module comprising the processor, for example processor 202, may be configured to control one or more functions of one or more user interface elements through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processor (e.g., memory 204, and/or the like). The communications module 208 may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data from/to a network and/or any other device, circuitry, or module in communication with the apparatus 200. In this regard, the communications module 208 may include, for example, at least a network interface for enabling communications with a wired or wireless communications network. For example, the communications module 208 may include one or more network interface cards, antennas, buses, switches, routers, modems, and supporting hardware and/or software, or any other device suitable for enabling communications via a network. Additionally or alternatively, the communication interface may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). The qubit instruction processing module 210 includes hardware, software, firmware, and/or a combination thereof, configured to support instruction compilation functionality and/or corresponding qubit manipulation functionality in a one-dimensional quantum computing environment associated with the controller 30. In some embodiments, the qubit instruction processing module 210 may utilize processing circuitry, such as the processor 202, to perform at least a portion, or all, of these actions.
In some such embodiments, the qubit instruction processing module210includes hardware, software, firmware, and/or a combination thereof, for at least identifying an initial qubit position set associated with a qubit set, identifying a target qubit position set associated with the qubit set, and generating an algorithm swap command set by performing an even-odd transposition sort based on at least the target qubit position set. In some embodiments, the qubit instruction processing module210is further configured for receiving a quantum program comprising a qubit pairing set associated with a qubit set, and/or determining a near-midpoint open index pair for a qubit pairing set, and/or assigning, in the target position set, target position indices based at least on the near-midpoint open index pair. In some embodiments, the qubit instruction processing module210is further configured for identifying a second target qubit position set associated with the qubit set at a second time slice and generating a second algorithm swap command set by performing an even-odd transposition sort based on at least a second target qubit position set. In some embodiments, the qubit instruction processing module210is further configured to perform target assignment (e.g. identify a target qubit position set) based on particular starting positions for a particular qubit set. In some such embodiments, the qubit instruction processing module210may be configured to perform near-midpoint target assignment, slot-based target assignment, and/or the like as described herein. Additionally or alternatively, in some embodiments, the qubit instruction processing module210is configured to perform one or more pre-processing algorithms on a start positions pairing set before performing such target assignment. 
It should be appreciated that, in some embodiments, the qubit instruction processing module 210 may include a separate processor, specially configured field programmable gate array (FPGA), or a specially configured application-specific integrated circuit (ASIC). In some embodiments, one or more of the aforementioned components is combined to form a single module. The single combined module may be configured to perform some or all of the functionality described above with respect to the individual modules. For example, in at least one embodiment, the qubit instruction processing module 210 may be combined with the processor 202. Additionally or alternatively, in some embodiments, one or more of the modules described above may be configured to perform one or more of the actions described with respect to one or more of the other modules.

Example Computing Environment of the Present Disclosure

FIG. 3 illustrates an example quantum program, for example compiled in an example computing environment for at least one time slice in a one-dimensional quantum computing environment, in accordance with at least one example embodiment of the present disclosure. In some embodiments, one or more of the illustrated elements is embodied by one or more data object(s) and/or other data values maintained by a computing system, such as the controller 30 embodied by the specially configured apparatus 200. In this regard, each illustrated element may be embodied, and/or manipulated, using hardware, software, firmware, and/or a combination thereof. In some embodiments, a computing entity may be used to generate a quantum program 302 and/or submit the quantum program 302 for compilation and/or execution. The quantum program 302 may be written in any of a myriad of quantum computing programming languages, and embody any number of commands to be performed via a quantum computing system.
In this regard, the quantum program302may embody user-submitted instructions to be executed via the quantum computing system, for example by first compiling the quantum program into one or more executable set of instructions that the quantum computing system can process and/or execute. In this regard, the quantum computing system may implement the quantum program by initializing any number of qubits managed by the quantum computing system, and/or performing operations using the qubits, such as logical gate operations performed using associated pairs of qubits and/or individual qubits as the inputs to the logical gate operations. In some such embodiments, the quantum program302includes, and/or is embodied by, one or more sets of qubit pairs to be executed at various time slices. An example quantum program302is depicted, where the quantum program302is embodied by one or more qubit pairing set(s), each qubit pairing set comprising qubit pair(s) for qubits of a qubit set304, to be executed at a plurality of time slices. As illustrated, the quantum program302is associated with a qubit set304. The qubit set304includes 8 qubits, each qubit identified by a zero-based qubit index ranging from 0 to 7. Each qubit represented in the qubit set304may correspond to a qubit physically maintained in a corresponding quantum computing environment. Accordingly, the depicted qubit set304may correspond to an 8-qubit quantum computing system. It should be appreciated that, in other embodiments, any number of qubits may be utilized. The quantum program302is broken down into a plurality of time slices, specifically time slices Tk, Tk+1, and Tk+2. Each of the time slices is associated with a qubit pairing set, the qubit pairing set comprising pairs of the qubits to be gated at the corresponding time slice. 
For example, as illustrated, at time slice Tk, the qubit pairings are embodied by qubit pair 306A that pairs qubit 0 with qubit 7, qubit pair 306B that pairs qubit 1 with qubit 2, qubit pair 306C that pairs qubit 3 with qubit 4, and qubit pair 306D that pairs qubit 5 with qubit 6 (collectively, the qubit pairs 306A-306D referred to as "qubit pairing set 306"). In this regard, each qubit pair may be associated with one or more logical gates to be executed at the time slice. For example, as illustrated, qubit pair 306C may represent qubit 3 and qubit 4 as inputs to a particular logic gate to be executed during time slice Tk, and qubit pair 306D may represent qubit 5 and qubit 6 as inputs to another logic gate to be executed during time slice Tk. In this regard, the plurality of qubit pairing sets for the various time slices may together embody all qubit pairings of the quantum program 302. Although not depicted, it should be appreciated that in some embodiments one or more qubits is not associated with a qubit pair, such that the qubit need not be positioned adjacent to any particular qubit for the associated time slice. Execution of a logical gate may require that the qubit pair, embodying inputs to the logical gate, be located at adjacent positions within the quantum computing environment during the corresponding time slice to enable execution of the logical gate. In this regard, the qubits may require repositioning within a quantum computing system to position the qubits of a particular qubit pair adjacent to one another for execution of a corresponding logic gate. It should be appreciated that other logic gates may only require a single qubit, thus not requiring that the qubit be positioned at any particular region or index in the well-defined order.
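As a concrete illustration of the pairing sets and adjacency requirement described above, consider the following sketch. The representation (`program`, `pairs_are_adjacent`) is a hypothetical illustration, not part of the disclosure:

```python
QUBIT_COUNT = 8

# Each inner list is the qubit pairing set for one time slice.
# The pairing set for time slice Tk of FIG. 3: 0-7, 1-2, 3-4, 5-6.
program = [
    [(0, 7), (1, 2), (3, 4), (5, 6)],  # time slice Tk
]

def pairs_are_adjacent(pairing_set, positions):
    """Check whether every pair in the pairing set occupies adjacent
    positions, where positions[q] is the position index of qubit q."""
    return all(abs(positions[a] - positions[b]) == 1
               for a, b in pairing_set)

# With the target qubit position set 404 of FIG. 4 (qubit index ->
# position index), every pair of time slice Tk sits at adjacent
# positions, so each pair may serve as inputs to its logic gate.
target_positions = [2, 0, 1, 4, 5, 6, 7, 3]
print(pairs_are_adjacent(program[0], target_positions))  # True
```

Under an identity ordering (each qubit at the position matching its index), the pair (0, 7) is not adjacent, so the check would fail, motivating the repositioning described in the following paragraphs.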
In this regard,FIG.4illustrates another example computing environment for instruction compilation for at least one time slice in a one-dimensional quantum computing environment, in accordance with at least one example embodiment of the present disclosure. Specifically, the example computing environment includes a plurality of data objects that may be identified, maintained, and/or utilized for positioning qubits based on a qubit pairing set for a particular time slice. The data object(s) may further be maintained by a computing system, such as the controller30embodied by the specially configured apparatus200. In this regard, each illustrated element may be embodied, and/or manipulated, using hardware, software, firmware, and/or a combination thereof. FIG.4specifically illustrates, for the qubit set304, an example initial qubit position set402and a target qubit position set404for time slice k. In this regard, the initial qubit position set402includes an initial position index for each qubit of the qubit set. Such initial qubit position indices may correspond to the physical ordering of the qubits along an ion trap in the quantum computing system at the beginning of the time slice. In this regard, for example, data value402A represents an initial position index for the qubit at index 0 (e.g., qubit 0 of the qubit set304), data value402B represents an initial position index for the qubit at index 1 (e.g., qubit 1 of the qubit set304), data value402C represents an initial position index for the qubit at index 2 (e.g., qubit 2 of the qubit set304), data value402D represents an initial position index for the qubit at index 3 (e.g., qubit 3 of the qubit set304), and so on. In some embodiments, each qubit may be located at a default initial position index, for example defaulted to the index range {0:7} associated with the qubits. 
In other contexts, the initial qubit position set may be optimized such that initial qubit positions are assigned for qubits that are already adjacent to one another and to be gated in the current time slice. Additionally or alternatively still, the initial qubit position set is generated such that the number of parallel swap commands required to reposition from the initial qubit position set to the target qubit position set for one or more subsequent time slices is reduced and/or otherwise minimized. For example, the initial qubit position set for the first time slice may be generated such that the second time slice has minimal parallel swap commands required to reposition from the initial qubit position set to the target qubit position set for the second time slice. It should be appreciated that embodiments may determine and/or otherwise generate the default initial position index for each qubit in any manner such that one or more time slices are optimized to reduce, minimize, and/or otherwise eliminate swap operations between one or more qubits to reposition based on a target qubit position set. Accordingly, the target qubit position set404includes a target position index for each qubit of the qubit set. Such target qubit position indices may correspond to the physical ordering of the qubits along an ion trap in the quantum computing system during and/or at the end of the time slice, for example for execution of one or more logic gates. In this regard, for example, data value404A represents a target position index for the qubit at index 0 (e.g., qubit 0 of the qubit set304), data value404B represents a target position index for the qubit at index 1 (e.g., qubit 1 of the qubit set304), data value404C represents a target position index for the qubit at index 2 (e.g., qubit 2 of the qubit set304), data value404D represents a target position index for the qubit at index 3 (e.g., qubit 3 of the qubit set304), and so on. 
Accordingly, the target position index, for a particular qubit, in the target qubit position set404corresponds to the position within the ordering of qubits along the ion trap to which the qubit must be moved. It should be appreciated that, in some circumstances, one or more qubits may not be moved, such that the initial position index for the qubit matches the target position index for the qubit. In some embodiments, the target qubit position set404is determined, for example by the apparatus200, based on a qubit pairing set for time slice k, such as the qubit pairing set306as depicted and described with respect toFIG.3. In this regard, each qubit pair may be located such that the target position indices associated with each of the qubits in the qubit pair are adjacent to one another. In other words, for example in some contexts, the data value for a first target position index corresponding to a first qubit of the qubit pair may be adjacent to the data value for a second target position index corresponding to a second qubit of the qubit pair, where the data values are at most 1 position from one another. In this regard, a target position index of value “X” is adjacent to the target position index of value “X+1” and/or “X−1,” if each of such values represents a valid index. As depicted, the target qubit position set404positions each qubit adjacent to a corresponding qubit of a qubit pair in the qubit pairing set306at time slice k, as illustrated with respect toFIG.3. For example, as depicted in qubit pair306A ofFIG.3, qubit 0 is paired with qubit 7, and accordingly qubit 0 is associated with a target qubit position index404A of the value “2” while qubit 7 is associated with a target qubit position index404H of the value “3,” such that the target position indices for the qubit pair are adjacent. 
Similarly, as depicted in qubit pair 306B of FIG. 3, qubit 1 is paired with qubit 2, and accordingly qubit 1 is associated with a target qubit position index 404B of the value "0" while qubit 2 is associated with a target qubit position index 404C of the value "1," such that the target position indices for the qubit pair are adjacent. Similarly, as depicted in qubit pair 306C of FIG. 3, qubit 3 is paired with qubit 4, and accordingly qubit 3 is associated with a target qubit position index 404D of the value "4" while qubit 4 is associated with a target qubit position index 404E of the value "5," such that the target position indices for the qubit pair are adjacent. Similarly, as depicted in qubit pair 306D of FIG. 3, qubit 5 is paired with qubit 6, and accordingly qubit 5 is associated with a target qubit position index 404F of the value "6" while qubit 6 is associated with a target qubit position index 404G of the value "7," such that the target position indices for the qubit pair are adjacent. Accordingly, with each qubit pair of the qubit pairing set 306 located at adjacent indices (e.g., corresponding to adjacent positions in the qubit ordering along an ion trap), each qubit pair may be utilized as inputs to execute a desired logic gate. FIG. 5 illustrates example operations of an even-odd transposition sort and a corresponding algorithm swap command set, in accordance with at least one example embodiment of the present disclosure. In this regard, the algorithm swap command set 504 corresponds to the various operations to reposition qubits from initial positions to target positions. Specifically, FIG. 5 depicts an even-odd transposition sort for repositioning the qubit set 304 based on the target qubit position set 404, specifically to reposition the qubit set 304 from the initial qubit position set 402 to the target qubit position set.
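The adjacency constraint on target position indices can be sketched with a simple assignment routine. This naive version merely places each pair at the next two consecutive indices; it is a simplified stand-in for, not an implementation of, the near-midpoint assignment described herein, and all names are assumptions for illustration:

```python
def naive_target_assignment(pairing_set, qubit_count):
    """Assign each qubit pair to the next two consecutive position
    indices; any unpaired qubits fill the remaining slots. Guarantees
    adjacency of each pair, though not the near-midpoint layout."""
    targets = [None] * qubit_count
    next_index = 0
    for a, b in pairing_set:
        targets[a], targets[b] = next_index, next_index + 1
        next_index += 2
    for q in range(qubit_count):
        if targets[q] is None:  # unpaired qubit
            targets[q] = next_index
            next_index += 1
    return targets

targets = naive_target_assignment([(0, 7), (1, 2), (3, 4), (5, 6)], 8)
# targets -> [0, 2, 3, 4, 5, 6, 7, 1]: every pair lands on adjacent
# indices, though not the near-midpoint layout of FIG. 4.
```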
Utilizing the even-odd transposition sort, it should be appreciated that qubits may be sorted in at most Q parallel swap operations, where Q represents the number of qubits in the qubit set. Further, as described herein, determination and utilization of near-midpoint indices for qubit pairs is utilized to reduce the number of parallel swap operations required to reach the target qubit position set. In this regard, utilizing near-midpoint indices may reduce the number of parallel swaps by nearly half (to Q/2 swaps) regardless of the initial positions of the various qubits. The even-odd transposition sort includes a number of operational steps 502A-502E (collectively "steps 502"). Specifically, the algorithm operates on a data vector and starts at step 502A, where the data vector is loaded with an array of values having the same ordering as the target qubit position set 404, and ends at step 502E, where the data vector is sorted. In some embodiments, the data vector may be loaded with the target qubit position set itself. As illustrated, the steps 502 include a plurality of swap operations for swapping qubits at adjacent position indices. Qubits that are swapped at a particular step are illustrated using dashed (or "broken") lines. The algorithm swap command set 504 includes a swap indicator for each swap determined for performance during the even-odd transposition sort. As should be appreciated, the steps of the even-odd transposition sort alternate between determining swaps to be performed for even indices (e.g., 0, 2, 4, and 6 as illustrated) and odd indices (e.g., 1, 3, and 5, as illustrated), beginning with the even indices. It should be noted, in some embodiments, that the algorithm may begin with the odd indices and achieve the same result. Each data value at an index in the data vector is compared with the data value at the next index (e.g., each data value at index "X" is compared with the data value at index "X+1").
In the illustrated example context, the data vector is first loaded with the target qubit position set 404 and is utilized as a starting point for executing the even-odd transposition sort, for example at step 502A. Subsequently, at step 502B, swap operations are determined for each of the even indices 0, 2, 4, and 6. In this regard, a swap operation may be determined for a given index when the data value at the lower index is determined to be in an unsorted order relative to the data value at the next index. For example, a swap operation (e.g., an "even swap") may be determined for an even index X when the data value located at index X+1 is less than the data value located at index X. As illustrated, at the first step 502B, the data vector values at indices 0 and 1 are swapped. In this regard, the data value at index 0, value "2," is determined to be in unsorted order with respect to the data value at subsequent index 1, value "0." That is, the value "0" is less than the value "2" and thus not in the proper order (e.g., ascending order in the context described), requiring a swap to properly order the values. Accordingly, the data values for the two indices are swapped at the first step 502B. Similarly, the data value at index 6, value "7," is determined to be in unsorted order with respect to the data value at subsequent index 7, value "3." That is, the value "3" is less than the value "7" and thus not in the proper order, requiring a swap. Accordingly, the data values for the two indices are swapped at the first step 502B. These swaps are recorded as swap indicators at the corresponding indices in the algorithm swap command set 504. Specifically, the left index of a swap is indicated as an "L" and the right index of a swap is indicated as an "R," for each determined swap. Such swaps end the first step 502B for the even indices.
In this regard, the data vector may represent the data values for the qubit position indices as they are manipulated via execution of the even-odd transposition sort. It should be appreciated that, as described, the data vector may be loaded based on the target qubit position set and thereby manipulated (e.g., based on swap commands) until the data vector is properly sorted. Once the data vector is properly sorted, the even-odd transposition sort may terminate, and processing for a new time slice may begin as described herein. Additionally or alternatively, the data vector may be processed during intermediary steps for any of a myriad of purposes, for example to optimize based on forward-looking analysis of the various qubits as described herein. For subsequent time slices, it should be appreciated that upon completion of a time slice, for example at time slice k, the qubits of the qubit set have specific current positions. In this regard, such current positions at the end of a time slice may be utilized as initial qubit position indices for the subsequent (k+1) time slice, and in some embodiments may be used to derive the target qubit position set for the subsequent (k+1) time slice so as to optimize the number of required swap operations. In this regard, the algorithm may continue in this manner for any number of time slices, for example up to K time slices where K is the circuit depth. The process continues in analogous fashion for the odd indices of the data vector. For example, a swap operation (e.g., an "odd swap") may be determined for an odd index X when the data value at index X+1 is less than the data value at index X. For example, at the second step 502C, the data value at index 1, value "2," is determined to be unsorted with respect to the data value at the subsequent index 2, value "1." That is, the value "1" is less than the value "2" and thus not in the proper order, requiring a swap.
Accordingly, the data values for the two indices are swapped at the second step 502C. Similarly, the data value at index 5, value "6," is determined to be unsorted with respect to the data value at index 6, value "3." That is, the value "3" is less than the value "6" and thus not in the proper order, requiring an additional swap. Accordingly, the data values at the two indices are swapped at the second step 502C. Corresponding swap indicators are stored at the corresponding indices in the algorithm swap command set for each determined swap. Such swaps end the second step 502C for the odd indices. The process continues for the third step 502D, at which the data value at index 4, value "5," is swapped with the data value at index 5, value "3." A corresponding swap indicator is subsequently stored to the algorithm swap command set 504. The process then continues for the fourth step 502E, at which the data value at index 3, value "4," is swapped with the data value at index 4, value "3." A corresponding swap indicator is subsequently stored to the algorithm swap command set. After the fourth step 502E, the data values are in the proper order, and the even-odd transposition sort is complete. Accordingly, the algorithm swap command set 504 represents the swaps necessary to reposition the qubits according to the target qubit position set 404. In this regard, the algorithm swap command set may be processed to perform the corresponding swaps associated with the swap indicators therein, for example by generating one or more intermediate instruction sets and/or executing such instructions. FIG. 6 illustrates various configurations for an algorithm swap command set, in accordance with example embodiments of the present disclosure. Such configurations include the algorithm swap command set 504, which utilizes a first swap indicator to indicate a left index to be swapped, and a second swap indicator to indicate a right index to be swapped.
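The walkthrough of steps 502A-502E can be expressed as a short sketch of the even-odd transposition sort that records the left index of each determined swap, one list per step (names and representation are assumptions for illustration):

```python
def even_odd_sort_swaps(vector):
    """Even-odd transposition sort on a copy of `vector`, returning
    (steps, sorted_vector), where each entry of `steps` lists the
    left indices swapped during that phase. Follows FIG. 5: compare
    index X with X+1 and swap when out of ascending order,
    alternating even and odd phases, even phase first."""
    v = list(vector)
    n = len(v)
    steps = []
    parity = 0  # 0 -> even phase, 1 -> odd phase
    # At most n phases are needed to fully sort n values.
    for _ in range(n):
        swaps = []
        for i in range(parity, n - 1, 2):
            if v[i] > v[i + 1]:
                v[i], v[i + 1] = v[i + 1], v[i]
                swaps.append(i)
        steps.append(swaps)
        parity ^= 1
        if v == sorted(v):
            break
    return steps, v

# Data vector loaded from the target qubit position set 404:
steps, final = even_odd_sort_swaps([2, 0, 1, 4, 5, 6, 7, 3])
# steps -> [[0, 6], [1, 5], [4], [3]], matching steps 502B-502E.
```

Each recorded left index i denotes a swap of positions i and i+1, i.e., an "L" at index i and an "R" at index i+1 in the notation of the algorithm swap command set 504.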
In this regard, the swap indicators may be processed to determine whether a qubit is to be swapped left or right. FIG.6further includes a first alternative algorithm swap command set602. The algorithm swap command set indicates the lower index of each swap with a swap indicator equal to 1. In this regard, a corresponding system may process the first alternative algorithm swap command set602to identify each swap based on the swap indicators, and perform a swap for the index at which the swap indicator is located together with the subsequent index. In so doing, the right swap indicator need not be stored within the first alternative algorithm swap command set602. In other embodiments, it should be appreciated that an algorithm swap command set may be generated that includes a swap indicator at the right index for each swap, such that a swap may be performed for the index at which the swap indicator is located together with the previous index (e.g., index “X” and index “X−1”). FIG.6further includes a second alternative algorithm swap command set604. The algorithm swap command set604utilizes knowledge regarding the even-odd transposition sorting algorithm to reduce the amount of data that requires storage. Specifically, in this regard, the algorithm swap command set604stores swap indicators at a corresponding position based on whether the row represents an even index phase or an odd index phase. In the example context as depicted with respect toFIG.5, for example, the first step502B is performed for even indices, thus the first row of the algorithm swap command set604corresponds to even indices of the position index set being processed. 
Accordingly, index 0 of the first row of the algorithm swap command set604corresponds to the first even index of the position index set (e.g., index 0) being processed, where index 1 of the first row of the algorithm swap command set604corresponds to the second even index of the position index set (e.g., index 2) being processed, where index 2 of the first row of the algorithm swap command set604corresponds to the third even index of the position index set (e.g., index 4) being processed, and where index 3 of the first row of the algorithm swap command set604corresponds to the fourth even index of the position index set (e.g., index 6) being processed. Subsequently, the second step502C is performed for odd indices, thus the second row of the algorithm swap command set604corresponds to odd indices of the position index set being processed. Accordingly, index 0 of the second row of the algorithm swap command set604corresponds to the first odd index of the position index set (e.g., index 1) being processed, where index 1 of the second row of the algorithm swap command set604corresponds to the second odd index of the position index set (e.g., index 3) being processed, and so on. This process may continue as the subsequent steps are performed, with the stored swap indicators alternating between representing swaps for even indices and swaps for odd indices. Accordingly, based on at least a determined and/or predetermined starting value (e.g., the first step performed for even indices, for example), the value of the row of the algorithm swap command set604being processed may be used to determine whether the row corresponds to swap indicators for even indices, or swap indicators for odd indices. In this regard, the data size of the algorithm swap command set604may be reduced. 
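The compressed encoding of the algorithm swap command set 604 can be sketched as follows, assuming the even phase comes first. The exact row widths and any padding are implementation choices not fixed by the disclosure, and the function name is an assumption:

```python
def compress_swap_command_set(steps, qubit_count, start_parity=0):
    """Encode per-step swap lists in the compact alternating form of
    the algorithm swap command set 604: row r covers only the even
    indices (parity 0) or odd indices (parity 1) of the position
    index set, one indicator bit per candidate index."""
    rows = []
    parity = start_parity
    for swaps in steps:
        candidates = range(parity, qubit_count - 1, 2)
        rows.append([1 if i in swaps else 0 for i in candidates])
        parity ^= 1
    return rows

# Swap steps from the FIG. 5 walkthrough (left indices per step):
rows = compress_swap_command_set([[0, 6], [1, 5], [4], [3]], 8)
# rows -> [[1, 0, 0, 1], [1, 0, 1], [0, 0, 1, 0], [0, 1, 0]]
```

Because the row's parity determines which physical indices its bits refer to, an 8-qubit step needs at most four indicator bits rather than eight, halving the stored data as described above.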
Example Data Flow and Processes of the Disclosure

Having described example systems, apparatuses, and computing environments associated with embodiments of the present disclosure, example data flows and corresponding flowcharts including various operations performed by the above described apparatuses and/or systems will now be discussed. It should be appreciated that each of the flowcharts depicts an example computer-implemented process that may be performed by one or more of the above described apparatuses, systems, and/or devices, for example using one or more of the components described herein. The blocks of each process may be arranged in any of a number of ways, as depicted and described herein. In some such embodiments, one or more blocks of a first process may occur in-between one or more blocks, or otherwise operate as a sub-process, of a second process. Additionally or alternatively, the process may include some or all of the operations described and/or depicted, including one or more optional blocks in some embodiments. With regard to the below described flowcharts, one or more of the depicted blocks may be optional in some, or all, embodiments of the present disclosure. Optional blocks are depicted with broken (or "dashed") lines. Similarly, it should be appreciated that one or more of the operations of each flowchart may be combinable, replaceable, and/or otherwise altered as described herein. FIG. 7A illustrates an example data flow diagram of an example process for instruction compilation for at least one time slice in a one-dimensional quantum computing environment, in accordance with at least one example embodiment of the present disclosure. Specifically, as illustrated, FIG. 7A depicts operational data flow between various devices and/or systems, specifically a controller 30, a computing entity 10, and the remaining components of the quantum computer 102 (e.g., the manipulation sources 60 and/or voltage sources 50).
It should be appreciated that the operations may be described from the perspective of any of the devices and/or systems depicted. FIG.7Billustrates an example flowchart of the data flow operations in the example process for instruction compilation for at least one time slice in a one-dimensional quantum computing environment, as depicted inFIG.7A, in accordance with at least one example embodiments of the present disclosure. In this regard, the example operations are depicted and described with respect to the perspective of the controller30. In this regard, the controller30may be embodied by any number of computing devices, for example the apparatus200as depicted and described herein with respect toFIG.2. The apparatus200may be configured for communication with any number of other devices and/or systems, for example the computing entity10and/or other components of the quantum computer102. In this regard, each operation will be described from the perspective of the controller30embodied by the specially configured apparatus200. At optional operation702, the apparatus200receives a quantum program comprising at least one qubit pairing set associated with a qubit set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for receiving a quantum program comprising at least one qubit pairing set associated with a qubit set. In some such embodiments, the apparatus200receives the qubit program from a computing entity, such as the computing entity10. In this regard, the computing entity10may be utilized to input and/or generate the quantum program, for example utilizing one or more programming languages configured for compilation and/or implementation via a quantum computer. 
In this regard, the quantum program may include and/or be embodied by one or more qubit pairing set(s) to be executed via the quantum computer, for example at various time slices. In some such embodiments, a qubit pairing set corresponds to various qubit pairs to be utilized as inputs for one or more logic gates at a particular time slice. In this regard, all qubits of a qubit set (e.g., having a predetermined number of qubits maintained by a quantum computer) may be associated with a qubit pair of the qubit pairing set at each time slice to associate two qubits for positioning at adjacent positions within the quantum computer, for example by moving paired qubits to adjacent regions within an ion trap of the quantum computer. At operation 704, the apparatus 200 identifies an initial qubit position set associated with the qubit set. For example, the apparatus 200 includes means, such as the qubit instruction processing module 210, input/output module 206, communications module 208, processor 202, and/or the like, or a combination thereof, for identifying an initial qubit position set associated with the qubit set, and for initializing a current position set with the initial qubit position set. In some embodiments, the apparatus 200 identifies an initial qubit position set that has the qubit pairings for the first time slice adjacent to each other, such that no qubit repositioning is required for the first time slice. In addition, the initial qubit position set may also be chosen such that the number of parallel swap commands required to reposition qubits for the second time slice is minimized. In other embodiments, an initial qubit position set may be identified based on one or more previously executed qubit repositioning operations. For example, in at least one context, the initial qubit position set corresponds to a target qubit position set for a previous time slice.
In this regard, the current position set may represent the well-defined order of the qubits as currently positioned based on initialization of the apparatus, and/or based on repositioning of the qubits during one or more previous time slices. At operation706, the apparatus200identifies a target qubit position set associated with the qubit set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for identifying a target qubit position set associated with the qubit set. In some embodiments, the target qubit position set corresponds to target position indices for each qubit in the qubit set during a particular time slice. In this regard, the qubits may each require repositioning from an initial position index associated with the qubit in the initial qubit position set to the target position index associated with the qubit in the target qubit position set to enable execution of one or more logic gates based on qubit pairs, for example of a qubit pairing set for the time slice. In this regard, the target qubit position set may include adjacent target position indices for qubits to be input to a single logic gate. In some embodiments, the target qubit position set is based on the initial qubit position set for the current time slice. In this regard, the apparatus200processes the initial qubit position set to optimize the target qubit position set, so as to reduce the number of required parallel swap commands. One example context of such optimization is using near mid-point open index pairings for qubit pairs as described herein. In other embodiments, the target qubit position set is generated agnostic with respect to the initial qubit position set. In some embodiments, a data vector is initialized based on the indices in the target qubit position set.
In this regard, the data vector may be manipulated as various steps of an even-odd transposition sort are performed, such that intermediary indices are determinable at each step and reflected as updates in the data vector. In some embodiments, the target qubit position set embodies the data vector. At operation708, the apparatus200generates an algorithm swap command set by performing an even-odd transposition sort based on at least the target qubit position set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating an algorithm swap command set by performing an even-odd transposition sort based on at least the target qubit position set. For example, the even-odd transposition sort may be performed to generate swap indicators for repositioning qubits from the positions indicated by the initial qubit position set to the positions indicated by the target qubit position set. For example, each swap determined in the even-odd transposition sort may cause generating of a swap indicator for inclusion in the algorithm swap command set. Accordingly, the algorithm swap command set may be processed to generate corresponding instructions for repositioning the qubits of the qubit set for executing corresponding logic gates, for example logic gates embodying a quantum program and represented in a qubit pairing set for a current time slice. In this regard, in some embodiments, the even-odd transposition sort manipulates a data vector to reposition from the target qubit position set (to which the data vector is initialized) until the data vector is properly sorted. The data vector may embody each intermediate step as swap operations are performed and marked as described herein. 
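By way of non-limiting illustration, the even-odd transposition sort described above may be sketched as follows; the function name and the (phase, index) encoding of swap indicators are illustrative assumptions rather than the disclosed implementation:

```python
def even_odd_transposition_swaps(target_positions):
    """Sort a data vector of target position indices with an even-odd
    transposition sort, recording a swap indicator for every exchange.

    target_positions[i] is the target index of the qubit currently at
    position i; once the vector is fully sorted, every qubit occupies
    its target position. Returns the swap indicators and sorted vector.
    """
    vec = list(target_positions)
    n = len(vec)
    swap_commands = []  # (phase, i): swap positions i and i+1
    phase = 0
    # Alternate even (0, 2, ...) and odd (1, 3, ...) comparison phases
    # until the data vector is properly sorted.
    while vec != sorted(vec):
        for i in range(phase % 2, n - 1, 2):
            if vec[i] > vec[i + 1]:
                vec[i], vec[i + 1] = vec[i + 1], vec[i]
                swap_commands.append((phase % 2, i))
        phase += 1
    return swap_commands, vec
```

Because every swap recorded within a single phase acts on a disjoint pair of adjacent positions, the swaps of one phase may be issued as parallel swap commands.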
In some such embodiments, the data vector may be processed as described herein at each swap for optimizations, forward-looking determinations, and/or the like. It should be appreciated that once a data vector is properly sorted from a target qubit position set, the apparatus may determine processing for the current time slice is complete. In some embodiments, a loop may be formed based on the operations704,706, and708for a plurality of time slices, as described herein. In this regard, the generated swap commands indicated via performance of the even-odd transposition sort for a particular time slice may embody an algorithm swap command subset for that particular time slice, where the full algorithm swap command set embodies swap operations for all time slices of a quantum program. In this regard, for subsequent iterations, the initial qubit position set identified for a subsequent time slice may comprise updating the current position set based on the swap operations represented in an algorithm swap command subset for the previous time slice. As such, the qubits are indicated as beginning from the positions to which they were repositioned during the previous time slice. Subsequently, the target qubit position set for the subsequent time slice may be determined, in some embodiments based on the initial qubit position set and in other embodiments using any of a myriad of target position assigning algorithms as described herein. In this manner, each iteration of the even-odd transposition sort may be utilized to generate an algorithm swap command subset based on at least the target qubit position set for each time slice. Upon completion of each iteration of the even-odd transposition sort, the resulting algorithm swap command subset for the time slice may be added to a data object embodying an algorithm swap command set for the full quantum circuit.
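The loop over time slices described above may be sketched as follows, under the simplifying assumptions that every qubit is paired at every time slice and that pair k of each time slice's qubit pairing set is targeted to the adjacent positions 2k and 2k+1; all names are illustrative:

```python
def slice_swap_subsets(pairing_sets, initial_order):
    """For each time slice's qubit pairing set, place pair k at the
    adjacent positions 2k and 2k+1, derive each qubit's target index,
    and record the even-odd transposition swaps realizing it.

    Returns one algorithm swap command subset per time slice and the
    final qubit order after all slices.
    """
    order = list(initial_order)  # current position set (qubit labels)
    n = len(order)
    subsets = []
    for pairs in pairing_sets:
        # Target qubit position set: paired qubits at adjacent indices.
        target = {}
        for k, (a, b) in enumerate(pairs):
            target[a], target[b] = 2 * k, 2 * k + 1
        vec = [target[q] for q in order]  # data vector for the sort
        cmds, phase = [], 0
        while vec != sorted(vec):
            for i in range(phase % 2, n - 1, 2):
                if vec[i] > vec[i + 1]:
                    vec[i], vec[i + 1] = vec[i + 1], vec[i]
                    order[i], order[i + 1] = order[i + 1], order[i]
                    cmds.append((phase % 2, i))
            phase += 1
        subsets.append(cmds)  # algorithm swap command subset
    return subsets, order
```

Note that the current position set carries over between iterations, so each time slice begins from the positions produced by the previous time slice's swaps.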
Accordingly, once processing for all time slices (e.g., {0:(K−1)} time slices, where K is the total number of time slices) is complete, each algorithm swap command subset corresponding to each time slice would have been added to the full algorithm swap command set, such that the full algorithm swap command set embodies all swap operations required for the quantum circuit. At optional operation710, the apparatus200generates, based on at least the algorithm swap command set, a qubit manipulation instruction set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating, based on at least the algorithm swap command set, a qubit manipulation instruction set. In this regard, the qubit manipulation instruction set may correspond to a series of actions that may be compiled for execution via one or more components of a quantum computer to perform the swaps corresponding to the swap indicators of the algorithm swap command set. For example, the qubit manipulation instruction set may include one or more qubit swap instructions, qubit split instructions, qubit join instructions, qubit shift instructions, and/or any combination thereof. It should be appreciated that, in some embodiments, additional, alternative, and/or a subset of instructions may be included in the qubit manipulation instruction set. At optional operation712, the apparatus200generates a hardware instruction set based on at least the qubit manipulation instruction set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating a hardware instruction set based on at least the qubit manipulation instruction set.
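One hypothetical expansion of swap indicators into a qubit manipulation instruction set is sketched below; the fixed split/swap/join sequence per swap is an assumption made for illustration, since the disclosure leaves the exact primitive sequence to the implementation:

```python
def swaps_to_manipulation_instructions(swap_commands):
    """Expand each swap indicator (phase, i) into a hypothetical
    split/swap/join primitive sequence acting on the qubits at
    positions i and i+1. A real compiler may also emit shift
    instructions and merge primitives across parallel swaps; this
    fixed three-primitive expansion is an illustrative assumption.
    """
    instructions = []
    for _phase, i in swap_commands:
        instructions += [("split", i, i + 1),
                         ("swap", i, i + 1),
                         ("join", i, i + 1)]
    return instructions
```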
In this regard, the hardware instruction set may correspond to one or more physical manipulations of components of the quantum computer, for example quantum computer102, to effect the actions represented by the qubit manipulation instruction set. For example, the hardware instruction set may represent one or more voltages to be applied to various electrodes used to effect an ion trap, and/or the qubits stored at regions thereof. It should be appreciated that the hardware instruction set may include predetermined voltages to be applied that correspond to each of qubit swap instructions, qubit split instructions, qubit join instructions, and/or qubit shift instructions. At optional operation714, the apparatus200executes the hardware instruction set using qubit manipulation hardware. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for executing the hardware instruction set using qubit manipulation hardware. In some such embodiments, the qubit manipulation hardware may include any number of hardware components configured for effecting the repositioning of qubits in the quantum computer, for example one or more electrodes and/or the like. For example, in some embodiments, the qubit manipulation hardware includes voltage sources50and electrodes of the ion trap. By executing the hardware instruction set, the apparatus200is configured to reposition the qubit set to the positions represented in the target qubit position set by physically executing the swaps represented in the algorithm swap command set through any number of hardware-level operations.
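A minimal sketch of the hardware-instruction generation step follows; the voltage schedules shown are hypothetical placeholder values standing in for trap-calibrated waveforms, which the disclosure does not specify:

```python
# Hypothetical predetermined electrode voltage schedules (volts) for
# each qubit manipulation primitive; real schedules are calibrated to
# the particular ion trap and are not specified in the disclosure.
WAVEFORMS = {
    "split": [0.0, 1.2, 2.0],
    "swap":  [2.0, 2.4, 2.0],
    "join":  [2.0, 1.2, 0.0],
    "shift": [0.5, 0.5, 0.5],
}

def compile_hardware_instructions(manipulation_instructions):
    """Map each (op, i, j) manipulation instruction to a hardware step
    naming the electrode pair and the voltage schedule to apply."""
    return [((i, j), WAVEFORMS[op]) for op, i, j in manipulation_instructions]
```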
FIG.8illustrates additional operations of an example process for instruction compilation for at least one time slice in a one-dimensional quantum computing environment, specifically for generating an algorithm swap command set by performing an even-odd transposition sort based on at least the initial qubit position set and the target qubit position set, in accordance with at least one example embodiment of the present disclosure. In this regard, the example process as illustrated may be performed by one or more specially configured systems such as a controller30, for example embodied by the specially configured apparatus200. In this regard, in some such embodiments, the apparatus200is specially configured by computer program instructions stored therein, for example in the memory204and/or another component depicted and/or described, and/or otherwise accessible to the apparatus200, for performing the operations as depicted and described. In some embodiments, the specially configured apparatus includes and/or otherwise is in communication with one or more other apparatuses, systems, devices, and/or the like, to perform one or more of the operations as depicted and described. For example, the apparatus200may include and/or communicate with one or more components of a quantum computer, and/or a computing entity, to facilitate one or more of the operations of the process depicted inFIG.8. The illustrated process begins at operation802. In some embodiments, the process begins after one or more of the operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation706. In this regard, the process may replace or supplement one or more blocks depicted and/or described with respect to one of the other processes described herein. 
For example, in some embodiments as depicted, the process depicted with respect toFIG.8supplants, supplements, and/or otherwise replaces the operation depicted and described with respect to operation708. Additionally or alternatively, as depicted, upon completion of the process depicted inFIG.8and/or one or more operations associated therewith, flow may return to one or more operations of another process, for example to optional operation710as depicted. At operation802, the apparatus200stores, in a data object representing the algorithm swap command set, a first swap indicator for each even swap determined from the even-odd transposition sort and stores, to the data object representing the algorithm swap command set, a second swap indicator for each odd swap determined from the even-odd transposition sort. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for storing, in a data object representing the algorithm swap command set, a first swap indicator for each even swap determined from the even-odd transposition sort, and storing, to the data object representing the algorithm swap command set, a second swap indicator for each odd swap determined from the even-odd transposition sort. In this regard, the data object stores indicators for all swaps performed as part of the even-odd transposition sort. By storing a first swap indicator for even swaps and a second swap indicator for odd swaps, the data object generated may be parsed to differentiate between even phase and odd phase without additional data stored and/or without prior knowledge regarding the even-odd transposition sort.
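By way of non-limiting illustration, a stored command set using distinct even/odd swap indicators (here the assumed markers "E" and "O") may be parsed back into per-phase parallel batches without prior knowledge of the sort, under the assumption that every stored phase contributed at least one swap:

```python
def parse_swap_phases(command_set):
    """Group a stored algorithm swap command set into per-phase
    parallel batches using only the stored indicators: a run of
    consecutive "E" (even) commands is one even phase, and a run of
    "O" (odd) commands is one odd phase. Phases with zero swaps are
    assumed not to be stored.
    """
    batches = []
    for kind, index in command_set:
        if not batches or batches[-1][0] != kind:
            batches.append((kind, []))  # start a new phase batch
        batches[-1][1].append(index)
    return batches
```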
FIG.9illustrates additional operations of an example process for instruction compilation for at least one time slice in a one-dimensional quantum computing environment, specifically for optimizing timing of gating operations for qubits that are positioned adjacent to one another before completion of an even-odd transposition sort, in accordance with at least one example embodiment of the present disclosure. In this regard, the example process as illustrated may be performed by one or more specially configured systems such as a controller30, for example embodied by the specially configured apparatus200. In this regard, in some such embodiments, the apparatus200is specially configured by computer program instructions stored therein, for example in the memory204and/or another component depicted and/or described, and/or otherwise accessible to the apparatus200, for performing the operations as depicted and described. In some embodiments, the specially configured apparatus includes and/or otherwise is in communication with one or more other apparatuses, systems, devices, and/or the like, to perform one or more of the operations as depicted and described. For example, the apparatus200may include and/or communicate with one or more components of a quantum computer, and/or a computing entity, to facilitate one or more of the operations of the process depicted inFIG.9. The illustrated process begins at operation902. In some embodiments, the process begins after one or more of the blocks depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation706. In this regard, the process may replace or supplement one or more blocks depicted and/or described with respect to one of the other processes described herein. 
For example, in some embodiments as depicted, the process depicted with respect toFIG.9supplants, supplements, and/or otherwise replaces the operation depicted and described with respect to operation708. Additionally or alternatively, as depicted, upon completion of the process depicted inFIG.9and/or one or more operations associated therewith, flow may return to one or more operations of another process, for example to optional operation710as depicted. At operation902, the apparatus200determines, while executing the even-odd transposition sort, a second qubit pair for gating at the first time slice is associated with a first position index and a second position index, the first position index and the second position index representing adjacent position indices. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining, while executing the even-odd transposition sort, a second qubit pair for gating at the current time slice has been positioned in adjacent indices before the even-odd transposition sort is completed (e.g., before placing the qubits in the final desired order). In this regard, the second qubit pair may represent a first qubit of the qubit set and a second qubit of the qubit set that are similarly to be utilized as inputs for a logic gate to be performed at the first time slice (e.g., the current time slice) together with other qubit pairs (and/or individual qubits) for gating. In this regard, the first position index and second position index may indicate that the qubits of the second qubit pair will be adjacent during an intermediate stage of repositioning from the initial qubit position set to the target qubit position set. 
It should be appreciated that such intermediate determinations may be identified for any number of qubit pairs, such that instructions may be generated to perform the corresponding logic gates early (e.g., before the even-odd transposition sort is completed) for any qubit pair whose qubits become adjacent before they reach their target qubit position indices. At operation904, the apparatus200stores at least one command to perform a logical operation based on at least the second qubit pair. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for storing at least one command to perform a logical operation based on at least the second qubit pair. In this regard, the apparatus200may store a first command to perform the logical operation. In at least one example context, the apparatus200performs the logical operation by executing the logic gate utilizing the qubits of the second qubit pair as input. Additionally or alternatively, the apparatus200may store the same command for later execution. In this regard, by performing the logical operation early, the apparatus200may not be required to re-execute the logical operation at the completion of the even-odd transposition sort, and/or may not be required to continue to reposition the qubits of the second qubit pair to adjacent positions for such purposes during the current time slice. Accordingly, such early execution may conserve execution power, processing power, or both, which may be further improved when such a circumstance is determined for multiple qubit pairs.
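The early-gating optimization may be sketched as follows; firing a gate as soon as its pair becomes adjacent, and terminating the sort once no gates remain pending, reflect the behavior described above, while the command encodings are illustrative assumptions:

```python
def sort_with_early_gating(initial_order, targets, gate_pairs):
    """Even-odd transposition sort that fires a gate command as soon
    as a pending gate pair occupies adjacent positions, and stops
    sorting early once every gate for the time slice has fired.
    """
    order = list(initial_order)
    n = len(order)
    pending = {frozenset(p) for p in gate_pairs}
    commands = []

    def fire_adjacent():
        # Issue gate commands for pending pairs now at adjacent indices.
        for i in range(n - 1):
            pair = frozenset((order[i], order[i + 1]))
            if pair in pending:
                pending.discard(pair)
                commands.append(("gate", order[i], order[i + 1]))

    fire_adjacent()                    # pairs adjacent from the start
    vec = [targets[q] for q in order]  # data vector for the sort
    phase = 0
    while vec != sorted(vec) and pending:
        for i in range(phase % 2, n - 1, 2):
            if vec[i] > vec[i + 1]:
                vec[i], vec[i + 1] = vec[i + 1], vec[i]
                order[i], order[i + 1] = order[i + 1], order[i]
                commands.append(("swap", i))
        fire_adjacent()
        phase += 1
    return commands, order
```

In the test below, both pairs become adjacent one phase into the sort, so both gates fire and the sort terminates before the data vector is fully sorted.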
At optional operation906, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining a first updated target qubit position index for a first qubit of the second qubit pair. In some such embodiments, the first updated target qubit position index for the first qubit is determined based on a third qubit pair comprising at least the first qubit for gating at a second time slice. In this regard, the target qubit position index for the first qubit of the second qubit pair may be updated since the logical operation may be performed when the second qubit pair becomes adjacent earlier than termination of the even-odd transposition sort for the current time slice. For example, because the qubit no longer is required to be positioned adjacent to the second qubit of the second qubit pair at the end of the current time slice, the target qubit position index associated with the first qubit of the second qubit pair may be updated (e.g., to the first updated target qubit position index) such that the qubit is closer in proximity and/or otherwise adjacent to another qubit for gating during execution of the next time slice. It should be appreciated that the apparatus200may similarly process data associated with the second time slice to determine the qubit pair that includes the first qubit at the second time slice, such that the first qubit may be positioned closer to another qubit with which it will be gated in the second time slice. In some embodiments, the apparatus200may regenerate the entirety of the target qubit position set for all qubits to further reduce the number of swaps required for the current time slice and future time slice(s) (e.g., the subsequent time slice).
At optional operation908, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining a second updated target qubit position index for a second qubit of the second qubit pair. In some such embodiments, the second updated target qubit position index for the second qubit is determined based on a fourth qubit pair comprising at least the second qubit for gating at the second time slice. It should be appreciated that, in some contexts, the fourth qubit pair may be the same as the third qubit pair. It should be appreciated that the second updated target qubit position index associated with the second qubit of the second qubit pair may be determined in a manner similar to that described with respect to the first qubit of the second qubit pair in operation906. For example, the target qubit position index for the second qubit of the second qubit pair may be updated since the logical operation that utilizes this qubit (e.g., the logical gate) may be performed when the second qubit pair becomes adjacent earlier than termination of the even-odd transposition sort for the current time slice. Similarly, the target qubit position index associated with the second qubit of the second qubit pair may be updated (e.g., to the second updated target qubit position index) such that the second qubit is closer in proximity and/or otherwise adjacent to another qubit for gating during execution of the next time slice. It should be appreciated that the apparatus200may similarly process data associated with the second time slice to determine the qubit pair that includes the second qubit at the second time slice, such that the second qubit may be positioned closer to another qubit with which it will be gated in the second time slice.
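By way of non-limiting illustration, retargeting an early-gated qubit toward its next-time-slice partner may be sketched as follows; the partner lookup, the exchange of target indices with the qubit currently holding the desired index, and the omission of boundary clamping are all simplifying assumptions:

```python
def retarget_early_gated(targets, qubit, next_pairing):
    """Return a copy of the target qubit position set in which
    `qubit`, whose current-slice gate fired early, is retargeted next
    to the qubit it pairs with in the next time slice, by swapping
    target indices with whichever qubit currently holds that
    neighboring index. The desired index may need clamping at the
    chain boundary; that case is omitted in this sketch.
    """
    partner = next(q for pair in next_pairing if qubit in pair
                   for q in pair if q != qubit)
    desired = targets[partner] + 1  # index adjacent to the partner
    holder = next(q for q, t in targets.items() if t == desired)
    new_targets = dict(targets)
    new_targets[qubit], new_targets[holder] = desired, targets[qubit]
    return new_targets
```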
It should be appreciated that the process described with respect toFIG.9may be repeated for any number of qubit pairs for a given time slice. For example, in some embodiments, only one qubit pair becomes adjacent before termination of the even-odd transposition sort. In other embodiments, no qubit pairs may become adjacent before termination of the even-odd transposition sort. In yet other embodiments still, all qubit pairs may become adjacent before termination of the even-odd transposition sort. It should further be appreciated that in circumstances where all qubit pairs may be executed before termination of the even-odd transposition sort (e.g., due to adjacent positioning between the two qubits of the qubit pair during intermediary steps of the even-odd transposition sort), the apparatus200may be configured to terminate the even-odd transposition sort early and/or begin execution of processing with respect to a subsequent time slice. FIG.10illustrates additional operations of an example process for instruction compilation for at least one time slice in a one-dimensional quantum computing environment, specifically for generating a second algorithm swap command set by performing a second even-odd transposition sort, in accordance with at least one example embodiment of the present disclosure. In this regard, the example process as illustrated may be performed by one or more specially configured systems such as a controller30, for example embodied by the specially configured apparatus200. In this regard, in some such embodiments, the apparatus200is specially configured by computer program instructions stored therein, for example in the memory204and/or another component depicted and/or described, and/or otherwise accessible to the apparatus200, for performing the operations as depicted and described. 
In some embodiments, the specially configured apparatus includes and/or otherwise is in communication with one or more other apparatuses, systems, devices, and/or the like, to perform one or more of the operations as depicted and described. For example, the apparatus200may include and/or communicate with one or more components of a quantum computer, and/or a computing entity, to facilitate one or more of the operations of the process depicted inFIG.10. The illustrated process begins at operation1002. In some embodiments, the process begins after one or more of the operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation714. In this regard, the process may replace or supplement one or more operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as depicted, the process depicted with respect toFIG.10supplants, supplements, and/or otherwise replaces the operation depicted and described with respect toFIG.7. Additionally or alternatively, as depicted, upon completion of the process depicted inFIG.10and/or one or more operations associated therewith, flow may end or return to one or more operations of another process. For example, in some embodiments, the process depicted with respect toFIG.10occurs after execution of operation708, and flow returns to optional operation710upon completion of the process depicted with respect toFIG.10. At operation1002, the apparatus200identifies a second target qubit position set associated with the qubit set at a second time slice. 
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for identifying a second target qubit position set associated with the qubit set at a second time slice. In some embodiments, the second target qubit position set is identified based on the current qubit position set as updated from the target qubit position set for the previous time slice. In other words, as the time slice progresses to a subsequent time slice, the position indices for the qubits may be determined based on where such qubits ended up positioned from the previous time slice. In a manner similar to the first time slice, the second time slice may be associated with execution of one or more logic gates for implementing a quantum program. Such logic gates may each utilize a qubit pair as input, for example where all qubit pairs for the time slice are embodied by a second qubit pairing set. The second target qubit position set may be based on such qubit pairs. For example, in some such embodiments, the second target qubit position set represents a determined set of target position indices, where each qubit in the qubit set is assigned a target position index that is adjacent to a second target position index for a second qubit with which the qubit is paired. Accordingly, when repositioned based on the second target qubit position set, each qubit pair may be located at adjacent regions for input to a single logic gate. It should be appreciated that, in some embodiments, one or more qubits is not associated with a pair and may be utilized as a single input to a logical operation at one or more time slice(s). At operation1004, the apparatus200generates a second algorithm swap command set by performing a second even-odd transposition sort, for example based on at least the second target qubit position set. 
In this regard, the second even-odd transposition sort may reposition the qubits to their target qubit position indices as indicated for execution of the logical operations during the second time slice. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating a second algorithm swap command set by performing a second even-odd transposition sort based on at least a second initial qubit position set and the second target qubit position set. In some such embodiments, the second initial qubit position set is embodied by a first target qubit position set for a first and/or previous (e.g., immediately preceding) time slice. In this regard, the second algorithm swap command set may include swap indicators for repositioning the qubit set from the second initial qubit position set to the second target qubit position set. For example, the second algorithm swap command set may represent swap indicators for repositioning the qubits from regions associated with a first target qubit position set (e.g., during a first time slice) to adjacent regions to enable qubit pairs to be input to at least one corresponding logic gate. In some embodiments, the apparatus200is configured for performing one or more additional actions based on at least the second algorithm swap command set. For example, in some embodiments, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating, based on at least the second algorithm swap command set, a second qubit manipulation instruction set. The apparatus200may perform such actions in a similar manner to that as described above with respect to optional operation710.
In this regard, the second qubit manipulation instruction set may include one or more qubit swap instructions, qubit split instructions, qubit join instructions, qubit shift instructions, and/or the like, or any combination thereof, that may be compiled for execution to reposition the qubit set according to the second algorithm swap command set. Additionally or alternatively, in some embodiments, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating a second hardware instruction set based on at least a second qubit manipulation instruction set. The apparatus200may perform such actions in a similar manner to that as described above with respect to optional operation712. In this regard, the second hardware instruction set may represent one or more voltages to be applied to qubit manipulation hardware, for example various electrodes, to effect repositioning of the qubit set within an ion trap. Additionally or alternatively still, in some embodiments, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for executing the second hardware instruction set using qubit manipulation hardware. The apparatus200may perform such actions in a similar manner to that as described with respect to operation714. In this regard, by executing the second hardware instruction set, the apparatus200may reposition the qubit set to positions corresponding to the second target qubit position set by executing the swaps represented in the second algorithm swap command set.
FIG.11illustrates additional operations of an example process for instruction compilation for at least one time slice in a one-dimensional quantum computing environment, specifically for identifying a target qubit position set associated with the qubit set, in accordance with at least one example embodiment of the present disclosure. In this regard, the example process as illustrated may be performed by one or more specially configured systems such as a controller30, for example embodied by the specially configured apparatus200. In this regard, in some such embodiments, the apparatus200is specially configured by computer program instructions stored therein, for example in the memory204and/or another component depicted and/or described, and/or otherwise accessible to the apparatus200, for performing the operations as depicted and described. In some embodiments, the specially configured apparatus includes and/or otherwise is in communication with one or more other apparatuses, systems, devices, and/or the like, to perform one or more of the operations as depicted and described. For example, the apparatus200may include and/or communicate with one or more components of a quantum computer, and/or a computing entity, to facilitate one or more of the operations of the process depicted inFIG.11. The process depicted and described with respect toFIG.11provides an optimization for assigning target qubit position indices in a manner that reduces or minimizes the number of required parallel swap commands for repositioning qubits to be adjacent for gating as desired. It should be appreciated that the particular process defines a specific exemplary target qubit position index assignment algorithm. In other embodiments, one or more alternative and/or additional target qubit position index assignment algorithm(s) may be implemented. 
Indeed, it should be appreciated that such algorithms may vary with respect to the level of complexity without deviating from the scope and spirit of this disclosure. The illustrated process begins at operation1102. In some embodiments, the process begins after one or more of the operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation704. In this regard, the process may replace or supplement one or more operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as depicted, the process depicted with respect toFIG.11supplants, supplements, and/or otherwise replaces the operation depicted and described with respect toFIG.7. Additionally or alternatively, as depicted, upon completion of the process depicted inFIG.11and/or one or more operations associated therewith, flow may end or return to one or more operations of another process, for example returning to operation708as depicted. At operation1102, the apparatus200determines, for each of the at least one qubit pair for gating at the first time slice and starting with a qubit pair having a greatest position distance based on at least a first initial position index for a first qubit of the qubit pair and a second initial position index for a second qubit of the qubit pair, a near-midpoint open index pair.
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining, for each of the at least one qubit pair for gating at the first time slice and starting with a qubit pair having a greatest position distance based on at least a first initial position index for a first qubit of the qubit pair and a second initial position index for a second qubit of the qubit pair, a near-midpoint open index pair. In at least some such embodiments, the near-midpoint open index pair for a particular qubit pair is based on at least the first initial position index and the second initial position index for the qubits of the qubit pair. In some such embodiments, the apparatus200may be configured for determining a position distance for the qubits of each qubit pair at a particular time slice. In this regard, the position distance for each qubit pair may be determined based on the difference between the initial position indices for the first qubit of the qubit pair and the second qubit of the qubit pair, such as represented in an identified initial qubit position set for the time slice. For example, if a first qubit of the qubit pair is associated with a first initial position index of the value "1" and the second qubit of the qubit pair is associated with a second initial position index of the value "7," the position distance may be determined to be "6" (e.g., 7 minus 1). Similarly, if a second qubit pair includes a first qubit associated with a first initial position index of the value "2" and a second qubit associated with a second initial position index of the value "4," the position distance may be determined to be "2" (e.g., 4 minus 2). In some embodiments, the apparatus200is configured for determining the position distance for all qubit pairs at a particular time slice, for example as indicated by a qubit pairing set.
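The position distance computation described above can be sketched as follows. This is an illustrative sketch only; the function name and the mapping from qubits to initial position indices are assumptions for the example, not part of the disclosed apparatus.

```python
# Hypothetical helper illustrating the position distance described above.
# `initial_positions` maps each qubit to its initial position index at
# the start of the time slice (assumed data layout for this sketch).
def position_distance(qubit_pair, initial_positions):
    first, second = qubit_pair
    return abs(initial_positions[second] - initial_positions[first])

# Examples from the text: positions 1 and 7 give distance 6 (7 minus 1);
# positions 2 and 4 give distance 2 (4 minus 2).
positions = {0: 1, 1: 7, 2: 2, 3: 4}
print(position_distance((0, 1), positions))  # 6
print(position_distance((2, 3), positions))  # 2
```

The distances may then be used to process qubit pairs in descending order, as described below.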
Accordingly, the apparatus may subsequently assign target position indices for qubit pairs in descending order, such that the qubit pair associated with the greatest position distance is assigned at each step. If no qubit pair has the greatest position distance (e.g., there is a tie between one or more qubit pairs having equal position distances), the apparatus200may proceed to assign positions to qubit pairs in order, randomly, and/or using any other selection algorithm. With reference to the qubit pairs of the qubit pairing set306as illustrated inFIG.3, for example, the qubit pair306A is associated with a position distance having the value "7" (e.g., 7 minus 0), the qubit pair306B is associated with a position distance having the value "1" (e.g., 2 minus 1) as is each of the qubit pair306C (e.g., 4 minus 3 to equal 1) and the qubit pair306D (6 minus 5 to equal 1). Accordingly, the apparatus200may determine the qubit pair306A is associated with the greatest position distance, and thus assign target qubit position indices for the qubit pair306A first. The apparatus200may subsequently determine each of the remaining qubit pairs306B-306D is associated with the same position distance, and thus assign target position indices in any order (e.g., in ascending order based on the initial qubit position indices, and/or descending order, and/or the like). In some such embodiments, the apparatus200is configured for determining a near-midpoint open index pair for each qubit pair associated with a given time slice. For a given qubit pair, the near-midpoint open index pair may represent a first target position index for the first qubit of the qubit pair and a second target position index for the second qubit of the qubit pair. The target position index for each qubit may be an index not assigned to another qubit at a previous step (e.g., an index that is "open" for assignment).
In this regard, the apparatus200may determine a midpoint between the initial position indices for the qubit pair, and determine the nearest adjacent target position indices of a target qubit position set that remains unassigned. Returning to the example qubit pairing set306illustrated with respect toFIG.3, starting with the qubit pair306A, the apparatus200may determine indices having the values “3” and “4” as possible midpoint indices. In some embodiments, the apparatus200may shift each qubit pair to be indexed such that the near-midpoint open index pair begins on an even index, for example to enable the maximum number of qubit pairs to be positioned if necessary. In some such embodiments, the apparatus200is configured for attempting to shift the indices to lower indices first, and subsequently determine if such lower indices remain unassigned (for example, by searching for such indices in the target qubit position set) before attempting to shift the indices to the immediately higher indices. In this regard, the apparatus200may determine the near-midpoint open index pair having the values of “2” and “3,” for example as depicted. In other embodiments, the apparatus200is configured for attempting to shift the indices to higher indices first, and subsequently determine if such higher indices remain unassigned before attempting to shift the indices to the immediately lower indices. In some such embodiments, the apparatus200may determine the near-midpoint open index pair having the values of “4” and “5.” At operation1104, the apparatus200determines, for assigning, in the target qubit position set, the first target position index and the second target position index based on at least the near-midpoint open index pair. 
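The near-midpoint search described above (midpoint, even alignment, and trying lower indices before higher) can be sketched as follows. The function name and the search structure are assumptions for illustration; the sketch follows the lower-first variant described in the text.

```python
def near_midpoint_open_pair(p1, p2, assigned, n):
    """Find an even-aligned pair of adjacent open indices nearest the
    midpoint of two initial position indices (illustrative helper)."""
    mid = (p1 + p2) // 2
    base = mid - (mid % 2)  # align the candidate pair to an even index
    offset = 0
    while offset <= n:
        # lower-first variant: try the lower candidate, then the higher
        for cand in (base - offset, base + offset):
            if 0 <= cand and cand + 1 < n and not ({cand, cand + 1} & assigned):
                return (cand, cand + 1)
        offset += 2
    raise ValueError("no open adjacent pair remains")

# Reproducing the FIG. 3 walkthrough: pair 306A (positions 0, 7) is
# processed first, then 306B-306D in ascending order, over 8 positions.
results = []
assigned = set()
for p1, p2 in [(0, 7), (1, 2), (3, 4), (5, 6)]:
    pair = near_midpoint_open_pair(p1, p2, assigned, 8)
    results.append(pair)
    assigned |= set(pair)
print(results)  # [(2, 3), (0, 1), (4, 5), (6, 7)]
```

Note that the result matches the target assignments described for the qubit pairing set306: pair306A receives indices "2" and "3," pair306B shifts down to "0" and "1," pair306C shifts up to "4" and "5," and pair306D receives "6" and "7."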
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining, for assigning, in the target qubit position set, the first target position index and the second target position index based on at least the near-midpoint open index pair. For example, the apparatus200may associate, in the target qubit position set, the first target position index of the near-midpoint open index pair with the first qubit of the qubit pair. Additionally, the apparatus200may associate, in the target qubit position set, the second target position index of the near-midpoint open index pair with the second qubit of the qubit pair. By assigning such indices within the target qubit position set, the apparatus200may perform subsequent checks with the target qubit position set to prevent assigning the indices to another qubit at a future step. Upon assigning target position index "2" and target position index "3," such indices may not subsequently be assigned to another qubit. Accordingly, in some embodiments, the apparatus200may continue to perform the operations1102and/or1104for any number of qubit pairs. For example, the apparatus200may continue to determine another near-midpoint open index pair for the next qubit pair of a qubit pairing set. In some embodiments, the apparatus selects qubit pair306B next as all remaining qubit pairs have the same position distance and, in some such embodiments, the apparatus200may proceed in ascending order for the qubits of the qubit set. FIG.12, for example, depicts an example visualization for identifying the target qubit position set404for the qubit set304in accordance with at least one example embodiment of the present disclosure, for example based on the qubit pairing set as depicted and described with respect toFIG.3.
It should be appreciated that each iteration of the process depicted with respect toFIG.11may result in another assignment of target position indices in the target qubit position set404, as depicted with respect toFIG.12. The qubit pair306B includes qubit 1 and qubit 2, which are associated with initial position indices "1" and "2" that are already adjacent. However, the target position index "2" is already assigned as described. Accordingly, the apparatus200may attempt to shift the indices to lower and/or higher indices, while minimizing the distances the qubits are required to move. In this regard, the apparatus200may determine the lower indices "0" and "1" remain unassigned, and thus may proceed to assign such indices as target position indices for the qubit pair306B. Similarly, for qubit pair306C, the qubits of the qubit pair are assigned initial qubit position indices having the value "3" and "4," and the apparatus200may determine such qubits are already adjacent (e.g., having a position distance of "1"). The apparatus200may attempt to position the qubits at a near-midpoint open index pair starting at the nearest unassigned even index, for example at "2" and "3." However, the apparatus200subsequently may determine that such indices have already been assigned, for example by searching for such indices in the target qubit position set and identifying such indices are already associated with qubits 0 and 7 (e.g., qubit pair306A). The apparatus200may subsequently attempt to shift the near-midpoint open index pair to higher indices, for example having the values "4" and "5," and determine that such indices remain unassigned.
Accordingly, the apparatus200may subsequently assign, in the target qubit position set, the first target position index having the value "4" to the qubit 3, and the second target qubit position index having the value "5" to the qubit 4, based on at least the near-midpoint open index pair and the assigned indices in the target qubit position set. Similar actions may be performed for the qubit pair306D, with respect to indices "4" and "5" (as lower indices to the initial position indices for qubits 5 and 6) already being assigned. Accordingly, the apparatus200may subsequently assign, in the target qubit position set, the first target position index having the value "6" to the qubit 5, and the second target qubit position index having the value "7" to the qubit 6, based on at least the near-midpoint open index pair. Having completed assigning target position indices within the target qubit position set, the apparatus200may continue to process the target qubit position set as described.

Example Alternative Processes for Target Qubit Position Assignment

Some embodiments implement one or more alternative process(es) for assigning target positions, for example embodied in a target qubit position set. In this regard, near-midpoint open index pairs may be utilized in whole or in part in some embodiments. In other embodiments, one or more alternative algorithms are implemented to identify and/or otherwise generate the target qubit position set. Non-limiting examples of such alternative embodiments are described with respect to the remainingFIGS.13-25. In some embodiments, the apparatus200is configured to perform the various operations of the processes described herein with respect to such remaining figures. In some embodiments, target assignment is performed based on two equally sized subsets of an equal bipartition of a complete set of positions.
In this regard, the complete set may include elements of a first subset and elements of a second subset, where the number of elements in each subset is equivalent, the two subsets are disjoint, and the union of the two subsets is the complete set. Elements of a given starting positions pairing set (e.g., representing elements to be gated at a particular time slice) may be paired from each of the bipartition subsets, such that a first element of a given starting positions pair from the starting positions pairing set exists from the first bipartition subset and a second element of the given starting positions pair from the starting positions pairing set exists from the second bipartition subset. For example, the first position may exist in the first bipartition subset and the second position may exist in the second bipartition subset. One such example of equally-sized subsets of a bipartition comprises a first subset containing the first positions of all start position pairs and the second subset containing the second positions of all start position pairs. The two subsets of the bipartition may further meet an adjacency requirement. In this regard, pairs of adjacent positions form slots (i.e., slot s in [0, N/2) contains adjacent positions {2*s, 2*s+1}) and may have one position of the slot in the first subset and the other position of the slot in the second subset. One such example of equally-sized subsets of a bipartition of a complete set meeting an adjacency requirement comprises partitioning a set of positions into an even set of positions and an odd set of positions, where each starting position pair comprises an even position and an odd position.
It should be appreciated that any of a myriad of alternative algorithms for partitioning a complete set of positions into two equally-sized subsets with or without meeting an adjacency requirement based on a particular classification, characteristic, or other determination may be performed to generate the equally-sized subsets of a bipartition of the complete set. It should be appreciated that, in some embodiments, any valid mapping from first positions in the first bi-partitioned subset to any unique second position in the second bi-partitioned subset is valid. In other embodiments, mappings are considered valid only in instances where an adjacency requirement is also met, requiring each position in the first bi-partitioned subset to have at least one adjacent position in the second bi-partitioned subset. A vector of target slot assignments (target_slots vector) for such bipartition subsets may then be generated. In some embodiments, each position in one of the subsets (e.g., the first bipartition subset or the second bipartition subset) is assigned a fixed slot of an available slot set. The target_slots vector may subsequently be assigned the slot corresponding to the other position in the second bipartition subset. The target_slots vector may subsequently be sorted to attain parity between the slot value for each position in the second subset of the bipartition with the corresponding slot for the first position in the first subset of the bipartition. Example embodiments are provided herein that bipartition the starting positions into a first subset comprising even positions and a second subset comprising odd positions having slots formed out of a pair of adjacent positions, one from the even subset and one from the odd subset. It should be appreciated, however, that such other methodologies of partitioning a set of positions into equally sized subsets may be utilized and do not deviate from the scope and spirit of this disclosure. 
FIG.13depicts example data associated with manipulation of an example quantum computing environment in accordance with at least one example embodiment of the present disclosure. The example data depicted inFIG.13is processed in the example implementations depicted in the subsequentFIGS.14,15A,15B,16A, and16B. As depicted, an example qubit set1302is processed for purposes of explanation and understanding. The example qubit set1302comprises a qubit count ("Q") of 12 qubits. In this regard, the qubit set1302comprises qubits indexed 0 through 11, representing the 12 total qubits. The qubit set1302may correspond to a position set that comprises positions indexed by the integer range [0, 1, . . . Q−1], which corresponds to 12 positions wherein the qubits of the qubit set1302may be arranged. It should be appreciated, as described herein, that the qubit set1302is exemplary and in other embodiments the qubit set1302may include any number of qubits. FIG.13further depicts an example positions vector1304. The positions vector comprises 12 positions, which matches the number of qubits in the qubit set1302. In this regard, each qubit from the qubit set1302may be positioned at any one of the position indices depicted as part of the positions vector1304. For example, in some embodiments at a first time slice, each qubit may be located at the position corresponding to the qubit's depicted numerical index (e.g., qubit 1 at position 1, qubit 2 at position 2, qubit 3 at position 3, and so on). As time slices are processed, qubits may be repositioned throughout the various positions of the positions vector1304. In this regard, the positions vector1304may represent a current vector representing the positions of each qubit in the qubit set1302as such qubits are processed for a particular time slice. FIG.13further depicts an example even-odd starting positions pairing set1306based on starting positions for the various qubits of the qubit set1302.
In some example contexts, the even-odd starting positions pairing set1306comprises several position pairs embodying pairs between the current position indices for particular qubits to be gated at the particular time slice to be processed. The position pairs are subject to at least one constraint requiring the two elements of each position pair be in different subsets of the two equally-sized subsets of a bipartition of the set of all positions, for example the bipartition into even and odd positions. In the example depicted, such bipartitioning imposes an even-odd constraint, such that the even-odd starting positions pairing set1306results in an even-odd starting positions pairing set that includes each even-odd pair of two starting positions to be gated at the next time slice. In this regard, each even-odd position pair in the even-odd starting positions pairing set1306comprises an even starting position index paired with an odd starting position index, such that the even-odd constraint is satisfied for each position pair of qubits. In other contexts, the even-odd starting positions pairing set1306may not satisfy the even-odd constraint, for example where the starting positions pairing set1306includes one or more even-even position pairs comprising a first even position index paired with a second even position index, or where the even-odd starting positions pairing set1306includes one or more odd-odd position pairs comprising a first odd position index paired with a second odd position index. In some such circumstances, a qubit pairing set comprising at least one even-even position pair or at least one odd-odd position pair may be converted to including all even-odd position pairs that satisfy the even-odd constraint. The conversion to a positions pairing set that satisfies the even-odd constraint may be performed utilizing one or more pairing set conversion algorithms as described herein, for example as described herein with respect toFIGS.17A,17B, and18-25.
FIG.14depicts an example algorithmic transformation for determining a slot corresponding to each position of a position set. As depicted,FIG.14includes determinations of the slots assigned to each of the positions in the position set1402. The position set1402includes twelve total positions indexed from the integer range [0, 12) (e.g., where the integer range [X, Y) embodies the integer set {X, X+1, . . . , Y−1}, such as {0, 1, . . . , N−1}). In this regard, the positions may correspond to the available positions described with respect to the example computing environment ofFIG.13. Each position in the position set1402is mapped to a corresponding slot based on a particular slot determination algorithm. As depicted, the slot determination algorithm comprises integer division by two, such that a slot ("s") is defined by p//2, where p//2 represents integer division (e.g., discarding remainder) of the position ("p") by 2. In this regard, the position index 0 maps to slot index 0, as 0//2=0, and position index 1 similarly maps to slot index 0, as 1//2=0. Similarly, position index 2 maps to slot index 1, as 2//2=1, and position index 3 similarly maps to slot index 1, as 3//2=1. Such slot assignments continue for the remaining positions, such that each even position 2n is mapped to the same slot as the subsequent odd position 2n+1. In this regard, adjacent pairs of indices are mapped to the same, shared slot. Qubit indices and/or position indices for such qubits may be assigned to a particular slot index, and such slot indices similarly permuted as described herein for purposes of target position assignment as described herein, for example with respect toFIGS.15-16. The resulting target assignment yields all start positions pairs in the various slots such that each start position pair becomes adjacent and thus may be gated.
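The slot determination algorithm described above (integer division by two) is simple enough to express directly; the helper name below is an assumption for illustration only.

```python
def slot_of(position):
    """Slot determination algorithm: s = p // 2, so adjacent positions
    {2s, 2s + 1} share slot s (integer division discards the remainder)."""
    return position // 2

# Twelve positions map onto six slots, matching the depicted example:
# positions 0 and 1 share slot 0, positions 2 and 3 share slot 1, etc.
slots = [slot_of(p) for p in range(12)]
print(slots)  # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```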
FIG.15Adepicts an example visualization of an example process for target assignment of start position pairs by determining a target positions vector for a particular time slice. The example process may be performed in accordance with any of the embodiments described herein. For example, in some embodiments, the apparatus200is configured to perform the operations of the described process in advance of generating an algorithm swap command set as described herein, for example as described herein with respect toFIGS.5-11. The example processes for target assignment described with respect toFIGS.15A,15B,16A, and16Brely on one or more underlying assumptions. For example, the example processes for target assignment described with respect toFIGS.15A,15B,16A, and16Brely on the underlying assumption that the starting positions pairing set to be processed covers an even number of positions. Additionally or alternatively for example, the example processes for target assignment described with respect toFIGS.15A,15B,16A, and16Brely on the underlying assumption that the starting positions pairing set to be processed is full in that it includes all positions. Additionally or alternatively for example, the example processes for target assignment described with respect toFIGS.15A,15B,16A, and16Brely on the underlying assumption that the starting positions pairing set embodies an even-odd starting positions pairing set (e.g., only comprising eo-pairs). Some embodiments may perform one or more processes to check whether a starting positions pairing set meets each underlying assumption before initiating processing of the starting positions pairing set via the processes described with respect toFIGS.15A,15B,16A, and16B.
In some embodiments, in circumstances where the embodiment (e.g., the apparatus200) determines that one or more of the underlying assumptions is not met, the embodiment may initiate one or more pre-processing algorithms to update the starting positions pairing set to satisfy each of the underlying assumptions. Non-limiting examples of pre-processing algorithms for updating the starting positions pairing set to satisfy these underlying assumptions are described in the next section and with respect toFIGS.17A,17B,18A,18B, and19-25. The target positions vector is derived from a starting positions pairing set, for example which may be determined from a qubit pairing set derived for the particular time slice and the current positions of such qubits at the beginning of the particular time slice. In this regard, the starting positions pairing set may represent pairs of the current position indices at the beginning of the time slice comprising qubits that are to be gated during the particular time slice. In this regard, embodiments of the present disclosure process the starting positions pairing set to generate target positions that locate the qubits, such that each first qubit is relocated from its starting position to a new position adjacent to a corresponding position including a second qubit with which the first qubit is paired for gating. The example process is performed for a start positions pairing set that includes all positions in [0, N), where N is the number of starting positions, and N is even. The example process is further performed for a start positions pairing set where each position pair satisfies an even-odd constraint. For example, the even-odd starting positions pairing set1306may represent the positions pairing set for the next time slice to be processed, and it satisfies the assumptions that all positions are accounted for (e.g., [0, N) where N is 12), that N is even, and that all position pairs are eo-pairs.
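The three underlying assumptions above (all positions present, an even number of positions, and every pair even-odd) can be checked as sketched below. The function name is an illustrative assumption; the disclosed embodiments may perform these checks in any of a variety of manners.

```python
def meets_preprocessing_assumptions(pairs):
    """Check the three stated preconditions on a starting positions
    pairing set: it covers all positions in [0, N), N is even, and
    every pair satisfies the even-odd constraint (illustrative helper)."""
    flat = sorted(p for pair in pairs for p in pair)
    n = len(flat)
    covers_all = flat == list(range(n))      # full: all positions in [0, N)
    n_even = n % 2 == 0                      # N is even
    all_eo = all((a % 2) != (b % 2) for a, b in pairs)  # only eo-pairs
    return covers_all and n_even and all_eo

# A hypothetical full even-odd pairing set over 12 positions passes;
# a set containing an even-even pair (0, 2) fails the eo constraint.
print(meets_preprocessing_assumptions(
    [(0, 5), (2, 1), (4, 9), (6, 3), (8, 11), (10, 7)]))  # True
print(meets_preprocessing_assumptions([(0, 2), (1, 3)]))  # False
```

When the check fails, a pairing set conversion algorithm such as those described with respect toFIGS.17A,17B, and18-25 may be applied before proceeding.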
As depicted, the even-odd start positions pairing set1306comprises a full even-odd start positions pairing set sorted in increasing order of the even start position for each pair, with each even start position being the first start position in the pair, such that the resulting even-odd start positions pairing set1306has the form {(0, o0), (2, o1), (4, o2), . . . , (N−2, oN/2−1)}, where ok is the odd start position paired with even position 2k for k in the range [0, N/2). In some contexts where one or more of these assumptions is not met, for example the start positions pairing set includes an ee-pair or an oo-pair, the start positions pairing set may be pre-processed to meet such assumptions before the described process to determine the target positions vector for the particular time slice begins, as described herein. The example process for determining a target positions vector for a particular time slice is analogous to a problem of target assignment utilizing two rows. The even positions reside in a lower row (e.g., a second row), and the odd positions reside in the upper row (e.g., a first row). The rows are manipulated with the goal of finding a common target column for each start position pair in the even-odd start positions pairing set1306. To accomplish this goal, either row may be fixed and the other may be manipulated to be in the same column as the corresponding paired even start position in the fixed row. For example purposes,FIG.15Ais depicted and described from the perspective of keeping the even start positions fixed in the lower row and manipulating the odd start positions in the upper row. It should be appreciated that the process may similarly be performed by setting the odd positions as fixed and manipulating the even positions. Each odd start position, ok, is assigned to column k of N/2 columns in the upper row. Next, even-odd transposition sort is performed on the upper row to generate a parallel swap command set for the upper row.
The generated parallel swap command set permutes the upper row such that all odd starting positions reside in the same column as their fixed even-paired counterpart in the lower row. As a factor of utilizing even-odd transposition sort, such operations may be completed within N/2 parallel swap commands for the N/2 positions within the row. Once the qubits are permuted such that the qubit pairs are in a common column across the two rows, applying the same swap operations to both rows will maintain the common-column-property, thus satisfying the constraint that the qubit pair remains in a common column with the qubit in its fixed pair position. Reversal of the swap commands applied to the upper row, for example, simply undoes its permutation by causing the upper row to move backwards along the path it followed (e.g., towards its starting position), while application of said swap commands to the lower row as well will cause the lower row to move in the same manner to maintain the common-column-property. In this regard, starting from the beginning (e.g., without sorted position assignments), an implementation may apply the first half of the parallel swap commands on the upper row and the second half of the parallel swap commands in reverse on the lower row to cause the qubit pairs to arrive at the same target column in no more than (N//2+1)//2 parallel swap commands. As such, by assigning target positions to the upper row based on the lower row starting positions, performing even-odd transposition sort on the upper row targets vector, and finding the location of the targets vector values at the mid-point of the sort, the mid-point locations may be used as the target positions for both the upper and lower rows to reduce the total number of parallel swap commands utilized to position the qubits for gating.
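The even-odd transposition sort underlying the parallel swap command generation can be sketched as follows. Representing the command set as one list of swapped index pairs per phase is an assumed encoding, since the disclosure notes the command set may be embodied in any of a myriad of manners.

```python
def eo_transposition_sort(values, start_phase=0):
    """Even-odd transposition sort; returns the sorted list plus a
    parallel swap command set with one command per phase performed.
    Each command lists the (i, i + 1) index pairs swapped in that phase."""
    v = list(values)
    commands, phase = [], start_phase
    while v != sorted(v):
        # within a phase, compared pairs are disjoint, so all swaps in
        # the phase may be performed in parallel
        swaps = [(i, i + 1)
                 for i in range(phase % 2, len(v) - 1, 2)
                 if v[i] > v[i + 1]]
        for i, j in swaps:
            v[i], v[j] = v[j], v[i]
        commands.append(swaps)
        phase += 1
    return v, commands

# Small hypothetical example: two phases suffice for [2, 0, 3, 1].
final, cmds = eo_transposition_sort([2, 0, 3, 1])
print(final)  # [0, 1, 2, 3]
print(cmds)   # [[(0, 1), (2, 3)], [(1, 2)]]
```

The `start_phase` parameter reflects the option, discussed below, of beginning the sort at either the even phase or the odd phase.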
The operations depicted and described perform an analogous process on a linear array of positions by treating the even positions as the lower row and the odd positions as the upper row. Embodiments construct a targets vector comprising slots of length N/2, which may be referred to as a “target_slots” vector indexed by slot “s.” Each slot “s” is associated with an adjacent pair of positions {2s, 2s+1}, with the lower position 2s representing the even position of an even-odd starting position pair. Given a position p, the corresponding slot may be determined by a slot determination algorithm, for example the slot determination algorithm1404where s=p//2 as depicted and described with respect toFIG.14. Some such embodiments initialize the vector “target_slots” such that: target_slots[os//2]=s for all s in the range [0, N/2). Utilizing this initialization, each odd start position is assigned the slot where its paired even start position resides. Thus, the resulting target_slots vector is initialized in a manner that pairs the various positions identified in the corresponding even-odd starting positions pairing set by positioning the odd position to the slot of its fixed even counterpart for the time slice being processed. Such embodiments further perform even-odd transposition sort on the target_slots vector to generate a parallel swap command sequence of length M, where M<=N/2. The parallel swap command sequence is embodied by a parallel swap command set that represents each swap performed in executing the even-odd transposition sort. It should be appreciated that the parallel swap command set representing the swap commands for sorting the target_slots vector may be embodied in any of a myriad of manners indicating performed left swaps and/or right swaps, as described herein. 
The resulting target_slots vector may be utilized to generate a new vector "target_slots_mid," which represents the location of each position at the mid-point for completion of the even-odd transposition sort. In this regard, embodiments may apply the first (M+1)//2 parallel swap commands to the target_slots vector to generate the target_slots_mid vector, where target_slots_mid[os//2] is the slot where odd start position os resides at the mid-point of the even-odd transposition sort. Embodiments then assign the pair of sorted start positions, for example represented by the vector resulting from sorted((2s, os)), to the pair of target positions (2*target_slots_mid[os//2], 2*target_slots_mid[os//2]+1). Such assignment represents the slot where the odd position would reside if the odd positions were sorted alone. Next, embodiments perform even-odd transposition sort on the targets vector representing the target qubits pairing set for all positions to generate the steps to sort the qubits to their proper positions for purposes of processing the particular time slice. The target_slots vector may be sorted starting from either the even phase or the odd phase as the first phase for purposes of determining the targets vector for use in determining the parallel swap command set. It should be appreciated that, in some embodiments, even-odd transposition sorts are performed utilizing each of the even phase and the odd phase as the start phase. The targets vector generated from sorting the target_slots vector beginning with the even phase may differ from the targets vector generated from sorting the target_slots vector beginning with the odd phase. However, one of the resulting targets vectors may be computationally more efficient than the other in implementation via the one-dimensional quantum computing environment (e.g., when sorted in the manner described herein as a target qubits position set for instruction compilation via a subsequent even-odd transposition sort).
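The target_slots initialization, mid-point extraction, and target position assignment described above can be sketched end to end as follows. The pairing set in the usage example is hypothetical (the actual set is depicted inFIG.13), and tracking mid-point locations through a labels permutation is an assumed implementation detail rather than the disclosed method.

```python
def eo_sort_commands(values):
    """Even-odd transposition sort recording one parallel swap
    command (a list of swapped index pairs) per phase."""
    v, commands, phase = list(values), [], 0
    while v != sorted(v):
        swaps = [(i, i + 1) for i in range(phase % 2, len(v) - 1, 2)
                 if v[i] > v[i + 1]]
        for i, j in swaps:
            v[i], v[j] = v[j], v[i]
        commands.append(swaps)
        phase += 1
    return commands

def target_positions(eo_pairs, n):
    """Assign each even-odd start position pair a target slot taken
    from the mid-point of the target_slots sort (illustrative sketch)."""
    half = n // 2
    target_slots = [0] * half
    for e, o in eo_pairs:
        target_slots[o // 2] = e // 2   # odd partner gets its even slot
    commands = eo_sort_commands(target_slots)
    m = len(commands)
    # labels[i] records which original index's entry now sits at i;
    # apply the first (M + 1) // 2 commands to reach the mid-point.
    labels = list(range(half))
    for swaps in commands[:(m + 1) // 2]:
        for i, j in swaps:
            labels[i], labels[j] = labels[j], labels[i]
    mid_slot_of = {orig: idx for idx, orig in enumerate(labels)}
    targets = [0] * n
    for e, o in eo_pairs:
        t = mid_slot_of[o // 2]         # slot occupied at the mid-point
        lo, hi = sorted((e, o))
        targets[lo], targets[hi] = 2 * t, 2 * t + 1
    return targets

# Hypothetical full even-odd pairing set over 12 positions; each pair's
# members end up adjacent within a shared slot in the resulting targets.
pairs = [(0, 5), (2, 1), (4, 9), (6, 3), (8, 11), (10, 7)]
targets = target_positions(pairs, 12)
```

The resulting targets vector is a permutation of the positions in which every paired even and odd start position shares a slot, and may then be sorted via a subsequent even-odd transposition sort to produce the algorithm swap command set.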
In this regard, embodiments may execute a first subsequent even-odd transposition sort on the first targets vector generated from a first target_slots_mid vector in phase with a first phase of sort (e.g., associated with beginning with the even sort phase), and a first number of steps in the first resulting algorithm swap command set is determined. Similarly, embodiments may execute a second subsequent even-odd transposition sort on the second targets vector generated from a second target_slots_mid vector in phase with a second phase of sort (e.g., associated with beginning with the odd sort phase), and a second number of steps in the second resulting algorithm swap command set is determined. The number of steps associated with each of the resulting algorithm swap command sets (e.g., corresponding to beginning sorting the target_slots vector in different phases of sort) may be compared to determine which contains fewer steps. Embodiments may select the resulting targets vector with the fewest steps to sort to reduce the computational resources required to sort the qubits in the one-dimensional quantum computing environment, as the expenditure of conventional computing resources to generate efficient swap algorithms is preferred as less time consuming, costly, and error prone than expenditure of quantum computing resources in the one-dimensional quantum computing environment. As illustrated in FIG. 15A, the described algorithm is performed for the example even-odd starting positions pairing set 1302 as depicted and described herein. The even positions are assigned to slots in the lower row, represented by the even slots vector 1502. The even slots vector 1502 corresponds to the six slots depicted and described with respect to the slot vector 1406, which correspond to the 12 positions of the one-dimensional quantum computing environment depicted and described with respect to FIGS. 13 and 14.
For purposes of understanding, a vector of even positions 1506 is depicted that corresponds to the even slots vector 1502. Embodiments initialize the target_slots vector to a starting configuration 1504A. The odd positions are assigned a slot from the slots vector 1406, for example based on the slot determination algorithm s=p//2, to generate the start configuration of the target_slots vector for the upper row associated with the odd positions. The resulting start configuration is then utilized to begin the even-odd transposition sort of the target_slots vector, for example at the even phase as depicted. For purposes of understanding, a vector of starting configuration positions 1508A is depicted that represents the start configuration of the slot allocations in the target_slots vector 1504A. Similarly, for purposes of understanding, each of the steps of the even-odd transposition sort performed for the target_slots vector is depicted in steps 1504B-1504E, and the corresponding steps of the even-odd transposition sort for the positions corresponding to such slots are depicted in 1508B-1508E. The embodiments permute the target_slots vector utilizing even-odd transposition sort. As depicted, the target_slots vector is fully sorted in 4 steps. At step 1504B, the target_slots vector is swapped based on the even phase to swap the slots assigned to the target_slots vector at indices 2 and 3, and to swap indices 4 and 5. As described, such indices are swapped because the value at the higher-order index is lower than the value at the lower-order index. The embodiments continue the even-odd transposition sort at an odd phase at step 1504C, where indices 1 and 2 of the target_slots vector are swapped, and indices 3 and 4 of the target_slots vector are swapped. The embodiments then continue the even-odd transposition sort at another even phase at step 1504D, where indices 0 and 1 of the target_slots vector are swapped, and indices 2 and 3 of the target_slots vector are swapped.
The algorithm then completes at the next odd phase at step 1504E, where indices 1 and 2 of the target_slots vector are swapped, and indices 3 and 4 of the target_slots vector are swapped, thus resulting in the fully sorted target_slots vector. The fully sorted target_slots vector depicted at step 1504E is arranged such that the common-column property applies to each slot of the target_slots vector for each qubit pair in the even-odd starting positions pairing set 1302. The embodiments may generate a swap command set representing the swap commands resulting from the operations of the even-odd transposition sort. In this regard, the value of target_slots[s] before sorting commenced represents the destination of slot s (e.g., the odd position represented by slot s in embodiments where the even positions are fixed) to bring it to the slot occupied by its even position counterpart designated in the even-odd starting positions pairing set 1302. Similarly, upon completion of the even-odd transposition sort for sorting the target_slots vector based on permuting the odd positions, the resulting swap command set may be applied in reverse order to the even positions to bring the qubits residing at the even positions to the slot where the corresponding paired odd position originally was located. In this regard, the mid-point step of the even-odd transposition sort may be identified to generate the target_slots_mid vector by applying only the first (M+1)//2 parallel swap commands, where M is the length of the parallel swap command sequence generated based on the completed even-odd transposition sort. The mid-point step in FIG. 15A comprises step 1504C, as indicated by the asterisk (*), which corresponds to the target_slots_mid vector having the values of [2, 0, 4, 1, 3, 5].
This target_slots_mid vector corresponds to the positions at the mid-point step indicated by 1508C, and the corresponding even positions correspond to the positions indicated by the corresponding slot in the even positions vector 1506. The target_slots_mid vector associated with the mid-point step may then be utilized to generate the final targets vector for all positions by assigning the pair of start positions sorted((2s, os)) to the pair of target positions (2*target_slots_mid[os//2], 2*target_slots_mid[os//2]+1) represented by the slot where the odd position would reside if the odd positions were sorted alone. As illustrated, the even-odd transposition sort is similarly performed for the even-odd starting positions pairing set 1302, the slot vector 1406, and the even slots vector 1502 depicted in FIG. 16A. As such, the starting configuration 1504A and the even slots vector 1502 remain the same for purposes of beginning the even-odd transposition sort. The first step, however, begins with an odd phase at step 1604B of the even-odd transposition sort. As depicted in FIG. 16A, the target_slots vector is fully sorted in 5 steps when beginning at the odd phase. At step 1604B, the target_slots vector is swapped based on the odd phase to swap the slots assigned to the target_slots vector at indices 1 and 2, resulting in the updated target_slots vector [2, 3, 4, 0, 5, 1]. As described, such indices are swapped because the value at the higher-order index is lower than the value at the lower-order index. The embodiments continue the even-odd transposition sort at an even phase at step 1604C, where indices 2 and 3 of the target_slots vector are swapped, and indices 4 and 5 of the target_slots vector are swapped, resulting in the updated target_slots vector [2, 3, 0, 4, 1, 5].
The embodiments then continue the even-odd transposition sort at another odd phase at step 1604D, where indices 1 and 2 of the target_slots vector are swapped, and indices 3 and 4 of the target_slots vector are swapped, resulting in the target_slots vector [2, 0, 3, 1, 4, 5]. The embodiments then continue the even-odd transposition sort at another even phase at step 1604E, where indices 0 and 1 of the target_slots vector are swapped, and indices 2 and 3 of the target_slots vector are swapped, resulting in the updated target_slots vector [0, 2, 1, 3, 4, 5]. The algorithm then completes at the next odd phase at step 1604F, where indices 1 and 2 of the target_slots vector are swapped, resulting in the fully sorted target_slots vector. FIG. 15B depicts an example generation of the targets vector corresponding to the target_slots_mid vector resulting from the operations depicted and described with respect to FIG. 15A. The slots vector 1406 is vertically aligned with the corresponding indices of the even slots vector 1502 and the target_slots_mid vector 1552 for purposes of description. Further as depicted, the targets vector 1554 is generated from the target_slots_mid vector 1552, which embodies the current values of the target_slots vector at the mid-point step 1504C in FIG. 15A. For purposes of explanation and understanding, target assignment is described sequentially based on the order of the slot indices at each index of the target_slots_mid vector 1552. It should be appreciated that in some implementations, target assignment is performed sequentially in any order, in parallel, simultaneously, and/or the like. At index 0 the target_slots_mid vector 1552 refers to slot 2 of the corresponding position pairs (e.g., target_slots_mid[0]==2). The position pair destined for slot 2 can be identified from the even-odd starting positions pairing set 1302, such as via lookup or calculation.
For example, since the even positions have been left fixed, embodiments may identify the pair comprising the even position corresponding to identified slot 2 as the pair (4, 1), because slot s==2 maps to even position 2s==4. Some such embodiments may perform one or more lookups to determine that even position 2s==4 resides in the start position pair (4, 1). Accordingly, the targets vector indices at 4 and 1 will be assigned the values for the positions corresponding to slot 0 (e.g., positions 0 and 1). In some embodiments, as described herein, the order of the pair is not restrictive, such that the pair (4, 1) is equivalent to the pair (1, 4). Such flexibility enables the target positions corresponding to slot 0 (e.g., positions 0 and 1) to be assigned in either order to the corresponding indices 1 and 4. To reduce the worst-case number of swaps to be performed in the subsequent even-odd transposition sort of the targets vector 1554, the lower index of the pair (4, 1) in the targets vector is assigned the lower position corresponding to slot 0, and the higher index of the pair (4, 1) is assigned the higher position corresponding to slot 0. For example, in the embodiment depicted, the targets vector at index 1 is assigned the lower position of slot 0 (e.g., targets[1]=0), and the targets vector at index 4 is assigned the higher position of slot 0 (e.g., targets[4]=1). This process is similarly performed for each index of the target_slots_mid vector 1552 to complete the target assignment in the targets vector 1554 resulting therefrom. For example, at index 1 the target_slots_mid vector 1552 refers to slot 0 of the corresponding position pairs (e.g., target_slots_mid[1]==0), which corresponds to the position pair (0, 7). The indices 0 and 7 of the targets vector 1554 are assigned the values of the positions corresponding to slot 1 such that the lower index of the position pair is assigned the lower position represented by slot 1.
Accordingly, as slot 1 indexes positions 2 and 3, the lower targets index 0 is assigned the value 2 (e.g., targets[0]=2) and the higher targets index 7 is assigned the value 3 (e.g., targets[7]=3). For purposes of brevity, subsequent assignment for slots 2, 3, 4, and 5 follows the same operations, and thus repeated description is omitted. In some embodiments, the even-odd transposition sort of the target_slots vector is performed starting at both the even phase and the odd phase to determine which of the starting phases leads to completion of the even-odd transposition sort in fewer steps. FIG. 16A depicts steps for performing even-odd transposition sort of the example even-odd starting positions pairing set starting from an odd phase, in accordance with at least one example embodiment of the present disclosure. The even-odd transposition sort is performed in a manner similar to that described herein, instead starting with swapping pairs with a lower odd index. The steps 1604B-1604F are performed, resulting in a second mid-point vector [2, 0, 3, 1, 4, 5] generated at the third step 1604D. As depicted, the target_slots_mid vector resulting from the even-odd transposition sort beginning at the odd phase in FIG. 16A differs from the target_slots_mid vector resulting from the even-odd transposition sort beginning at the even phase in FIG. 15A. It should be appreciated that the different target_slots_mid vector resulting from the even-odd transposition sort beginning in the odd phase similarly results in a different targets vector embodying a target qubit position set for further processing. For example, FIG. 16B depicts an example visualization of target assignment of the target_slots_mid vector generated at the mid-point step of the even-odd transposition sort beginning at the odd phase as depicted and described in FIG. 16A.
As depicted in FIG. 16B, the target_slots_mid vector 1652 results at the mid-point step of the even-odd transposition sort of the target_slots vector as depicted and described with respect to FIG. 16A. The target_slots_mid vector 1652 begins in the same manner as the target_slots_mid vector 1552 depicted and described with respect to FIG. 15B, with both target_slots_mid vectors referring to slot 2 at index 0 and referring to slot 0 at index 1. Accordingly, both targets vector 1554 and targets vector 1654 are assigned positions 0, 1, 2, and 3 at the same indices. However, the target_slots_mid vector 1552 differs from the target_slots_mid vector 1652 at index 2, as the target_slots_mid vector 1552 refers to slot 4 and the target_slots_mid vector 1652 refers to slot 3. Accordingly, as depicted in FIG. 15B, the indices of the targets vector 1554 assigned target positions 4 and 5 (which correspond to slot 2) are the indices of the positions in the pair (8, 3), which corresponds to slot 4 referred to in the target_slots_mid vector 1552 at index 2. Specifically, the lower index of the pair is assigned the lower position corresponding to slot 2, thus targets vector index 3 is assigned 4 and targets vector index 8 is assigned 5. Alternatively, as depicted in FIG. 16B, the indices of the targets vector 1654 assigned target positions 4 and 5 are the indices of the positions in the pair (6, 5), which corresponds to slot 3 referred to in the target_slots_mid vector 1652 at index 2. Specifically, the lower index of the pair is assigned the lower position corresponding to slot 2, thus targets vector index 5 is assigned 4 and targets vector index 6 is assigned 5. In this regard, the resulting targets vector 1654 that corresponds to the odd phase differs from the resulting targets vector 1554 that corresponds to the even phase. Such targets vectors may similarly take a different number of steps to sort via even-odd transposition sort.
Thus, embodiments of the present disclosure may perform a subsequent even-odd transposition sort on each of the targets vector 1554 and the targets vector 1654, with the corresponding algorithm swap command sets for each determined, and with each new phase of sort in the algorithm swap command set representing a step for purposes of comparison. Such embodiments may subsequently compare the number of steps required to sort each of the targets vectors, and select the targets vector corresponding to the lesser number of steps to sort as the target qubits position set for further processing, for example as described with respect to FIGS. 3-11. The targets vector requiring fewer steps to sort is selected to reduce the required time and amount of quantum computing resources expended to perform the swaps required to position the qubits from their initial positions to their target positions. Embodiments may perform even-odd transposition sort of the target_slots vector starting from the odd phase upon completion of the even-odd transposition sort starting from the even phase described with respect to FIGS. 15A and 15B. Alternatively or additionally, in some embodiments, even-odd transposition sort starting with the odd phase is performed first, and subsequently the even-odd transposition sort starting with the even phase is performed as described with respect to FIGS. 15A and 15B. Embodiments may compare the number of steps for the even-odd transposition sort beginning with the even phase to the number of steps for the even-odd transposition sort beginning with the odd phase. The implementation that resulted in the fewer number of steps may be selected, and the target_slots_mid vector for the selected implementation may be determined and/or further processed as described herein. In the example depicted, the sort beginning with an even phase resulted in 4 steps to fully sort the target_slots vector, whereas the sort beginning with an odd phase resulted in 5 steps to fully sort the target_slots vector.
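For purposes of illustration only, the phase comparison and selection described above may be sketched as follows; the function names are illustrative assumptions, and a tie is resolved here in favor of the even phase purely as an arbitrary convention:

```python
def odd_even_sort_len(vec, first_phase):
    """Number of phases needed to fully sort `vec` when beginning at
    `first_phase` (0 for the even phase, 1 for the odd phase)."""
    v, count, phase = list(vec), 0, first_phase
    while v != sorted(v):
        for i in range(phase, len(v) - 1, 2):
            if v[i] > v[i + 1]:
                v[i], v[i + 1] = v[i + 1], v[i]
        count += 1
        phase ^= 1
    return count

def select_first_phase(target_slots):
    """Sort from both starting phases and select the phase requiring fewer steps."""
    steps = {phase: odd_even_sort_len(target_slots, phase) for phase in (0, 1)}
    best = min(steps, key=steps.get)  # ties resolve to the even phase (0)
    return best, steps[best]
```

Applied to the example starting configuration [2, 4, 3, 0, 5, 1], this selects the even phase with 4 steps over the odd phase with 5 steps, matching the comparison of FIGS. 15A and 16A.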
In this example context, such embodiments select the even-odd transposition sort beginning at the even phase for further processing, and thus will determine the target_slots_mid vector based on the swap commands generated from performing the even-odd transposition sort beginning with the even phase. In circumstances where the even-odd transposition sort beginning with the odd phase resulted in fewer steps (e.g., where beginning with the even phase resulted in 6 steps, for example based on a different even-odd starting positions pairing set), the target_slots_mid vector would be determined from the mid-point step represented by step 1604D of the even-odd transposition sort beginning with the odd phase as depicted in FIG. 16A. The target_slots_mid vector determined from the selected implementation of the even-odd transposition sort may then be utilized to determine the targets vector corresponding to the final target positions for each starting position as described herein with respect to FIG. 15B, and subsequently processed as the target qubit positions set as described herein with respect to FIGS. 5-11. Selecting the implementation that corresponds to the fewer number of steps minimizes the amount of computational resources needed to complete the swapping of qubits to their target positions in the one-dimensional quantum computing environment.

Example Operations for Pre-Processing a Starting Position Pairing Set for Target Assignment

As described herein, a starting position pairing set may satisfy various underlying assumptions before being processed for target assignment in a target qubit position set. It should be appreciated that such underlying assumptions each embody a constraint that must be met before further target assignment can be performed for a particular starting positions set.
In this regard, the apparatus 200, for example, may retrieve, receive, or otherwise identify a starting positions set and perform one or more checks for such assumptions and/or initiate pre-processing to ensure such assumptions are met in serial or in parallel with checks and/or pre-processing for other assumptions. In some embodiments, the starting positions pairing set to be processed for target assignment is subject to an evenness constraint requiring the number of positions in the starting positions pairing set to be even. In some such embodiments, the starting positions pairing set is processed to convert the set to include an even number of positions. In a circumstance where the original starting positions pairing set comprises an odd number of positions, such embodiments generate a new arbitrary position acting as a vacancy position. The new, arbitrary position may be sorted in the same manner as other positions, and the resulting targets vector would have an assignment to the vacancy and provide the vacancy position as an available target for other starting positions or itself. In some embodiments, the new arbitrary position is appended to the list of available positions. In other embodiments, the new arbitrary position is prepended to the list of available positions, and the list of positions is shifted to have minimum index 0. In yet other embodiments, the new arbitrary position is inserted in the middle of the list of available starting positions, with the portion of the starting positions to the right of the insertion point shifted such that the list of starting positions occupies the index range [0, N), where N is even. In some embodiments, the targets vector may remove the new arbitrary position associated with a vacancy to generate a targets vector valid for an odd set of positions. In other embodiments, the targets vector may accept the vacancy as-is without removal.
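For purposes of illustration only, the appending variant for satisfying the evenness constraint may be sketched as follows; the function name and the particular choice of vacancy value are illustrative assumptions:

```python
def ensure_even_positions(positions):
    """Satisfy the evenness constraint by appending an arbitrary vacancy
    position when the number of positions is odd (the appending variant)."""
    positions = list(positions)
    if len(positions) % 2 == 1:
        # Hypothetical choice of vacancy: one past the current maximum position.
        positions.append(max(positions) + 1)
    return positions
```

The vacancy participates in sorting like any other position, and an embodiment may later drop its entry from the targets vector or retain it as-is, as described above.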
Any and all such combinations generated and/or otherwise performed by such embodiments result in an updated starting positions pairing set that validly satisfies the evenness constraint on the total number of positions. In some embodiments, the starting positions pairing set to be processed for target assignment is subject to a fullness constraint requiring the pairs to include all positions in a set of available positions. The fullness constraint further requires evenness as described above, and thus a starting positions pairing set may be processed to address the evenness constraint before addressing the fullness constraint. In some embodiments, an unused positions set is identified from the starting positions pairing set that includes all positions not represented in the starting positions pairing set. The unused positions set is sorted such that the positions therein are ordered. The unused positions are subsequently paired in sorted order (e.g., such that the first unused position at index 0 of the sorted unused positions is paired with the second unused position at index 1 of the sorted unused positions, the third unused position at index 2 of the sorted unused positions is paired with the fourth unused position at index 3 of the sorted unused positions, and so on) to generate a new positions pair set based on the ordered unused positions. The new positions pair set comprises position pairs utilizing the unused positions in a manner that ensures the required travel distance of each position pair is minimized. The new positions pair set is added to the original starting positions pairing set to generate an updated starting positions pairing set, and the updated starting positions pairing set is subsequently processed irrespective of which pairs were previously considered unused.
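For purposes of illustration only, the fullness pre-processing described above may be sketched as follows; the function name is an illustrative assumption:

```python
def ensure_fullness(pairs, n_positions):
    """Satisfy the fullness constraint: pair the unused positions in sorted
    order and add the resulting pairs to the starting positions pairing set."""
    used = {p for pair in pairs for p in pair}
    unused = sorted(p for p in range(n_positions) if p not in used)
    # Adjacent entries of the sorted unused positions are paired together,
    # keeping the travel distance of each new pair small.
    new_pairs = [(unused[i], unused[i + 1]) for i in range(0, len(unused), 2)]
    return list(pairs) + new_pairs
```

For example, with a single pair (0, 3) over six positions, the unused positions {1, 2, 4, 5} yield the new pairs (1, 2) and (4, 5).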
In some embodiments, the starting positions pairing set may similarly be pre-processed to ensure that an even-odd only constraint requiring only even-odd position pairs exist in the starting positions pairing set is satisfied. In this regard, some embodiments (for example the apparatus 200) initiate such pre-processing operations in response to determining a starting positions pairing set includes at least one ee-pair and/or oo-pair. FIGS. 17A, 17B, and 18-25 depict example data and operations associated with pre-processing a starting positions pairing set to satisfy an even-odd only constraint. The even-odd only constraint may rely on underlying assumptions embodied by the evenness and fullness constraints described herein. Thus, a starting positions pairing set not satisfying both of such constraints may be further pre-processed to ensure such constraints are met before further pre-processing to satisfy the even-odd only constraint. FIG. 17A depicts an example process to partition a positions pairing set (e.g., a starting positions pairing set) for conversion to an even-odd positions pairing set (e.g., an even-odd starting positions pairing set), in accordance with at least one example embodiment of the present disclosure. An example starting positions pairing set 1752 is depicted. The example starting positions pairing set 1752 satisfies other underlying assumptions (e.g., is full and even); however, it includes multiple ee-pairs and oo-pairs. In some embodiments, for example via the apparatus 200, the starting positions pairing set 1752 is partitioned into an ee-pair set 1754A, an oo-pair set 1754B, and an eo-pair set 1754C. Such embodiments generate the ee-pair set 1754A comprising only the ee-pairs partitioned from the starting positions pairing set 1752. Similarly, such embodiments generate the oo-pair set 1754B comprising only the oo-pairs partitioned from the starting positions pairing set 1752.
Similarly, such embodiments generate the eo-pair set 1754C comprising only the eo-pairs partitioned from the starting positions pairing set 1752. Some embodiments perform a check, for each positions pair in the starting positions pairing set 1752, of each position in the pair to determine whether the positions pair embodies an ee-pair, an oo-pair, or an eo-pair. Such embodiments partition the positions pair into the proper set of the ee-pair set 1754A, oo-pair set 1754B, or eo-pair set 1754C accordingly based on the results of the check (e.g., ee-pairs in the ee-pair set 1754A, and the like). Upon performing such partitioning for each pair in the starting positions pairing set 1752, the ee-pair set 1754A, oo-pair set 1754B, and eo-pair set 1754C comprise all positions pairs to be further processed. It should be appreciated that a starting positions pairing set 1752 satisfying other underlying assumptions, such as the fullness and evenness constraints, will include the same number of ee-pairs and oo-pairs, such that the ee-pair set 1754A and the oo-pair set 1754B will include the same number of position pairs therein. Each ee-pair has a path to an oo-pair formed out of adjacent oe-pairs, where the sequence of oe-pairs to reach the oo-pair may have zero length. In general, this path has the form 1756, which may include any number of adjacent oe-pairs from the ee-pair to the oo-pair. In the depicted format, the second element of each pair is even and is adjacent to the first element of the successive pair. The ee-pair may be adjacent to the oo-pair directly, and in this case the sequence of oe-pairs has zero length. An example path with oe-pairs is (2, 6)→(7, 24)→(25, 4)→(5, 12)→(13, 10)→(11, 3). In this example, the first pair (2, 6) is an ee-pair; the second element of the first pair (6) is even and adjacent to the first element of the second pair (7), which is odd.
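For purposes of illustration only, the partitioning into ee-pair, oo-pair, and eo-pair sets described above may be sketched as follows; the function name is an illustrative assumption, and the test below mixes ee-pairs and oo-pairs named in the disclosure with a hypothetical eo-pair:

```python
def partition_pairs(pairs):
    """Partition a positions pairing set into ee-, oo-, and eo-pair sets
    based on the parity of each position in each pair."""
    ee, oo, eo = [], [], []
    for pair in pairs:
        a, b = pair
        if a % 2 == 0 and b % 2 == 0:
            ee.append(pair)       # both positions even
        elif a % 2 == 1 and b % 2 == 1:
            oo.append(pair)       # both positions odd
        else:
            eo.append(pair)       # one even position, one odd position
    return ee, oo, eo
```

For a full, even starting positions pairing set, the resulting ee-pair and oo-pair sets have equal sizes, consistent with the property noted above.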
Continuing, the second element (24) of the pair (7, 24) is adjacent to the first element (25) of the successive pair (25, 4), and the second element (4) of the pair (25, 4) is adjacent to the first element (5) of the successive pair (5, 12), and so on. The last pair (11, 3) is an oo-pair, and thus swapping with an even position from the preceding pair results in a new oe-pair. In this regard, performing a parallel swap of the second element of every pair with the first element of its successive pair in the path results in a transformed pair set of {(2, 7), (6, 25), (24, 5), (4, 13), (12, 11), (10, 3)}, which comprises entirely eo-pairs. An example where the ee-pair is adjacent to the oo-pair directly comprises the path (6, 2)→(3, 11). Such a path includes no oe-pairs, and thus has an oe-pair path of zero length. In this regard, swapping the second element of the first pair (2) with the first element of the second pair (3), which are adjacent, results in the transformed pair set {(6, 3), (2, 11)}, which comprises entirely eo-pairs. Such paths are formed out of adjacent pairs. In some embodiments, the adjacent pairs are restricted to be consistent with the first phase of the even-odd transposition sort to be used to generate the algorithm swap command set comprising parallel swap commands for the target qubit position set to be processed. For example, embodiments may determine the first phase of the even-odd transposition sort to be used to generate the algorithm swap command set (e.g., based on user input, hardware configuration of the one-dimensional quantum computing environment, and/or the like). The first phase of sort being even corresponds to a parallel swap command that swaps adjacent positions where the lower index is even. In such circumstances where the first phase of the even-odd transposition sort is even, the even position (e) can only swap with the odd position o where o=e+1.
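For purposes of illustration only, the path-based transformation described above (swapping the second element of every pair in the path with the first element of its successive pair) may be sketched as follows; the function name is an illustrative assumption:

```python
def apply_path_swaps(path):
    """Swap the second element of each pair with the first element of its
    successive pair, converting an ee-to-oo path into entirely eo-pairs."""
    pairs = [list(p) for p in path]
    for k in range(len(pairs) - 1):
        pairs[k][1], pairs[k + 1][0] = pairs[k + 1][0], pairs[k][1]
    return [tuple(p) for p in pairs]
```

Applied to the example path (2, 6)→(7, 24)→(25, 4)→(5, 12)→(13, 10)→(11, 3), this yields the transformed pair set {(2, 7), (6, 25), (24, 5), (4, 13), (12, 11), (10, 3)} described above, and the zero-length path (6, 2)→(3, 11) yields {(6, 3), (2, 11)}.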
Similarly, in circumstances where the first phase of sort is odd, even position e can only be swapped with odd position o where o=e−1. Some such embodiments construct paths from each pair of positions from the bipartitioned subsets (e.g., ee-pair to oo-pair) that are consistent with the constraint based on the first phase of sort. In this regard, it should be appreciated that in circumstances where the first phase of sort is even, the pairs adjacent to the ee-pair (e1, e2) that satisfy the restriction imposed by the first phase of sort are the pairs having odd elements e1+1 and e2+1, and accordingly there exist only two such pairs. Similarly, it should be appreciated that in a circumstance where the first phase of sort is odd, the allowable pairs adjacent to the ee-pair (e1, e2) are the pairs having odd elements e1−1 and e2−1. In such circumstances, two such pairs satisfying this constraint exist if neither e1 nor e2 is zero. However, in circumstances where either e1 or e2 is zero, only one such pair exists that satisfies the applicable constraint. Accordingly, there are two paths out of every ee-pair to a corresponding oo-pair in a circumstance where the first phase of sort is even, and one or two paths in a circumstance where the first phase of sort is odd. Similarly, it should be appreciated that there are two paths into every oo-pair if the first phase of sort is even, and one or two paths in a circumstance where the first phase of sort is odd. Each intermediate oe-pair in a path from an ee-pair to an oo-pair has only a single input and a single output (e.g., the preceding adjacent pair of a first element and the successive adjacent pair of a second element). In addition to oe-pairs that serve as intermediary pairs in any path from an ee-pair to an oo-pair, one or more oe-pairs may form a closed cycle of the form oe1→oe2→ . . . →oeN, where oeN is adjacent to oe1.
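For purposes of illustration only, tracing a chain from an ee-pair through intermediate oe-pairs to an oo-pair under the adjacency restriction described above may be sketched as follows; the function name is an illustrative assumption, and the sketch follows the single outgoing link selected by the ee-pair's second element:

```python
def trace_path(ee_pair, pairs, link=1):
    """Trace a path from an ee-pair through intermediate oe-pairs to an
    oo-pair. link=1 corresponds to an even first phase of sort (even position
    e is adjacent only to odd position e+1); link=-1 corresponds to an odd
    first phase (even position e is adjacent only to odd position e-1)."""
    by_pos = {p: tuple(pair) for pair in pairs for p in pair}
    path = [tuple(ee_pair)]
    e = ee_pair[1]  # the second even element selects the outgoing link to follow
    while True:
        o = e + link                     # adjacent odd position in the next pair
        nxt = by_pos[o]
        other = nxt[0] if nxt[1] == o else nxt[1]
        path.append((o, other))
        if other % 2 == 1:               # the next pair is an oo-pair; path complete
            return path
        e = other                        # continue through the intermediate oe-pair
```

Applied to the example pairs of the disclosure, tracing from the ee-pair (2, 6) with an even first phase reproduces the path (2, 6)→(7, 24)→(25, 4)→(5, 12)→(13, 10)→(11, 3), and tracing from (6, 2) over {(6, 2), (3, 11)} reproduces the zero-length oe-pair path (6, 2)→(3, 11).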
In a circumstance where the first phase of sort is odd, an oe-pair containing the even position (e) where e=0 or the odd position (o) where o=Q−1, where Q is the number of positions, is excluded from either the ee-to-oo paths or the closed cycles of oe-pairs. The ee-pair set 1754A is utilized to determine a set of pair-disjoint paths from such ee-pairs to the oo-pairs present in the oo-pair set 1754B. Given the ee-pair set 1754A including N total ee-pairs, embodiments determine a set of N pair-disjoint paths from such ee-pairs to the oo-pairs in the oo-pair set 1754B. The paths are used to form a single-step parallel swap command by swapping the second element of every pair in the path with the first element of the successive pair, so long as one exists. By forming a single-step parallel swap command that accomplishes such a transformation, the starting positions pairing set 1752 may be converted to include entirely eo-pairs in a minimized number of algorithmic steps, thus reducing the time, amount of quantum computing resources, and cost for performing such a conversion in the one-dimensional quantum computing environment. In some embodiments, a graph is formed to identify and/or analyze the paths between the positions of the first bipartition subset and the positions of the second bipartition subset (e.g., ee-pairs and oo-pairs). In this regard, embodiments may identify the paths between ee-pairs and oo-pairs via the graph, and select any such paths to generate a parallel swap command that converts the starting positions pairing set to satisfy the constraint of having each start position pair with a first element from a first subset of the bipartition of all positions and a second element from a second subset of the bipartition of all positions. In some embodiments, any of a myriad of path analysis and/or selection algorithms may be utilized to select such paths arbitrarily, randomly, and/or the like.
Additionally or alternatively, some such embodiments identify and/or process paths in the generated graph to determine the resource costs and/or otherwise the efficiency advantages of each path for converting from the ee-pairs and oo-pairs to eo-pairs, such that the paths resulting in efficiency improvements (e.g., minimization of the worst-case scenario, or any reduction in the worst-case scenario or average scenario) may be selected. Based on a starting positions pairing set, which may include all ee-pairs, all oo-pairs, and zero or more eo-pairs that are adjacent and swappable, embodiments may convert an arbitrary pair set to one that satisfies the constraint that each element of a start position pair be from a different subset of the bipartition, for example the even-odd only constraint (e.g., a set that comprises only eo-pairs). For example, some such embodiments process the various paths utilizing efficient shortest path algorithms, such as a modified version of Suurballe's node-disjoint total shortest path algorithm and/or similar algorithms, to identify the paths that reduce the worst-case scenario of efficiency for converting from ee-pairs and oo-pairs to all eo-pairs when considering all ee-pairs. It should be appreciated that the modified Suurballe's algorithm embodies one example implementation of such a methodology for identifying chains of start position pairs (e.g., paths), and the histogram of distances described herein is one of many cost metrics that may be usable. Other embodiments implement other graph theory algorithms and/or modified versions of graph theory algorithms that identify paths from the constructed graph. The modified Suurballe's algorithm may be utilized in circumstances to minimize the worst-case distance associated with such path traversals.
In other circumstances, an alternative path analysis and/or path selection algorithm may be implemented, for example one that selects the first path(s) identified, selects an arbitrary or otherwise random path, selects a shortest path identified based on another weighting metric, and/or the like. For example, in some embodiments, ease of implementation may be prioritized while reducing overall worst-case as opposed to minimizing the worst-case, and thus an alternative algorithm for analyzing paths may be selected. FIG.17Bdepicts an example weighted, directional graph generated corresponding to an example starting positions pairing set in accordance with at least one example embodiment of the present disclosure. Specifically,FIG.17Bdepicts an example weighted, directional graph1700generated corresponding to the starting positions pairing set1752depicted and described with respect toFIG.17A. Some such embodiments utilize the ee-pairs set1754A, the oo-pairs set1754B, and the eo-pairs set1754C to generate the weighted, directional graph1700(“graph1700”). The graph1700comprises two nodes for each ee-pair in the ee-pairs set1754A, specifically an input node and an output node. As depicted, for example, the ee-pair (2, 6) is associated with ee-input node1704A and ee-output node1704B, ee-pair (8, 22) is associated with ee-input node1704C and ee-output node1704D, and ee-pair (14, 20) is associated with ee-input node1704E and ee-output node1704F. Similarly, the graph1700comprises two nodes for each oo-pair in the oo-pairs set1754B. As depicted, for example, the oo-pair (1, 9) is associated with oo-input node1706A and oo-output node1706B, oo-pair (3, 11) is associated with oo-input node1706C and oo-output node1706D, and the oo-pair (21, 23) is associated with the oo-input node1706E and the oo-output node1706F.
Embodiments generate the graph1700comprising zero-weight directed edges from each input node to the corresponding output node, for example from input node1704A to1704B,1706A to1706B,1704C to1704D,1706C to1706D, and the like. The graph1700is further generated comprising a single source node depicted as SRC node1702A (“src1702A”) and a single target node depicted as TAR node1702B (“tar1702B”). Src1702A is associated with a zero-weight directed edge to each of the ee-input nodes1704A,1704C, and1704E corresponding to one of the ee-pairs. Similarly, each oo-output node1706B,1706D, and1706F corresponding to one of the oo-pairs is associated with a zero-weight directed edge to tar1702B. For each ee-output node1704B,1704D, and1704F, one to two paths to oo-nodes exist. Similarly, for each oo-input node1706A,1706C, and1706E, one to two incoming paths from the ee-output nodes1704B,1704D, and1704F exist. In some embodiments, for paths having zero eo-nodes in between ee-pairs and oo-pairs, a single weighted directed edge from the ee-output node to the oo-input node is generated that represents the total cost of traversing the path from the ee-pair to the oo-pair. For example, ee-output node1704B associated with the ee-pair (2, 6) is directly connected via a single weighted edge to oo-input node1706A associated with the oo-pair (3, 11), as such pairs include immediately adjacent indices. Alternatively or additionally, in some embodiments, for paths having one or more oe-nodes in between ee-pairs and oo-pairs, a weighted directed edge is created from the ee-output node to the oe-node associated with the first oe-pair in the chain of one or more oe-pairs, where the weighted edge is assigned the weight for all nodes traversed in the chain of one or more oe-pairs to reach a corresponding oo-pair. A zero-weighted edge then is created from the oe-node associated with the first oe-pair to the oo-input node corresponding to the oo-pair swappable with the initial ee-pair.
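For illustration, the zero-weight skeleton of such a graph (src node, tar node, and an input/output node per ee-pair and oo-pair) may be sketched as follows; the node naming scheme and adjacency-dict representation are illustrative assumptions, and the weighted ee-output→oo-input edges would be added separately:

```python
def build_graph_skeleton(ee_pairs, oo_pairs):
    """Build the zero-weight portion of the directed graph.
    Weights are difference histograms; {} denotes the zero weight."""
    graph = {"src": {}, "tar": {}}
    for p in ee_pairs:
        graph["src"][("ee_in", p)] = {}            # src -> ee-input, zero weight
        graph[("ee_in", p)] = {("ee_out", p): {}}  # input -> output, zero weight
        graph[("ee_out", p)] = {}                  # weighted edges attach here
    for p in oo_pairs:
        graph[("oo_in", p)] = {("oo_out", p): {}}  # input -> output, zero weight
        graph[("oo_out", p)] = {"tar": {}}         # oo-output -> tar, zero weight
    return graph
```

A caller would then attach the weighted edges from each ee-output node toward the reachable oo-input nodes (directly or through oe-nodes) before running the shortest-path processing.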
For example, node1704B associated with the ee-pair (2, 6) is connected to node1708A associated with the oe-pair (7, 24). In this path, subsequent intermediate oe-nodes must be traversed until a corresponding oo-node is reached. For example, embodiments may determine that intermediate node1710A must be traversed, corresponding to swapping the oe-pair (25, 4) with (7, 24); subsequently intermediate node1710B must be traversed, corresponding to swapping the oe-pair (5, 12) with the oe-pair (25, 4); and subsequently intermediate node1710C must be traversed, corresponding to swapping the oe-pair (13, 10) with the oe-pair (5, 12), until an oe-pair is reached that may be swapped with one of the oo-pairs, namely with the oo-pair (3, 11) corresponding to oo-input node1706C. In this regard, in some such embodiments the intermediate nodes1710A-1710C may be optional, and instead the weighted edge between the ee-output node1704B and the intermediate node1708A corresponding to the first oe-pair may be weighted as w1where w1includes the weights for each subsequent portion of the path, further allowing the intermediate node1708A to be directly connected with the oo-input node1706C. In yet other embodiments, the individual intermediate nodes1710A-1710C are maintained in the graph independently and their individual weights are similarly maintained independently. In some embodiments, for paths with a sequence of one or more intermediate oe-pairs, the first oe-pair is included in the path with a weighted directed edge from the ee-output node to the oe-node, with a corresponding weight representing the total path weight from ee-to-oo pair. In this regard, the weight from the ee-node to the oe-node may be generated from the weights of all subsequent oe-nodes. In other embodiments, the graph includes the full path intact, including all intermediate nodes connected by weighted edges comprising the weight for each individual node, as described herein.
The graph1700further comprises a sub-graph comprising nodes forming a closed cycle path. As depicted, the graph1700comprises node1712A corresponding to oe-pair (27, 16), node1712B corresponding to oe-pair (17, 18), and node1712C corresponding to oe-pair (19, 26). Such nodes are each associated with a directed edge that connects the node to the node associated with the adjacent position in the swap sequence. Some embodiments process such closed cycle paths independently from the remainder of the graph1700, for example as described herein with respect toFIGS.24and25. In this regard, it should be appreciated that the pairs corresponding to the nodes of the closed cycle path are not utilized in the paths from any given ee-pair to any given oo-pair. In some embodiments, the non-zero weights from an ee-node to the first path node represent the total cost of the swap of the even position in all pairs along the path with its adjacent odd position in the next pair in the path. The total cost is measured as a differential histogram of distances of the pairs after the swaps are initiated relative to the pairs before the swaps are initiated. In this regard, given a set of pairs {(pa,1, pb,1), (pa,2, pb,2), . . . , (pa,N, pb,N)}, embodiments determine the histogram of distances as the number of times each distance abs(pa,k−pb,k) occurs for each k in the range {1, 2, . . . N}. FIG.18depicts example histogram of distances weight calculations for an example path for traversing through an example graph, in accordance with at least one example embodiment of the present disclosure. Specifically,FIG.18depicts calculations of histograms of distances representing the weights of an example path1800of the graph1700. The path comprises nodes1702A,1704A,1704B,1708A,1710A,1710B,1710C,1706C,1706D, and1702B, which corresponds to the set of pairs1802A, comprising {(2,6), (7,24), (25,4), (5,12), (13,10), (11,3)}.
The distance for each segment of the path is determined by the absolute value of the difference between the two positions in the pair, or in other words abs(p1−p2) where p1 is the first position in the pair and p2 is the second position in the pair. Thus, for the set of pairs1802A, the corresponding distance calculations are indicated as original sw1to original sw6respectively in the set of distance calculations1804A. As depicted, the set of pairs1802A is associated with the set of distance calculations1804A comprising {abs(2−6), abs(7−24), abs(25−4), abs(5−12), abs(13−10), abs(11−3)} respectively, yielding the values {4, 17, 21, 7, 3, 8}. The corresponding histogram of distances1806A is generated by mapping each occurrence of a particular value in the set of distance calculations1804A to the histogram, thus yielding {3:1, 4:1, 7:1, 8:1, 17:1, 21:1} where each element “x:y” represents the occurrence of the value x in the set of distance calculations1804A for y number of times (e.g., the value 3 occurs one time, the value 4 occurs one time, the value 7 occurs one time, the value 8 occurs one time, the value 17 occurs one time, and the value 21 occurs one time). The histogram of distances1806A embodies a histogram of distances for the original pairs in the set1802A, and may be referred to as h0. Some embodiments subsequently generate a histogram of distances for the position pairs embodying the path after swaps of the pairs have been performed. As depicted, the swapped set of pairs is represented by1802B, comprising {(2, 7), (6, 25), (24, 5), (4, 13), (12, 11), (10, 3)}. The distance for each segment of the swapped path is again determined using the absolute values of the differences between the two positions in the swapped pair. Thus, for the set of swapped pairs1802B, the corresponding distance calculations are indicated as swapped sw1 to swapped sw6 respectively in the set of distance calculations1804B for the set of swapped pairs1802B.
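The path swap rule (the second element of each pair swaps with the first element of the successive pair) and the histogram-of-distances computation may be sketched in Python as follows; this is a non-limiting sketch with illustrative function names, and it reproduces the example values above:

```python
from collections import Counter

def swap_along_path(pairs):
    """Swap the second element of every pair with the first element of
    the successive pair, applied sequentially along the path."""
    out = [list(p) for p in pairs]
    for k in range(len(out) - 1):
        out[k][1], out[k + 1][0] = out[k + 1][0], out[k][1]
    return [tuple(p) for p in out]

def histogram_of_distances(pairs):
    """Map each distance abs(p1 - p2) to its number of occurrences."""
    return dict(Counter(abs(a - b) for a, b in pairs))
```

Applied to the original set {(2,6), (7,24), (25,4), (5,12), (13,10), (11,3)}, these functions yield the swapped set and the histograms discussed with respect to FIG.18.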
As depicted, the set of swapped pairs1802B is associated with the set of distance calculations1804B comprising {abs(2−7), abs(6−25), abs(24−5), abs(4−13), abs(12−11), abs(10−3)} respectively, yielding the values {5, 19, 19, 9, 1, 7}. The corresponding histogram of distances1806B is generated by mapping each occurrence of a particular value in the set of distance calculations1804B for the set of swapped pairs1802B to the histogram, thus yielding {19:2, 9:1, 7:1, 5:1, 1:1} (e.g., the value 19 occurs 2 times, the value 9 occurs 1 time, the value 7 occurs 1 time, the value 5 occurs 1 time, and the value 1 occurs 1 time). Utilizing the histogram of distances for each of the sets of pairs1802A and1802B, the resulting distances may be compared between the set of original position pairs1802A and the set of swapped position pairs1802B. Each distance represents a worst-case measure of efficiency for swapping the positions of the pairs into adjacency. By comparing the histogram of distances for a given path, the worst-case scenarios of each path may be directly identified and/or compared. For example, the histogram of distances1806A includes a largest (e.g., worst-case) distance of 21 with a second worst-case of 17, whereas the histogram of distances1806B includes a largest (e.g., worst-case) distance of 19, with a second worst-case of 19 again. In this regard, the original set of position pairs1802A presents a worst-case scenario that is worse than the corresponding worst-case scenario for the swapped set of position pairs1802B. The histogram of distances1806A for the set of original position pairs1802A (e.g., before any swap) and the histogram of distances1806B for the set of swapped position pairs1802B (e.g., after the swaps) may be utilized to determine a differential histogram of distances relative to the transformation from original to swapped position pairs.
For example, the differential histogram of distances1808corresponding to the path1800is determinable by subtracting h0 from h1 (e.g., differential histogram of distances1808=histogram of distances1806B corresponding to the set of swapped position pairs1802B−histogram of distances1806A corresponding to the set of original position pairs1802A). As depicted, the differential histogram of distances1808resulting therefrom comprises {21:−1, 19:2, 17:−1, 9:1, 8:−1, 5:1, 4:−1, 3:−1, 1:1}. The differential histogram of distances represents the relative improvement to the worst-case scenario for performing even-odd transposition sort (in a circumstance where the differential histogram of distances includes a maximum distance value that is counted a negative number of times), or represents the relative cost to the worst-case scenario for performing even-odd transposition sort (in a circumstance where the differential histogram of distances includes a maximum distance value that is counted a positive number of times), by swapping the original position pairs in the manner described by the path. In this regard, histograms of positive integer values (e.g., positive distances) are comparable and orderable by comparing largest values first using the following procedure. Given a histogram h, the number of times a value v is counted is given by h[v]. If v does not exist in the histogram (e.g., comprising the map), then h[v]==0. No value v exists in the map having h[v]==0, and in some embodiments such values are removed from the map. Subsequently, in some embodiments, the ordering operator<(“less than”) is defined on two histograms h1 and h2 such that h1<h2 is true if, and only if, the following procedure yields true:
1. Determine the max values v1 and v2 such that h1[v1]!=0 and h2[v2]!=0 (a!=b means a is not equal to b)
2. If v1 does not exist and v2 does not exist, return false
3. If v2 does not exist, return true if h1[v1]<0 and return false otherwise
4. If v1 does not exist, return true if h2[v2]>0 and return false otherwise
5. If v1<v2, return true if h2[v2]>0 and return false otherwise
6. If v1>v2, return true if h1[v1]<0 and return false otherwise
7. If v1==v2, return true if h1[v1]<h2[v2] and return false if h1[v1]>h2[v2]
8. If v1==v2==0, return false
9. Update v1 and v2 to be the values having h1[v1]!=0 and h2[v2]!=0 and v1 is strictly smaller than its previous value and v2 is strictly smaller than its previous value, and repeat steps 2-9 until termination.
Further, in addition to the above process, in some embodiments histograms are comparable to zero. In a circumstance where the max value vmax=max (v|v>0 and h[v]!=0) in the histogram is not zero, then the histogram h>0 if and only if h[vmax]>0, and h<0 if and only if h[vmax]<0. A histogram with no nonzero values is considered zero, which may be equivalent to the empty map. In some embodiments, histograms can be added. For example, given two histograms h1 and h2, then h=h1+h2 has elements h[v]=h1[v]+h2[v] for all v in either h1 or h2. Additionally or alternatively, it should be appreciated that, in some embodiments, histograms can be subtracted. For example, given two histograms h1 and h2, then h=h1−h2 has elements h[v]=h1[v]−h2[v] for all v in either h1 or h2. Additionally or alternatively, in some embodiments, histograms can be negated. For example, given h1, h=−h1 has elements h[v]=−h1[v] for all v in h1. Histograms having each of such properties enable use in various graph theory algorithms. For example, histograms having each of the above properties enable such histograms to be utilized as weights in the directed graphs described herein, for example with respect to the directed graph1700, and such properties are leveraged to enable the graph to be processed via Suurballe's algorithm as described herein. In this regard, the differential histogram of distances1808is assigned as the weight w1in the graph1700for the path as depicted.
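The histogram subtraction and largest-value-first ordering described above may be sketched in Python as follows; this is a non-limiting sketch following the procedure above, with illustrative function names:

```python
def hist_sub(h1, h2):
    """h = h1 - h2 on sparse count maps, pruning zero counts."""
    out = dict(h1)
    for v, c in h2.items():
        out[v] = out.get(v, 0) - c
        if out[v] == 0:
            del out[v]
    return out

def hist_lt(h1, h2):
    """Largest-value-first ordering of difference histograms."""
    v1s = sorted((v for v, c in h1.items() if c != 0), reverse=True)
    v2s = sorted((v for v, c in h2.items() if c != 0), reverse=True)
    i = j = 0
    while True:
        a = v1s[i] if i < len(v1s) else None
        b = v2s[j] if j < len(v2s) else None
        if a is None and b is None:
            return False            # both exhausted: histograms equal
        if b is None:
            return h1[a] < 0        # only h1 has values remaining
        if a is None:
            return h2[b] > 0        # only h2 has values remaining
        if a < b:
            return h2[b] > 0
        if a > b:
            return h1[a] < 0
        if h1[a] != h2[b]:          # same max value, differing counts
            return h1[a] < h2[b]
        i += 1                      # tie: advance to next smaller values
        j += 1
```

Comparing a histogram against the empty map implements the comparison to zero: a histogram whose maximum value has a negative count compares below zero.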
Embodiments may similarly determine the weights of the remaining paths w2, w3, w4, w5, and w6as depicted. Utilizing the same methodology used to calculate w1, the following weights may be determined:
w1: {21:−1, 19:2, 17:−1, 9:1, 8:−1, 5:1, 4:−1, 3:−1, 1:1};
w2: {9:1, 8:−1, 4:−1, 3:1};
w3: {14:−1, 13:1, 8:−1, 7:1};
w4: {15:1, 14:−1, 2:−1, 1:1};
w5: {15:−1, 13:1, 9:1, 8:−1, 6:−1, 5:1};
w6: {7:1, 6:−1, 3:1, 2:−1}.
In some embodiments, the weights are re-weighted to eliminate negative weights from consideration. In some such embodiments, to re-weight the graph, the minimum of the determined weights emitting from any of the ee-output nodes is subtracted from each weight (e.g., the minimum of w1to w6is subtracted from each of the weights w1to w6). Such re-weighting does not affect the relative total weights across all paths, and thus maintains the prioritization of each path with respect to one another. Additionally, by re-weighting to ensure the weights are non-negative, additional shortest path algorithms that require such conditions as an underlying property—such as Dijkstra's efficient shortest path algorithm—may be utilized as part of the algorithm for determining the total minimum weight across multiple paths of the graph1700, for example as a subroutine in the implementation of Suurballe's algorithm for processing the graph1700. As depicted, for example, the minimum weight (wmin) is w1.
Thus, after re-weighting, such embodiments determine the modified weights as:
w1=w1−wmin: { }
w2=w2−wmin: {21:1, 19:−2, 17:1, 5:−1, 3:2, 1:−1}
w3=w3−wmin: {21:1, 19:−2, 17:1, 14:−1, 13:1, 9:−1, 7:1, 5:−1, 4:1, 3:1, 1:−1}
w4=w4−wmin: {21:1, 19:−2, 17:1, 15:1, 14:−1, 9:−1, 8:1, 5:−1, 4:1, 3:1, 2:−1}
w5=w5−wmin: {21:1, 19:−2, 17:1, 15:−1, 13:1, 6:−1, 4:1, 3:1, 1:−1}
w6=w6−wmin: {21:1, 19:−2, 17:1, 9:−1, 8:1, 7:1, 6:−1, 5:−1, 4:1, 3:2, 2:−1, 1:−1}
The fully weighted and constructed graph1700may subsequently be processed utilizing one or more graph theory algorithms to determine a shortest total path. In some embodiments, Suurballe's multiple paths shortest total path algorithm modified to perform based on weights embodied by a differential histogram of distances is utilized, as described herein, to find the paths from the ee-pairs of the ee-pairs set1754A to oo-pairs of the oo-pairs set1754B. In this regard, using the differential histogram of distances as weights, the implementation of Suurballe's algorithm will find the collection of paths from ee-to-oo that minimize the worst-case distance during a single step of parallel swap commands generated by these paths. In other words, in some embodiments, the modified Suurballe's algorithm is executed to generate a single step algorithm swap command (e.g., can be done in one phase of even-odd transposition sort) that tends to reduce the distance of the modified pairs it generates starting with the worst-case distance as its highest priority. It should be appreciated that some embodiments utilize histogram of distances to minimize the worst-case total parallel swap commands. As described, the histogram of distances enables determination of the worst-case cost (or benefit) to a particular path and/or otherwise performing a particular path of swaps.
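The re-weighting step (subtracting the minimum weight wmin from each path weight) is ordinary histogram subtraction, and can be checked against the values above with a minimal sketch (illustrative names; the subtraction helper is the same sparse-map operation described herein):

```python
def hist_sub(h1, h2):
    """h = h1 - h2 on sparse count maps, pruning zero counts."""
    out = dict(h1)
    for v, c in h2.items():
        out[v] = out.get(v, 0) - c
        if out[v] == 0:
            del out[v]
    return out

# wmin is w1 in the ongoing example; subtracting it re-weights each path.
w1 = {21: -1, 19: 2, 17: -1, 9: 1, 8: -1, 5: 1, 4: -1, 3: -1, 1: 1}
w2 = {9: 1, 8: -1, 4: -1, 3: 1}
w1_reweighted = hist_sub(w1, w1)  # the minimum weight becomes the zero (empty) histogram
w2_reweighted = hist_sub(w2, w1)
```

As expected, the minimum weight re-weights to the empty map, and w2−wmin matches the value shown above.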
In circumstances where the worst-cases between two paths are tied, effects to the next-worst-case can be determined to continue to attempt to identify whether a swap effectuates a reduction in the overall worst-case total number of parallel swap commands. Other embodiments, however, may utilize other metrics for weighting edges of the graph1700as depicted without deviating from the scope and spirit of this disclosure. For example, in other embodiments, ease of implementation may be a factor such that the weight for a given edge is preferred to be simplistic to implement rather than most efficient. In this regard, a simple average, sum, or other calculation may be utilized to generate weights rather than the histogram of distances as depicted. Alternatively, a value representing only the worst-case (e.g., without values corresponding to next-worst-cases) may be utilized as edge weights to attempt to reduce the worst-case total number of swaps, even if not minimized, without incurring additional complexities in implementing the histogram of distances. Example visualizations of intermediary steps in execution of the modified Suurballe's algorithm based on histogram of distances are depicted inFIGS.19to23. In this regard, the modified Suurballe's algorithm begins with a first operation by finding the shortest path tree from the src node1702A to all nodes in the graph1700. Since all weights in the graph1700are non-negative, Dijkstra's algorithm (or similar shortest path algorithms) may be implemented for such purposes. In this regard, Dijkstra's algorithm returns both the shortest path tree from the src node1702A to all graph nodes and the distance (i.e. the sum of all weights along the path) from the source node to all graph nodes along the tree paths. In some embodiments, the edges traversed are recorded from the src node1702A to the tar node1702B for subsequent processing. 
As depicted, the path from src1702A to tar1702B is embodied by the traversal from src1702A to ee-input node1704A to ee-output node1704B to intermediary node1708A to oo-input node1706C (optionally through intermediary eo-nodes1710A-1710C) to oo-output node1706D and finally to tar1702B.FIG.19depicts a visualization of the first identified shortest path tree with the shortest path from src1702A to tar1702B identified in bold. Each graph node is associated with a particular distance (e.g., the shortest distance) from the src node1702A to the particular node. For example, distance d1 of 0 (e.g., an empty map) is the distance from src node1702A to pair node (7, 24)1708A, distance d2 of {21:1, 19:−2, 17:1, 15:−1, 13:1, 6:−1, 4:1, 3:1, 1:−1} equivalent to w5is the distance from src node1702A to pair node (15, 0)1708B, and distance d3 of {21:1, 19:−2, 17:1, 9:−1, 8:1, 7:1, 6:−1, 5:−1, 4:1, 3:2, 2:−1, 1:−1} equivalent to w6is the distance from src node1702A to pair node (21, 23)1706E. Because the other weights are zero after these particular nodes, d1 represents the distances to nodes1710A,1710B,1710C,1706C,1706D, and1702B. Similarly, d2 represents the distances to nodes1706A and1706B, and d3 represents the distance to node1706F. Such distances to all nodes in the graph (of which d1, d2, and d3 are examples) are utilized in re-weighting the edges of the graph1700as described further herein. Upon identification of the shortest path tree, in a second operation, the graph weights from each node (u) to subsequent node (v) are updated according to the following: weight(u→v)=weight(u→v)+dist(u)−dist(v), where dist(x) is the distance from src node1702A to node x in the shortest path tree.FIG.20depicts a visualization of the processed portion of the graph1700with updated weights based on the shortest path tree and shortest path identified as depicted and described above with respect toFIG.19. As depicted, all weights contained in the shortest path are updated to have zero weight. 
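For illustration, the shortest-path-tree search and the re-weighting rule weight(u→v)=weight(u→v)+dist(u)−dist(v) may be sketched with plain integer edge weights standing in for the histogram weights described herein; the function names and adjacency-dict representation are illustrative assumptions:

```python
import heapq

def dijkstra(graph, src):
    """Shortest distances from src over non-negative edge weights.
    graph: {u: {v: weight}}. Returns (dist, predecessor) maps."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def reweight(graph, dist):
    """weight(u->v) += dist(u) - dist(v): edges on the shortest path
    tree become zero and no edge weight becomes negative."""
    return {u: {v: w + dist[u] - dist[v] for v, w in nbrs.items()}
            for u, nbrs in graph.items()}
```

In a histogram-weighted implementation, the integer additions and comparisons would be replaced by the histogram addition, subtraction, and ordering operations described above.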
Edges not in the shortest path tree (depicted inFIG.20as bolded) are updated accordingly, including edges previously having a zero weight. As depicted, for example, weights w7and w8become non-zero, and w2, w3, and w4are similarly updated, yielding the following values:
w2: {21:1, 19:−2, 17:1, 5:−1, 3:2, 1:−1}
w3: {15:1, 14:−1, 9:−1, 7:1, 6:1, 5:−1}
w4: {15:1, 14:−1, 7:−1, 6:1, 3:−1, 1:1}
w7=d2−d1: {21:1, 19:−2, 17:1, 15:−1, 13:1, 6:−1, 4:1, 3:1, 1:−1}
w8=d3−d1: {21:1, 19:−2, 17:1, 9:−1, 8:1, 7:1, 6:−1, 5:−1, 4:1, 3:2, 2:−1, 1:−1}
Upon completion of re-weighting the edges in the graph1700, in a third operation, embodiments reverse the direction of the shortest path found in the most-recently performed operation for finding the shortest path tree, as depicted and described with respect toFIGS.19and20.FIG.21depicts a visualization of the processed portion of the graph1700with the first identified shortest path reversed as depicted and described with respect toFIG.19. As depicted, the newly reversed edges are depicted in bold. Based on the previously performed re-weighting, the weights for all edges along the reversed path remain zero. The algorithm then completes this iteration and continues by finding the next shortest path tree in the modified graph1700, recording the edges of the shortest path from src1702A to tar1702B, re-weighting the graph, and reversing the shortest path for the newly found shortest path until the desired number of shortest paths are determined (e.g., the number of ee-pairs in the ee-pairs set1754A). In the final iteration, it should be appreciated that some embodiments may not re-weight the graph nor reverse the shortest path. With respect to the ongoing example depicted and described with respect toFIGS.17A and17B,FIG.22depicts the next shortest path tree identified during the second iteration of Suurballe's algorithm. The shortest path from src1702A to tar1702B is similarly depicted in bold.
Further, as depicted inFIG.22, the shortest path is associated with a distance d4 of {21:1, 19:−2, 17:1, 15:−1, 13:1, 6:−1, 4:1, 3:1, 1:−1}. This distance d4 is similarly utilized to update the edges of the previously-updated graph1700in the manner described herein. Subsequently, the edges of the path depicted inFIG.22are reversed as well in the manner described herein, and a final iteration begins. For purposes of brevity, the intermediate steps for re-weighting the graph and reversing the path are omitted, as they follow the same process depicted and described with respect toFIGS.20and21. FIG.23Adepicts the final shortest path tree identified during the third and final iteration of Suurballe's algorithm given an ee-pairs set1754A having three ee-pairs. The shortest path from src1702A to tar1702B is similarly depicted in bold. The depicted shortest path traverses an edge reversed due to a path traversed in a previous iteration, specifically between oo-input node1706A associated with the oo-pair (1, 9) and intermediate node1708B associated with the oe-pair (15, 0), as well as between intermediate node1708B associated with the oe-pair (15, 0) and ee-output node1704F associated with the ee-pair (14, 20). Upon completing all iterations and finding all shortest path trees, embodiments consolidate the paths by removing all edges traversed in both directions the same number of times from the set of all edges traversed in all paths. As described in the ongoing example herein, edges associated with traversing from the oo-input node1706A associated with the oo-pair (1, 9) to the intermediate node1708B associated with the oe-pair (15, 0), and from the intermediate node1708B associated with the oe-pair (15, 0) to ee-output node1704F associated with the ee-pair (14, 20), are traversed in the reverse direction in the third path identified, having previously been traversed in the forward direction in the second path identified. Thus, such edges are removed from the set of edges.
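The consolidation step (cancelling edges traversed in both directions the same number of times across all recorded paths) may be sketched as follows; this is a non-limiting sketch with illustrative names, representing each path as a node sequence:

```python
from collections import Counter

def consolidate_paths(paths):
    """Count directed edge traversals across all paths, then cancel any
    edge traversed equally often in both directions; the surviving
    directed edges form the pair-disjoint ee-to-oo paths."""
    count = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            count[(u, v)] += 1
    return {(u, v) for (u, v), c in count.items()
            if c > count.get((v, u), 0)}
```

For instance, if a later path traverses an edge in the reverse direction of an earlier path, both traversals cancel and neither directed edge survives consolidation.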
The remaining set of edges form a set of node-disjoint paths from src to tar with the property that the total sum of weights along all paths is a minimum weight. For purposes of processing at this stage, embodiments ignore the src node1702A and tar node1702B. The result is the unique pair-disjoint paths from each ee-pair to an oo-pair. FIG.23Billustrates a visualization of the consolidated paths for the example graph1700, in accordance with at least one example embodiment of the present disclosure. For example, with respect to the ongoing example for processing the starting positions pairing set1752, the set of paths comprises:
{Path 1: (2, 6)→(7, 24)→(25, 4)→(5, 12)→(13, 10)→(3, 11),
Path 2: (8, 22)→(1, 9),
Path 3: (14, 20)→(21, 23)}
The swaps indicated in each path form a set of swap commands that may be performed within a single phase of even-odd transposition sort. In some embodiments, to further improve the worst-case scenario, embodiments further process each closed cycle path individually to determine if swapping the pairs represented in the closed cycle path reduces the differential cost of the newly formed pairs relative to the original pairs (e.g., thus improving the overall worst-case scenario efficiency for such sorting). FIG.24depicts an example visualization of processing a closed cycle path in accordance with at least one example embodiment of the present disclosure. As described herein, the closed cycle path comprises nodes1712A,1712B, and1712C corresponding to the oe-pairs (27, 16), (17, 18), and (19, 26) respectively, representing the original pairs set2402A. In this regard, the closed cycle path is formed as each pair is adjacent to one another, and the last element in the last pair is also adjacent to the first element of the first pair and consistent with the phase of sort to be performed (e.g., the even phase, as depicted, but in other implementations the odd phase is the first phase of sort).
In this regard, should the embodiment determine that the pairs in the closed cycle path should be swapped, every even position swaps with its next adjacent odd position in the cycle, including the last pair cycling back to the first pair. In this regard, the original pairs set2402A may be swapped to generate the corresponding swapped pairs set2402B. The weights associated with the elements of the closed cycle path may be determined in the same manner as those discussed above with respect to the remainder of the graph1700. As depicted, the original set of pairs2402A for the closed cycle path is associated with the set of distance calculations (e.g., weights)2404A comprising {abs(27−16), abs(17−18), abs(19−26)} respectively, yielding the values {11, 1, 7}. The corresponding histogram of distances2406A is generated by mapping each occurrence of a particular value in the set of distance calculations2404A to the histogram, thus yielding {11:1, 7:1, 1:1}. The histogram of distances2406A for the original set of pairs2402A may be referred to as h0 within the context ofFIG.24. The histogram of distances is similarly determined for after the swaps are performed (e.g., for the set of swapped pairs2402B). As depicted, the swapped set of pairs is represented by2402B, comprising {(26, 17), (16, 19), (18, 27)}. The distance for each segment of the swapped path is again determined utilizing the absolute values of the differences between the two positions in the swapped pair. Thus, the corresponding set of distance calculations2404B for the set of swapped pairs2402B are indicated as swapped sw1 to swapped sw3, comprising {abs(26−17), abs(16−19), abs(18−27)} respectively, yielding the values {9, 3, 9}.
The corresponding histogram of distances2406B is generated by mapping each occurrence of a particular value in the set of distance calculations2404B for the set of swapped pairs2402B to the histogram, thus yielding {9:2, 3:1} (e.g., the value 9 occurs 2 times, the value 3 occurs one time). This histogram corresponding to the swapped set of pairs2402B may be referred to as h1. Utilizing the histogram of distances for each of the set of pairs2402A and2402B, a resulting differential histogram of distances2408corresponding to the closed cycle path is determinable by subtracting the histogram h0 from h1 (e.g., differential histogram of distances2408=histogram of distances2406B−histogram of distances2406A). As depicted, the differential histogram of distances2408resulting therefrom comprises {11:−1, 9:2, 7:−1, 3:1, 1:−1}. From this differential histogram of distances2408, embodiments can determine that h<0 because the largest distance in the differential histogram (11) has a negative count, indicating that performing the swaps reduces the worst-case distance and is therefore beneficial. Thus, some such embodiments determine it is beneficial to perform such swaps, and will include such swap commands in the single-step parallel swap command generated therefrom. It should be appreciated that each closed cycle path may be processed individually. In this regard, in a circumstance where multiple independent closed cycle paths are identified, some such embodiments may process each closed cycle path to determine whether the closed cycle path results in an improvement (e.g., negative h) or a cost. Each closed cycle path resulting in an improvement to the worst-case scenario may be processed as part of the single-step parallel swap command generated. FIG.25depicts an example visualization of an original starting positions pairing set, corresponding initial swap command, and a converted even-odd starting positions pairing set.
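The closed-cycle evaluation described above can be sketched as follows. This is an illustrative Python sketch; the function names are assumptions, and the decision rule shown (inspect the sign of the count at the largest distance in the differential histogram) follows the worst-case criterion described above:

```python
from collections import Counter

def histogram_of_distances(pairs):
    # Map each pair to abs(a - b) and count occurrences, e.g. {11: 1, 7: 1, 1: 1}.
    return Counter(abs(a - b) for a, b in pairs)

def cycle_swap_improves(original_pairs, swapped_pairs):
    """Return True if swapping the closed cycle improves the worst case.

    Computes the differential histogram h = h1 - h0; a negative count at
    the largest remaining distance means the worst-case distance became
    rarer (or vanished), i.e. h < 0 in the sense used above.
    """
    h0 = histogram_of_distances(original_pairs)
    h1 = histogram_of_distances(swapped_pairs)
    diff = {d: h1.get(d, 0) - h0.get(d, 0) for d in set(h0) | set(h1)}
    diff = {d: c for d, c in diff.items() if c != 0}
    return bool(diff) and diff[max(diff)] < 0
```

With the FIG.24 values, h0 is {11: 1, 7: 1, 1: 1}, h1 is {9: 2, 3: 1}, and the differential {11: −1, 9: 2, 7: −1, 3: 1, 1: −1} has a negative count at its largest distance (11), so the swaps would be kept.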
In particular,FIG.25depicts the initial swap command generated for the original starting positions pairing set1702via the process depicted and described with respect toFIGS.17A,17B, and18-24. The initial swap command set2504includes all swaps included in the identified consolidated graph of shortest paths determined utilizing the modified Suurballe's algorithm implementation described herein. As depicted and described, such paths included a single-phase swap command consistent with the even phase (for example) of even-odd transposition sort comprising {(4, 5), (6, 7), (8, 9), (10, 11), (12, 13), (20, 21), (24, 25)}. The closed cycle path depicted and described with respect toFIG.24was further determined to yield improvements to the worst-case scenario, and thus the {(16, 17), (18, 19), (26, 27)} swaps are further included in the set, yielding {(4, 5), (6, 7), (8, 9), (10, 11), (12, 13), (16, 17), (18, 19), (20, 21), (24, 25), (26, 27)}. In this regard, the initial swap command set2504includes all single-phase swaps that minimize the worst-case distance. Some embodiments herein may generate an algorithm swap command set that causes performance of the swaps indicated in the initial swap command set2504in a first step before sorting the starting positions pairing set1702. For example, the initial swap command converts the original starting positions pairing set1702into the even-odd starting positions pairing set2506. The even-odd pairing set thus satisfies all requirements for target assignment via any of the processes described herein, for example with respect toFIGS.15A,15B,16A, and16B. In this regard, the even-odd starting positions pairing set2506may thus subsequently be utilized for target assignment.
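The conversion performed by the initial swap command can be sketched as follows. This is an illustrative sketch assuming the pairing set is a list of position tuples; the function name is an assumption:

```python
def apply_swap_command(pairing_set, swap_command):
    """Apply a single-step parallel swap command to a positions pairing set.

    Each swap (a, b) exchanges the qubits at positions a and b, so every
    occurrence of position a in a pair is relabeled to b and vice versa.
    """
    relabel = {}
    for a, b in swap_command:
        relabel[a], relabel[b] = b, a
    return [(relabel.get(a, a), relabel.get(b, b)) for a, b in pairing_set]
```

Because the swaps in a single-step command are disjoint, the relabeling map is well defined and the command can be applied (or re-applied to undo it) in one pass.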
In some embodiments, the initial swap command is applied to the targets vector (e.g., the target qubits position set) resulting from target assignment to undo the conversion from an arbitrary starting positions pairing set (which includes any number of ee-pairs and oo-pairs) to the even-odd starting positions pairing set. The resulting targets vector after re-applying the initial swap command is subsequently sorted utilizing even-odd transposition sort, as described herein, to identify the algorithm swap command set that represents the steps to sort the targets vector. In some embodiments, as described herein, this process may be performed twice, once using the even phase as the first phase of the even-odd transposition sort on the odd-only positions (e.g., embodied in the target_slots vector) and once using the odd phase as the first phase of the even-odd transposition sort on the odd-only positions embodied in the target_slots vector. As described, both target vectors are sorted to obtain the steps to sort, and the targets vector requiring the smaller number of steps to sort may be selected. FIG.26illustrates an example flowchart of operations for target assignment, for example in an example process for instruction compilation for at least one time slice, in accordance with at least one example embodiment of the present disclosure. In this regard, the example operations are depicted and described with respect to the perspective of the controller30. In this regard, the controller30may be embodied by any number of computing devices, for example the apparatus200as depicted and described herein with respect toFIG.2. The apparatus200may be configured for communication with any number of other devices and/or systems, for example the computing entity10and/or other components of the quantum computer102. In this regard, each operation will be described from the perspective of the controller30embodied by the specially configured apparatus200.
The illustrated process begins at operation2602. In some embodiments, the process begins as the first operation of any process. In other embodiments, the process begins after one or more of the operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation704. In this regard, the process may replace or supplement one or more blocks depicted and/or described with respect to one of the other processes described herein. Additionally or alternatively, in some embodiments, upon completion of the process depicted inFIG.26, the process ends. Alternatively, in other embodiments, flow may return to one or more of the operations of another process, for example the operation706. At operation2602, the apparatus200identifies a starting positions pairing set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for identifying a starting positions pairing set. The starting positions pairing set may include the position pairs for qubits to be gated at a particular time slice, for example as determined from a quantum circuit. The starting positions pairing set may be in any of a myriad of formats, for example eo-exclusive, arbitrary (may include ee- or oo-pairs), and/or the like. At operation2604, the apparatus200determines the starting positions pairing set satisfies an even-odd only constraint. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining the starting positions pairing set satisfies an even-odd only constraint. 
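A minimal sketch of the even-odd only constraint check at operation2602/2604 could read as follows (the function name is hypothetical; pairs are assumed to be position tuples):

```python
def satisfies_even_odd_only(pairing_set):
    # An eo-pair contains exactly one even and one odd position, so the sum
    # of the two positions is odd; every pair must pass this parity test.
    return all((a + b) % 2 == 1 for a, b in pairing_set)
```

An ee-pair or oo-pair sums to an even value, so any such pair fails the test and the set would instead be routed through the conversion process described above.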
In some embodiments, the apparatus200iterates through the starting positions pairing set and tests whether each position pair includes an even position and an odd position, thus embodying an eo-pair. At operation2606, the apparatus200generates a target slots vector based at least in part on the starting positions pairing set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating a target slots vector based at least in part on the starting positions pairing set. In some embodiments, the apparatus200generates the target slots vector utilizing a particular slot determination algorithm, as described herein. In some embodiments, the apparatus200fixes even slots (and/or even positions) and assigns the target slots vector to odd slots and/or positions. In other embodiments, the apparatus200fixes odd slots (and/or odd positions) and assigns the target slots vector to even slots and/or positions. At operation2608, the apparatus200determines a target slots midpoint vector by sorting the target slots vector utilizing even-odd transposition sort. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining a target slots midpoint vector by sorting the target slots vector utilizing even-odd transposition sort. In some embodiments, the even-odd transposition sort is utilized to sort the target slots vector and identify the target slots midpoint vector in the manner described herein with respect toFIGS.13,14,15A,15B,16A, and16B. At operation2610, the apparatus200generates the target qubit position set based at least in part on the target slots midpoint vector. 
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating the target qubits position set based at least in part on the target slots midpoint vector. For example, in some embodiments, the apparatus200assigns values to the indices of the target qubit positions set embodying a targets vector based on the values in the target slots midpoint vector. Non-limiting examples of such target qubit position set generation are described herein with respect toFIGS.15A,15B,16A, and16B. FIG.27illustrates an example flowchart of operations for target assignment, for example in an example process for instruction compilation for at least one time slice, in accordance with at least one example embodiment of the present disclosure. In this regard, the example operations are depicted and described with respect to the perspective of the controller30. In this regard, the controller30may be embodied by any number of computing devices, for example the apparatus200as depicted and described herein with respect toFIG.2. The apparatus200may be configured for communication with any number of other devices and/or systems, for example the computing entity10and/or other components of the quantum computer102. In this regard, each operation will be described from the perspective of the controller30embodied by the specially configured apparatus200. The illustrated process begins at operation2702. In some embodiments, the process begins as the first operation of any process. In other embodiments, the process begins after one or more of the operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation2606.
In this regard, the process may replace or supplement one or more blocks depicted and/or described with respect to one of the other processes described herein. Additionally or alternatively, in some embodiments, upon completion of the process depicted inFIG.27, the process ends. Alternatively, in other embodiments, flow may return to one or more of the operations of another process, for example the operation2610. At operation2702, the apparatus200assigns, for each even position in the starting positions pairing set, a fixed slot of an available slot set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for assigning, for each even position in the starting positions pairing set, a fixed slot of an available slot set. The slots may be assigned utilizing a particular slot determination algorithm. Alternatively, in some embodiments, the apparatus200assigns, for each odd position in the starting positions pairing set, a fixed slot of an available slot set. Non-limiting examples of slot assignment are described herein with respect toFIGS.14,15A, and16A. At operation2704, the apparatus200sorts the target slots vector to attain parity between the slot value for each odd position and the fixed slot for the even position associated with the odd position. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for sorting the target slots vector to attain parity between the slot value for each odd position and the fixed slot for the even position associated with the odd position. In some embodiments, the apparatus200sorts the target slots vector utilizing even-odd transposition sort, as described herein.
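The parity condition being attained at operation2704 can be sketched as a check (an illustrative sketch; the dictionary mapping of positions to slots is an assumed representation):

```python
def has_common_column(pairing_set, slot_of):
    # Parity (the common-column property) holds when each odd position's
    # slot value equals the fixed slot of its paired even position.
    return all(slot_of[a] == slot_of[b] for a, b in pairing_set)
```

The sort terminates once this predicate holds for every pair, i.e., once each pair of positions shares a common slot.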
Non-limiting examples of sorting the target slots vector to attain parity (e.g., equivalent to the common-column property) are described herein with respect toFIGS.15A and16A. FIG.28illustrates an example flowchart of operations for target assignment, for example in an example process for instruction compilation for at least one time slice, in accordance with at least one example embodiment of the present disclosure. In this regard, the example operations are depicted and described with respect to the perspective of the controller30. In this regard, the controller30may be embodied by any number of computing devices, for example the apparatus200as depicted and described herein with respect toFIG.2. The apparatus200may be configured for communication with any number of other devices and/or systems, for example the computing entity10and/or other components of the quantum computer102. In this regard, each operation will be described from the perspective of the controller30embodied by the specially configured apparatus200. The illustrated process begins at operation2802. In some embodiments, the process begins as the first operation of any process. In other embodiments, the process begins after one or more of the operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation2610. In this regard, the process may replace or supplement one or more blocks depicted and/or described with respect to one of the other processes described herein. Additionally or alternatively, in some embodiments, upon completion of the process depicted inFIG.28, the process ends. Alternatively, in other embodiments, flow may return to one or more of the operations of another process, for example the operation2610. At operation2802, the apparatus200identifies an original starting positions pairing set corresponding to the qubit pairing set.
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for identifying an original starting positions pairing set corresponding to the qubit pairing set. The original starting positions pairing set may not satisfy one or more applicable constraints, for example as it may be based solely on the qubits to be paired at the time slice to be processed. The original starting positions pairing set may be parsed and/or otherwise extracted from a quantum circuit as described herein. In some embodiments, the original starting positions pairing set is embodied by the qubit pairing set. At operation2804, the apparatus200determines the original starting positions pairing set does not satisfy the even-odd only constraint. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining the original starting positions pairing set does not satisfy the even-odd only constraint. The apparatus200may check each positions pair in the original starting positions pairing set to identify one or more ee-pairs and/or oo-pairs. Alternatively or additionally, in some embodiments the apparatus200processes the original starting positions pairing set to determine that the original starting positions pairing set does not satisfy one or more prerequisite (e.g., additional) constraints, for example an evenness constraint or a fullness constraint as described herein. At operation2806, the apparatus200converts the original starting positions pairing set to the starting positions pairing set satisfying the even-odd only constraint by applying the original starting positions pairing set to a modified Suurballe's algorithm.
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for converting the original starting positions pairing set to the starting positions pairing set satisfying the even-odd only constraint by applying the original starting positions pairing set to a modified Suurballe's algorithm. In some embodiments, the modified Suurballe's algorithm includes or is otherwise configured for generating a plurality of differential histograms of distances. Additionally or alternatively, in some embodiments, the modified Suurballe's algorithm includes or is configured for converting the original starting positions pairing set based on the plurality of differential histograms of distances. Non-limiting examples of converting the original starting positions pairing set utilizing a modified Suurballe's algorithm are described herein with respect toFIGS.17A,17B, and18-25. FIG.29illustrates an example flowchart of operations for target assignment, for example in an example process for instruction compilation for at least one time slice, in accordance with at least one example embodiment of the present disclosure. In this regard, the example operations are depicted and described with respect to the perspective of the controller30. In this regard, the controller30may be embodied by any number of computing devices, for example the apparatus200as depicted and described herein with respect toFIG.2. The apparatus200may be configured for communication with any number of other devices and/or systems, for example the computing entity10and/or other components of the quantum computer102. In this regard, each operation will be described from the perspective of the controller30embodied by the specially configured apparatus200. The illustrated process begins at operation2902. In some embodiments, the process begins as the first operation of any process.
In other embodiments, the process begins after one or more of the operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation2602. In this regard, the process may replace or supplement one or more blocks depicted and/or described with respect to one of the other processes described herein. Additionally or alternatively, in some embodiments, upon completion of the process depicted inFIG.29, the process ends. Alternatively, in other embodiments, flow may return to one or more of the operations of another process, for example the operation2604. At operation2902, the apparatus200determines an original starting positions pairing set does not satisfy at least one additional constraint. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining an original starting positions pairing set does not satisfy at least one additional constraint. In some embodiments, the apparatus200performs one or more algorithmic process(es) to check whether the original starting positions pairing set satisfies an evenness constraint and/or a fullness constraint, as described herein. In circumstances where the apparatus200determines the original starting positions pairing set satisfies both additional constraints (and/or any other constraints required by the apparatus200), the apparatus200may continue to process the original starting positions pairing set, such as to determine whether the set satisfies an even-odd only constraint and/or to proceed with target assignment. At operation2904, the apparatus200updates the original starting positions pairing set to satisfy the at least one additional constraint.
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for updating the original starting positions pairing set to satisfy the at least one additional constraint. In some embodiments, for example, the apparatus200assigns one or more new positions to be paired in the original starting positions pairing set. Such new positions may be utilized to satisfy a fullness constraint. Additionally or alternatively, in some embodiments, the original starting positions pairing set is updated with new position pairs. For example, the apparatus200may generate new position pairs for unused positions not previously present in the original starting positions pairing set to satisfy a fullness constraint. Alternatively or additionally, in some embodiments, a vacancy position may be created to satisfy an evenness constraint. FIG.30illustrates an example flowchart of operations for target assignment, for example in an example process for instruction compilation for at least one time slice, in accordance with at least one example embodiment of the present disclosure. In this regard, the example operations are depicted and described with respect to the perspective of the controller30. In this regard, the controller30may be embodied by any number of computing devices, for example the apparatus200as depicted and described herein with respect toFIG.2. The apparatus200may be configured for communication with any number of other devices and/or systems, for example the computing entity10and/or other components of the quantum computer102. In this regard, each operation will be described from the perspective of the controller30embodied by the specially configured apparatus200. The illustrated process begins at operation3002. In some embodiments, the process begins as the first operation of any process.
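The constraint updates described at operation2904 can be illustrated with a minimal sketch, assuming that fullness means every position in a fixed range appears in some pair and evenness means an even total position count; the strategy of pairing leftover positions in ascending order and appending one vacancy position is a hypothetical choice for illustration only:

```python
def pad_pairing_set(pairing_set, num_positions):
    """Pad a starting positions pairing set to satisfy fullness and evenness.

    Unused positions are paired off in ascending order; if an odd number of
    positions remains, a synthetic vacancy position is appended first.
    """
    used = {p for pair in pairing_set for p in pair}
    unused = sorted(p for p in range(num_positions) if p not in used)
    if len(unused) % 2:
        unused.append(num_positions)  # vacancy position for evenness
    padded = list(pairing_set)
    for i in range(0, len(unused), 2):
        padded.append((unused[i], unused[i + 1]))
    return padded
```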
In other embodiments, the process begins after one or more of the operations depicted and/or described with respect to one of the other processes described herein. For example, in some embodiments as described, the process begins after execution of operation2610. In this regard, the process may replace or supplement one or more blocks depicted and/or described with respect to one of the other processes described herein. Additionally or alternatively, in some embodiments, upon completion of the process depicted inFIG.30, the process ends. Alternatively, in other embodiments, flow may return to one or more of the operations of another process, for example the operation2610. At operation3002, the apparatus200determines a second target slots midpoint vector by sorting the target slots vector beginning at a second phase. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for determining a second target slots midpoint vector by sorting the target slots vector beginning at a second phase. The second phase may be opposite a first phase utilized to generate a first target qubit position set. For example, in some embodiments, the apparatus200generates a first target qubit position set by sorting the target slots vector beginning in an even phase, and then performs sorting beginning in the odd phase of sort. Alternatively, in other embodiments, the apparatus200generates a first target qubit position set by sorting the target slots vector beginning in an odd phase, and then performs sorting beginning in the even phase of sort. At operation3004, the apparatus200generates a second target qubit position set based at least in part on the second target slots midpoint vector, the second target qubit position set additional to the first target qubit position set previously generated.
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for generating a second target qubit position set based at least in part on the second target slots midpoint vector, the second target qubit position set additional to the first target qubit position set previously generated. The first target qubit position set may have been previously generated at the opposite phase of sort from the second phase. It should be appreciated that the second target qubit position set is generated in a manner similar to that described with respect to block2610, utilizing the second target slots midpoint vector. Non-limiting examples of two such target slots vectors are described with respect toFIGS.15A,15B,16A, and16B. At operation3006, the apparatus200sorts the first target qubit position set utilizing even-odd transposition sort to generate a first algorithm swap command set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for sorting the first target qubit position set utilizing even-odd transposition sort to generate a first algorithm swap command set. The first algorithm swap command set identifies all parallel swaps to be performed based on the even-odd transposition sort. Non-limiting examples of such sorting are described herein with respect toFIGS.4-12. At operation3008, the apparatus200sorts the second target qubit position set utilizing even-odd transposition sort to generate a second algorithm swap command set.
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for sorting the second target qubit position set utilizing even-odd transposition sort to generate a second algorithm swap command set. It should be appreciated that the second target qubit position set is sorted in a manner similar to the sorting performed at block3006. However, the resulting first and second algorithm swap command sets differ in the parallel swaps required based on the differences between the first target qubit position set and the second target qubit position set. In this regard, each algorithm swap command set may include a different number of steps, as described herein. At operation3010, the apparatus200compares a first number of steps represented by the first algorithm swap command set and a second number of steps represented by the second algorithm swap command set. For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for comparing a first number of steps represented by the first algorithm swap command set and a second number of steps represented by the second algorithm swap command set. In this regard, the apparatus200may determine which of the algorithm swap command sets includes fewer steps (e.g., fewer algorithm swap commands including parallel swaps). Non-limiting examples of the stages of even-odd transposition sort are described herein with respect toFIGS.5-12,15A, and15B. At operation3012, the apparatus200selects the first target qubit position set or the second target qubit position set based on the comparison between the first number of steps and the second number of steps.
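The sort-and-compare flow of operations3006 through3012 can be sketched end to end as follows. This is an illustrative Python sketch; the function names, the list representation, and the choice to count only phases that perform at least one swap as steps are assumptions:

```python
def even_odd_sort_steps(targets, even_first):
    """Even-odd transposition sort; returns the algorithm swap command set.

    Alternates even and odd phases, starting with the chosen phase, until
    the vector is sorted; each nonempty phase's swap set is one parallel
    step (one algorithm swap command).
    """
    targets, steps, even_phase = list(targets), [], even_first
    while targets != sorted(targets):
        swaps = []
        for i in range(0 if even_phase else 1, len(targets) - 1, 2):
            if targets[i] > targets[i + 1]:
                targets[i], targets[i + 1] = targets[i + 1], targets[i]
                swaps.append((i, i + 1))
        if swaps:
            steps.append(swaps)
        even_phase = not even_phase
    return steps

def select_target_set(first_targets, second_targets):
    # Compare the step counts of the two algorithm swap command sets and
    # keep the target qubit position set that sorts in fewer parallel steps.
    first_steps = even_odd_sort_steps(first_targets, even_first=True)
    second_steps = even_odd_sort_steps(second_targets, even_first=False)
    if len(first_steps) <= len(second_steps):
        return first_targets, first_steps
    return second_targets, second_steps
```

Because fewer steps correspond to fewer parallel swap stages executed on the hardware, the candidate with the shorter algorithm swap command set is preferred.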
For example, the apparatus200includes means, such as the qubit instruction processing module210, input/output module206, communications module208, processor202, and/or the like, or a combination thereof, for selecting the first target qubit position set or the second target qubit position set based on the comparison between the first number of steps and the second number of steps. In some embodiments, the apparatus200selects the target qubit position set resulting in the smaller number of steps (e.g., stages of the even-odd transposition sort). The apparatus200may subsequently implement instructions for executing the parallel swaps represented by the target qubit position set that is associated with the smaller number of steps. Non-limiting examples of selecting the target qubit position set based on the smaller number of steps are described herein with respect toFIGS.15A,15B,16A, and16B. CONCLUSION Although an example processing system has been described above, implementations of the subject matter and the functional operations described herein can be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter and the operations described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described herein can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, information/data processing apparatus.
Alternatively, or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information/data for transmission to suitable receiver apparatus for execution by an information/data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices). The operations described herein can be implemented as operations performed by an information/data processing apparatus on information/data stored on one or more computer-readable storage devices or received from other sources. The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a repository management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. 
The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or information/data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input information/data and generating output. Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and information/data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. 
Generally, a computer will also include, or be operatively coupled to receive information/data from or transfer information/data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Devices suitable for storing computer program instructions and information/data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, embodiments of the subject matter described herein can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information/data to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending data, files, documents, and/or the like, to and receiving data, files, documents, and/or the like, from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. 
Embodiments of the subject matter described herein can be implemented in a computing system that includes a back-end component, e.g., as an information/data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the subject matter described herein, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital information/data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks). The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits information/data (e.g., an HTML page) to a client device (e.g., for purposes of displaying information/data to and receiving user input from a user interacting with the client device). Information/data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any disclosures or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular disclosures. 
Certain features that are described herein in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
11861457

DETAILED DESCRIPTION

In many quantum algorithms, it is useful to perform the operation $R_f$ such that

$$R_f\,|i\rangle|0\rangle = |i\rangle\left(\sqrt{1-f(x_i)}\,|0\rangle + \sqrt{f(x_i)}\,|1\rangle\right)$$

where $f: \Omega \to [0,1]$ is a reference function and $x_i \in \Omega$ are discrete points chosen for representing the function. Here $i \in \mathbb{Z}$ is an integer indexing the discrete points. For a qubit-based quantum computing architecture, it can be represented as a bit string, namely its binary expansion $i = \sum_{k=0}^{n} 2^k i_k$, $i_k \in \{0,1\}$. If $f$ is efficiently computable classically, a common strategy for realizing $R_f$ exactly is by reversible circuit synthesis. But the cost of doing so would be $\mathrm{poly}(n)$ for the $n$ bits needed to represent $i$. The polynomial scaling is efficient in theory, but in practice (particularly for near-term quantum computers) much more is to be desired in terms of low circuit cost. Embodiments of the present invention implement an alternative that is far more near-term friendly.

Basic circuit scheme. In embodiments of the present invention, the quantum circuit of FIG. 5 may be used to implement a parametrized transformation

$$R(\vec{\theta})\,|i\rangle|0\rangle = |i\rangle\left(\sqrt{1-g(x_i,\vec{\theta})}\,|0\rangle + e^{i\phi(x_i,\vec{\theta})}\sqrt{g(x_i,\vec{\theta})}\,|1\rangle\right)$$

with parameters $\vec{\theta}$ being the $3n$ angles for the single-qubit rotations, $g(x_i, \vec{\theta})$ a function bounded in the interval [0,1], and $\phi(x_i, \vec{\theta})$ a phase factor whose contribution is immaterial for the ultimate goal of using the controlled rotation $R(\vec{\theta})$. For each control qubit $i_k$, referred to herein jointly as the "control register", there are three rotation operations applied onto the ancilla qubit, referred to herein also as the "target qubit". The combination of the single-qubit gates does not necessarily need to be of the form Rx-Ry-Rz.
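As a non-limiting numerical sketch of the scheme above, the following Python/NumPy example simulates the effect of the controlled rotations on the target qubit for a single basis state $|i\rangle$. The Rx-Ry-Rz triple and the convention that each triple fires only when the corresponding control bit $i_k$ equals 1 are illustrative assumptions; the exact gate layout of FIG. 5 is not reproduced here.

```python
import numpy as np

def rx(t):  # single-qubit rotation about X
    return np.array([[np.cos(t/2), -1j*np.sin(t/2)],
                     [-1j*np.sin(t/2), np.cos(t/2)]])

def ry(t):  # single-qubit rotation about Y
    return np.array([[np.cos(t/2), -np.sin(t/2)],
                     [np.sin(t/2),  np.cos(t/2)]])

def rz(t):  # single-qubit rotation about Z
    return np.array([[np.exp(-1j*t/2), 0],
                     [0, np.exp(1j*t/2)]])

def g_of_i(i, thetas, n):
    """Amplitude-squared of |1> on the target after the controlled
    Rx-Ry-Rz triples selected by the bits of i (hypothetical layout)."""
    psi = np.array([1.0 + 0j, 0.0 + 0j])      # target starts in |0>
    for k in range(n):
        if (i >> k) & 1:                       # control qubit i_k = 1
            a, b, c = thetas[3*k:3*k + 3]
            psi = rz(c) @ ry(b) @ rx(a) @ psi
    return abs(psi[1])**2                      # g(x_i, theta)

n = 3
rng = np.random.default_rng(0)
thetas = rng.uniform(0, 2*np.pi, 3*n)          # the 3n rotation angles
vals = [g_of_i(i, thetas, n) for i in range(2**n)]
assert all(0.0 <= v <= 1.0 + 1e-12 for v in vals)  # g is bounded in [0, 1]
```

Because the control register is in a computational basis state, only the two-dimensional target state needs to be tracked, which is why the sketch never forms the full (n+1)-qubit state vector.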
Any alternatives that give rise to full SU(2) parametrization would do; any such alternative is referred to herein as an "SU(2) gate". The values of $\vec{\theta}$ which satisfy the function $g(x_i, \vec{\theta})$ may be found, for example, either analytically, or variationally by any number of quantum circuit training routines. For example, some embodiments may use the "quasi-analytical" approach described for MPS in [Alcazar, et al., "Quantum algorithm for credit valuation adjustments", arXiv:2105.12087, May 25, 2021], which is herein incorporated by reference. Furthermore, scaling constants may be introduced (see [Alcazar, et al.]) so that this method can be broadened to learn functions that are not bounded between 0 and 1. Embodiments of the present invention may also reverse the roles of $|0\rangle$ and $|1\rangle$, or use any other reference state to replace the role of the computational basis states in the example of FIG. 5. Note that because quantum circuits may be noisy, some embodiments of the present invention may not implement the circuit of FIG. 5 exactly. Any such approximations to the circuit of FIG. 5, or equivalents to the circuit of FIG. 5, also fall within the scope of embodiments of the present invention and within the scope of the claims herein. Embodiments of the present invention may tune the parameters $\vec{\theta}$ until a halting criterion is satisfied. For example, in some embodiments, an error metric may be minimized until a threshold is achieved, where the error metric has the form

$$\int_{\Omega} |g(x,\vec{\theta}) - f(x)|^2\, p(x)\, dx$$

for some probability distribution $p$ that can be efficiently sampled from (e.g., by a classical computer). A practical means for evaluating the metric would be to draw samples $z_1, z_2, \ldots, z_m \in \Omega$ from $p$ and evaluate the sum

$$\frac{1}{m} \sum_{j=1}^{m} p(z_j)\, |g(z_j, \vec{\theta}) - f(z_j)|^2.$$

Since both $f$ and $g$ are bounded in [0,1], the statistical error is at most $1/\sqrt{m}$.

Generalization to Multi-Variable Case.
Consider a multi-variate function $f: \Omega_1 \times \Omega_2 \times \cdots \times \Omega_r \to [0,1]$. The general circuit is shown in FIG. 6. If the function $f$ is separable with respect to the variables,

$$f(x_1, x_2, \ldots, x_r) = f_1(x_1) f_2(x_2) \cdots f_r(x_r),$$

then $U = I$ in FIG. 6. Otherwise, if the variables $x_1, \ldots, x_r$ are correlated in any fashion, embodiments of the present invention may try to construct $U$, by either variational training or analytical approaches depending on the nature of the problem at hand, that correlates them first before the chain of controlled rotations. Referring to FIG. 6, a general circuit is shown for handling multi-variable cases of embodiments of the present invention. In FIG. 6, the sequence of controlled rotation gates for each variable $x_i \in \Omega_i$ is the same as what is shown in FIG. 5. In general the output state of $U$ is of the form $\sum_i \alpha_i |i\rangle$, where each basis state $|i\rangle = |x_1^i\rangle |x_2^i\rangle \cdots |x_r^i\rangle$ is a product state of the basis states of individual registers representing each variable. Some embodiments may (e.g., exactly or approximately) generate a transformation of the form

$$|x_1\rangle |x_2\rangle \cdots |x_r\rangle |0\rangle \mapsto \sum_i \alpha_i |x_1^i\rangle |x_2^i\rangle \cdots |x_r^i\rangle \left( \sqrt{1 - f_1(x_1^i) f_2(x_2^i) \cdots f_r(x_r^i)}\, |0\rangle + \sqrt{f_1(x_1^i) f_2(x_2^i) \cdots f_r(x_r^i)}\, |1\rangle \right).$$

Upon post-selecting the state $|1\rangle$ in the ancilla qubit, embodiments of the present invention may evaluate sums of the form

$$\sum_i |\alpha_i|^2 f_1(x_1^i) f_2(x_2^i) \cdots f_r(x_r^i),$$

which is much more generic than the separable case.

FIG. 4 shows a system 400 for performing a method 402. The method 402 is performed on a quantum computer (which includes a qubit). The method 402 is for directing the qubit's amplitude to be proportional to the value of a function g 416 of N variables $\vec{x}_k$ 418.
The method 402 includes: (A) initializing M+1 qubits on the quantum computer (operation 404), the M+1 qubits comprising: (1) a target qubit t, the target qubit t having an amplitude of a reference state; and (2) a control register with M qubits $\{q_l\}$. The method 402 also includes changing the value of the amplitude of the reference state on the target qubit t (operation 410). The changing includes: (B)(1) applying a sequence of SU(2) gates to the target qubit t, the sequence of SU(2) gates comprising M controlled quantum gates $G_{L_i}$ and at least one rotation parameter, wherein at least one qubit of the control register acts as a control qubit for the controlled quantum gate $G_{L_i}$ (operation 406); and (B)(2) tuning the at least one rotation parameter until a halting criterion, the halting criterion being based on the amplitude of the reference state, is satisfied (operation 408). A classical tensor network may aid in the tuning by simulating part of the tuning process, which may include updating at least one rotation parameter based on the output of the classical tensor network. It is to be understood that although the invention has been described above in terms of particular embodiments, the foregoing embodiments are provided as illustrative only, and do not limit or define the scope of the invention. Various other embodiments, including but not limited to the following, are also within the scope of the claims. For example, elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions. Various physical embodiments of a quantum computer are suitable for use according to the present disclosure. In general, the fundamental data storage unit in quantum computing is the quantum bit, or qubit. The qubit is a quantum-computing analog of a classical digital computer system bit.
A classical bit is considered to occupy, at any given point in time, one of two possible states corresponding to the binary digits (bits) 0 or 1. By contrast, a qubit is implemented in hardware by a physical medium with quantum-mechanical characteristics. Such a medium, which physically instantiates a qubit, may be referred to herein as a “physical instantiation of a qubit,” a “physical embodiment of a qubit,” a “medium embodying a qubit,” or similar terms, or simply as a “qubit,” for ease of explanation. It should be understood, therefore, that references herein to “qubits” within descriptions of embodiments of the present invention refer to physical media which embody qubits. Each qubit has an infinite number of different potential quantum-mechanical states. When the state of a qubit is physically measured, the measurement produces one of two different basis states resolved from the state of the qubit. Thus, a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states; a pair of qubits can be in any quantum superposition of 4 orthogonal basis states; and three qubits can be in any superposition of 8 orthogonal basis states. The function that defines the quantum-mechanical states of a qubit is known as its wavefunction. The wavefunction also specifies the probability distribution of outcomes for a given measurement. A qubit, which has a quantum state of dimension two (i.e., has two orthogonal basis states), may be generalized to a d-dimensional “qudit,” where d may be any integral value, such as 2, 3, 4, or higher. In the general case of a qudit, measurement of the qudit produces one of d different basis states resolved from the state of the qudit. Any reference herein to a qubit should be understood to refer more generally to a d-dimensional qudit with any value of d. 
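The state counting described above (two basis states per qubit, 4 orthogonal basis states for a pair, 8 for three qubits) can be checked with a short NumPy sketch; the equal-superposition state used here is merely an illustrative example.

```python
import numpy as np

# A qubit state is a unit vector in C^2; amplitudes give measurement
# probabilities for the two basis outcomes via the Born rule.
plus = np.array([1, 1]) / np.sqrt(2)          # equal superposition of |0> and |1>
probs = np.abs(plus)**2
assert np.allclose(probs, [0.5, 0.5])          # each outcome with probability 1/2

# n qubits live in a 2^n-dimensional space: 2 qubits -> 4 basis states,
# 3 qubits -> 8 basis states.
two_qubits = np.kron(plus, plus)
three_qubits = np.kron(two_qubits, plus)
assert two_qubits.shape == (4,)
assert three_qubits.shape == (8,)
assert np.isclose(np.sum(np.abs(three_qubits)**2), 1.0)  # still normalized
```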
Although certain descriptions of qubits herein may describe such qubits in terms of their mathematical properties, each such qubit may be implemented in a physical medium in any of a variety of different ways. Examples of such physical media include superconducting material, trapped ions, photons, optical cavities, individual electrons trapped within quantum dots, point defects in solids (e.g., phosphorus donors in silicon or nitrogen-vacancy centers in diamond), molecules (e.g., alanine, vanadium complexes), or aggregations of any of the foregoing that exhibit qubit behavior, that is, comprising quantum states and transitions therebetween that can be controllably induced or detected. For any given medium that implements a qubit, any of a variety of properties of that medium may be chosen to implement the qubit. For example, if electrons are chosen to implement qubits, then the x component of their spin degree of freedom may be chosen as the property of such electrons to represent the states of such qubits. Alternatively, the y component, or the z component, of the spin degree of freedom may be chosen as the property of such electrons to represent the state of such qubits. This is merely a specific example of the general feature that, for any physical medium that is chosen to implement qubits, there may be multiple physical degrees of freedom (e.g., the x, y, and z components in the electron spin example) that may be chosen to represent 0 and 1. For any particular degree of freedom, the physical medium may controllably be put in a state of superposition, and measurements may then be taken in the chosen degree of freedom to obtain readouts of qubit values. Certain implementations of quantum computers, referred to as gate model quantum computers, comprise quantum gates. In contrast to classical gates, there is an infinite number of possible single-qubit quantum gates that change the state vector of a qubit.
Changing the state of a qubit state vector typically is referred to as a single-qubit rotation, and may also be referred to herein as a state change or a single-qubit quantum-gate operation. A rotation, state change, or single-qubit quantum-gate operation may be represented mathematically by a unitary 2×2 matrix with complex elements. A rotation corresponds to a rotation of a qubit state within its Hilbert space, which may be conceptualized as a rotation of the Bloch sphere. (As is well-known to those having ordinary skill in the art, the Bloch sphere is a geometrical representation of the space of pure states of a qubit.) Multi-qubit gates alter the quantum state of a set of qubits. For example, two-qubit gates rotate the state of two qubits as a rotation in the four-dimensional Hilbert space of the two qubits. (As is well-known to those having ordinary skill in the art, a Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used.) A quantum circuit may be specified as a sequence of quantum gates. As described in more detail below, the term "quantum gate," as used herein, refers to the application of a gate control signal (defined below) to one or more qubits to cause those qubits to undergo certain physical transformations and thereby to implement a logical gate operation. To conceptualize a quantum circuit, the matrices corresponding to the component quantum gates may be multiplied together in the order specified by the gate sequence to produce a $2^n \times 2^n$ complex matrix representing the same overall state change on n qubits. A quantum circuit may thus be expressed as a single resultant operator. However, designing a quantum circuit in terms of constituent gates allows the design to conform to a standard set of gates, and thus enables greater ease of deployment.
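The multiplication of component gate matrices into a single resultant operator can be sketched as follows; the specific two-qubit circuit (a Hadamard followed by a CNOT) is a hypothetical example, not a circuit from this disclosure.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                     # control = qubit 0,
                 [0, 1, 0, 0],                     # target = qubit 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Two-gate circuit on n = 2 qubits: H on qubit 0, then CNOT.
# Multiplying the component matrices (later gates on the left) yields a
# single 2^n x 2^n resultant operator.
U = CNOT @ np.kron(H, I2)
assert U.shape == (4, 4)
assert np.allclose(U.conj().T @ U, np.eye(4))     # the resultant is unitary

# Applied to |00>, this single operator prepares a Bell state.
bell = U @ np.array([1, 0, 0, 0])
assert np.allclose(bell, [1/np.sqrt(2), 0, 0, 1/np.sqrt(2)])
```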
A quantum circuit thus corresponds to a design for actions taken upon the physical components of a quantum computer. A given variational quantum circuit may be parameterized in a suitable device-specific manner. More generally, the quantum gates making up a quantum circuit may have an associated plurality of tuning parameters. For example, in embodiments based on optical switching, tuning parameters may correspond to the angles of individual optical elements. In certain embodiments of quantum circuits, the quantum circuit includes both one or more gates and one or more measurement operations. Quantum computers implemented using such quantum circuits are referred to herein as implementing “measurement feedback.” For example, a quantum computer implementing measurement feedback may execute the gates in a quantum circuit and then measure only a subset (i.e., fewer than all) of the qubits in the quantum computer, and then decide which gate(s) to execute next based on the outcome(s) of the measurement(s). In particular, the measurement(s) may indicate a degree of error in the gate operation(s), and the quantum computer may decide which gate(s) to execute next based on the degree of error. The quantum computer may then execute the gate(s) indicated by the decision. This process of executing gates, measuring a subset of the qubits, and then deciding which gate(s) to execute next may be repeated any number of times. Measurement feedback may be useful for performing quantum error correction, but is not limited to use in performing quantum error correction. For every quantum circuit, there is an error-corrected implementation of the circuit with or without measurement feedback. Some embodiments described herein generate, measure, or utilize quantum states that approximate a target quantum state (e.g., a ground state of a Hamiltonian). 
As will be appreciated by those trained in the art, there are many ways to quantify how well a first quantum state "approximates" a second quantum state. In the following description, any concept or definition of approximation known in the art may be used without departing from the scope hereof. For example, when the first and second quantum states are represented as first and second vectors, respectively, the first quantum state approximates the second quantum state when an inner product between the first and second vectors (called the "fidelity" between the two quantum states) is greater than a predefined amount (typically labeled ε). In this example, the fidelity quantifies how "close" or "similar" the first and second quantum states are to each other. The fidelity represents a probability that a measurement of the first quantum state will give the same result as if the measurement were performed on the second quantum state. Proximity between quantum states can also be quantified with a distance measure, such as a Euclidean norm, a Hamming distance, or another type of norm known in the art. Proximity between quantum states can also be defined in computational terms. For example, the first quantum state approximates the second quantum state when a polynomial time-sampling of the first quantum state gives some desired information or property that it shares with the second quantum state. Not all quantum computers are gate model quantum computers. Embodiments of the present invention are not limited to being implemented using gate model quantum computers. As an alternative example, embodiments of the present invention may be implemented, in whole or in part, using a quantum computer that is implemented using a quantum annealing architecture, which is an alternative to the gate model quantum computing architecture.
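Before turning to annealing-based architectures, the fidelity and distance measures just described can be illustrated numerically; the states and the threshold value ε = 0.4 below are hypothetical examples.

```python
import numpy as np

def fidelity(psi, phi):
    """|<psi|phi>|^2: probability that a measurement of psi gives the
    same result as if the measurement were performed on phi."""
    return abs(np.vdot(psi, phi))**2

psi = np.array([1, 0], dtype=complex)                 # |0>
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |+>
eps = 0.4                                             # hypothetical threshold

assert np.isclose(fidelity(psi, psi), 1.0)            # identical states
assert np.isclose(fidelity(psi, phi), 0.5)            # |<0|+>|^2 = 1/2
assert fidelity(psi, phi) > eps                       # psi "approximates" phi

# Proximity can also be quantified with a distance, e.g. a Euclidean norm:
# |0> is closer to |+> than it is to the orthogonal state |1>.
assert np.linalg.norm(psi - phi) < np.linalg.norm(psi - np.array([0, 1]))
```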
More specifically, quantum annealing (QA) is a metaheuristic for finding the global minimum of a given objective function over a given set of candidate solutions (candidate states), by a process using quantum fluctuations. FIG. 2B shows a diagram illustrating operations typically performed by a computer system 250 which implements quantum annealing. The system 250 includes both a quantum computer 252 and a classical computer 254. Operations shown on the left of the dashed vertical line 256 typically are performed by the quantum computer 252, while operations shown on the right of the dashed vertical line 256 typically are performed by the classical computer 254. Quantum annealing starts with the classical computer 254 generating an initial Hamiltonian 260 and a final Hamiltonian 262 based on a computational problem 258 to be solved, and providing the initial Hamiltonian 260, the final Hamiltonian 262, and an annealing schedule 270 as input to the quantum computer 252. The quantum computer 252 prepares a well-known initial state 266 (FIG. 2B, operation 264), such as a quantum-mechanical superposition of all possible states (candidate states) with equal weights, based on the initial Hamiltonian 260. The classical computer 254 provides the initial Hamiltonian 260, the final Hamiltonian 262, and the annealing schedule 270 to the quantum computer 252. The quantum computer 252 starts in the initial state 266, and evolves its state according to the annealing schedule 270 following the time-dependent Schrödinger equation, a natural quantum-mechanical evolution of physical systems (FIG. 2B, operation 268). More specifically, the state of the quantum computer 252 undergoes time evolution under a time-dependent Hamiltonian, which starts from the initial Hamiltonian 260 and terminates at the final Hamiltonian 262. If the rate of change of the system Hamiltonian is slow enough, the system stays close to the ground state of the instantaneous Hamiltonian.
If the rate of change of the system Hamiltonian is accelerated, the system may leave the ground state temporarily but produce a higher likelihood of concluding in the ground state of the final problem Hamiltonian, i.e., diabatic quantum computation. At the end of the time evolution, the set of qubits on the quantum annealer is in a final state 272, which is expected to be close to the ground state of the classical Ising model that corresponds to the solution to the original optimization problem. An experimental demonstration of the success of quantum annealing for random magnets was reported immediately after the initial theoretical proposal. The final state 272 of the quantum computer 252 is measured, thereby producing results 276 (i.e., measurements) (FIG. 2B, operation 274). The measurement operation 274 may be performed, for example, in any of the ways disclosed herein, such as in any of the ways disclosed herein in connection with the measurement unit 110 in FIG. 1. The classical computer 254 performs postprocessing on the measurement results 276 to produce output 280 representing a solution to the original computational problem 258 (FIG. 2B, operation 278). As yet another alternative example, embodiments of the present invention may be implemented, in whole or in part, using a quantum computer that is implemented using a one-way quantum computing architecture, also referred to as a measurement-based quantum computing architecture, which is another alternative to the gate model quantum computing architecture. More specifically, the one-way or measurement-based quantum computer (MBQC) is a method of quantum computing that first prepares an entangled resource state, usually a cluster state or graph state, then performs single-qubit measurements on it. It is "one-way" because the resource state is destroyed by the measurements. The outcome of each individual measurement is random, but the outcomes are related in such a way that the computation always succeeds.
In general the choices of basis for later measurements need to depend on the results of earlier measurements, and hence the measurements cannot all be performed at the same time. Any of the functions disclosed herein may be implemented using means for performing those functions. Such means include, but are not limited to, any of the components disclosed herein, such as the computer-related components described below. Referring to FIG. 1, a diagram is shown of a system 100 implemented according to one embodiment of the present invention. Referring to FIG. 2A, a flowchart is shown of a method 200 performed by the system 100 of FIG. 1 according to one embodiment of the present invention. The system 100 includes a quantum computer 102. The quantum computer 102 includes a plurality of qubits 104, which may be implemented in any of the ways disclosed herein. There may be any number of qubits 104 in the quantum computer 102. For example, the qubits 104 may include or consist of no more than 2 qubits, no more than 4 qubits, no more than 8 qubits, no more than 16 qubits, no more than 32 qubits, no more than 64 qubits, no more than 128 qubits, no more than 256 qubits, no more than 512 qubits, no more than 1024 qubits, no more than 2048 qubits, no more than 4096 qubits, or no more than 8192 qubits. These are merely examples; in practice, there may be any number of qubits 104 in the quantum computer 102. There may be any number of gates in a quantum circuit. However, in some embodiments the number of gates may be at least proportional to the number of qubits 104 in the quantum computer 102. In some embodiments the gate depth may be no greater than the number of qubits 104 in the quantum computer 102, or no greater than some linear multiple of the number of qubits 104 in the quantum computer 102 (e.g., 2, 3, 4, 5, 6, or 7). The qubits 104 may be interconnected in any graph pattern.
For example, they may be connected in a linear chain, a two-dimensional grid, an all-to-all connection, any combination thereof, or any subgraph of any of the preceding. As will become clear from the description below, although element 102 is referred to herein as a "quantum computer," this does not imply that all components of the quantum computer 102 leverage quantum phenomena. One or more components of the quantum computer 102 may, for example, be classical (i.e., non-quantum) components which do not leverage quantum phenomena. The quantum computer 102 includes a control unit 106, which may include any of a variety of circuitry and/or other machinery for performing the functions disclosed herein. The control unit 106 may, for example, consist entirely of classical components. The control unit 106 generates and provides as output one or more control signals 108 to the qubits 104. The control signals 108 may take any of a variety of forms, such as any kind of electromagnetic signals, such as electrical signals, magnetic signals, optical signals (e.g., laser pulses), or any combination thereof.
For example:

In embodiments in which some or all of the qubits 104 are implemented as photons (also referred to as a "quantum optical" implementation) that travel along waveguides, the control unit 106 may be a beam splitter (e.g., a heater or a mirror), the control signals 108 may be signals that control the heater or the rotation of the mirror, the measurement unit 110 may be a photodetector, and the measurement signals 112 may be photons.

In embodiments in which some or all of the qubits 104 are implemented as charge-type qubits (e.g., transmon, X-mon, G-mon) or flux-type qubits (e.g., flux qubits, capacitively shunted flux qubits) (also referred to as a "circuit quantum electrodynamics" (circuit QED) implementation), the control unit 106 may be a bus resonator activated by a drive, the control signals 108 may be cavity modes, the measurement unit 110 may be a second resonator (e.g., a low-Q resonator), and the measurement signals 112 may be voltages measured from the second resonator using dispersive readout techniques.

In embodiments in which some or all of the qubits 104 are implemented as superconducting circuits, the control unit 106 may be a circuit QED-assisted control unit, a direct capacitive coupling control unit, or an inductive capacitive coupling control unit, the control signals 108 may be cavity modes, the measurement unit 110 may be a second resonator (e.g., a low-Q resonator), and the measurement signals 112 may be voltages measured from the second resonator using dispersive readout techniques.

In embodiments in which some or all of the qubits 104 are implemented as trapped ions (e.g., electronic states of, e.g., magnesium ions), the control unit 106 may be a laser, the control signals 108 may be laser pulses, the measurement unit 110 may be a laser and either a CCD or a photodetector (e.g., a photomultiplier tube), and the measurement signals 112 may be photons.

In embodiments in which some or all of the qubits 104 are implemented using nuclear magnetic resonance (NMR) (in which case the qubits may be molecules, e.g., in liquid or solid form), the control unit 106 may be a radio frequency (RF) antenna, the control signals 108 may be RF fields emitted by the RF antenna, the measurement unit 110 may be another RF antenna, and the measurement signals 112 may be RF fields measured by the second RF antenna.

In embodiments in which some or all of the qubits 104 are implemented as nitrogen-vacancy centers (NV centers), the control unit 106 may, for example, be a laser, a microwave antenna, or a coil, the control signals 108 may be visible light, a microwave signal, or a constant electromagnetic field, the measurement unit 110 may be a photodetector, and the measurement signals 112 may be photons.

In embodiments in which some or all of the qubits 104 are implemented as two-dimensional quasiparticles called "anyons" (also referred to as a "topological quantum computer" implementation), the control unit 106 may be nanowires, the control signals 108 may be local electrical fields or microwave pulses, the measurement unit 110 may be superconducting circuits, and the measurement signals 112 may be voltages.

In embodiments in which some or all of the qubits 104 are implemented as semiconducting material (e.g., nanowires), the control unit 106 may be microfabricated gates, the control signals 108 may be RF or microwave signals, the measurement unit 110 may be microfabricated gates, and the measurement signals 112 may be RF or microwave signals.

Although not shown explicitly in FIG. 1 and not required, the measurement unit 110 may provide one or more feedback signals 114 to the control unit 106 based on the measurement signals 112. For example, quantum computers referred to as "one-way quantum computers" or "measurement-based quantum computers" utilize such feedback 114 from the measurement unit 110 to the control unit 106. Such feedback 114 is also necessary for the operation of fault-tolerant quantum computing and error correction.
The control signals 108 may, for example, include one or more state preparation signals which, when received by the qubits 104, cause some or all of the qubits 104 to change their states. Such state preparation signals constitute a quantum circuit also referred to as an "ansatz circuit." The resulting state of the qubits 104 is referred to herein as an "initial state" or an "ansatz state." The process of outputting the state preparation signal(s) to cause the qubits 104 to be in their initial state is referred to herein as "state preparation" (FIG. 2A, section 206). A special case of state preparation is "initialization," also referred to as a "reset operation," in which the initial state is one in which some or all of the qubits 104 are in the "zero" state, i.e., the default single-qubit state. More generally, state preparation may involve using the state preparation signals to cause some or all of the qubits 104 to be in any distribution of desired states. In some embodiments, the control unit 106 may first perform initialization on the qubits 104 and then perform preparation on the qubits 104, by first outputting a first set of state preparation signals to initialize the qubits 104, and by then outputting a second set of state preparation signals to put the qubits 104 partially or entirely into non-zero states. Another example of control signals 108 that may be output by the control unit 106 and received by the qubits 104 are gate control signals. The control unit 106 may output such gate control signals, thereby applying one or more gates to the qubits 104. Applying a gate to one or more qubits causes the set of qubits to undergo a physical state change which embodies a corresponding logical gate operation (e.g., single-qubit rotation, two-qubit entangling gate, or multi-qubit operation) specified by the received gate control signal.
As this implies, in response to receiving the gate control signals, the qubits 104 undergo physical transformations which cause the qubits 104 to change state in such a way that the states of the qubits 104, when measured (see below), represent the results of performing logical gate operations specified by the gate control signals. The term "quantum gate," as used herein, refers to the application of a gate control signal to one or more qubits to cause those qubits to undergo the physical transformations described above and thereby to implement a logical gate operation. It should be understood that the dividing line between state preparation (and the corresponding state preparation signals) and the application of gates (and the corresponding gate control signals) may be chosen arbitrarily. For example, some or all of the components and operations that are illustrated in FIGS. 1 and 2A-2B as elements of "state preparation" may instead be characterized as elements of gate application. Conversely, for example, some or all of the components and operations that are illustrated in FIGS. 1 and 2A-2B as elements of "gate application" may instead be characterized as elements of state preparation. As one particular example, the system and method of FIGS. 1 and 2A-2B may be characterized as solely performing state preparation followed by measurement, without any gate application, where the elements that are described herein as being part of gate application are instead considered to be part of state preparation. Conversely, for example, the system and method of FIGS. 1 and 2A-2B may be characterized as solely performing gate application followed by measurement, without any state preparation, where the elements that are described herein as being part of state preparation are instead considered to be part of gate application.
The quantum computer 102 also includes a measurement unit 110, which performs one or more measurement operations on the qubits 104 to read out measurement signals 112 (also referred to herein as "measurement results") from the qubits 104, where the measurement results 112 are signals representing the states of some or all of the qubits 104. In practice, the control unit 106 and the measurement unit 110 may be entirely distinct from each other, or contain some components in common with each other, or be implemented using a single unit (i.e., a single unit may implement both the control unit 106 and the measurement unit 110). For example, a laser unit may be used both to generate the control signals 108 and to provide stimulus (e.g., one or more laser beams) to the qubits 104 to cause the measurement signals 112 to be generated. In general, the quantum computer 102 may perform the various operations described above any number of times. For example, the control unit 106 may generate one or more control signals 108, thereby causing the qubits 104 to perform one or more quantum gate operations. The measurement unit 110 may then perform one or more measurement operations on the qubits 104 to read out a set of one or more measurement signals 112. The measurement unit 110 may repeat such measurement operations on the qubits 104 before the control unit 106 generates additional control signals 108, thereby causing the measurement unit 110 to read out additional measurement signals 112 resulting from the same gate operations that were performed before reading out the previous measurement signals 112. The measurement unit 110 may repeat this process any number of times to generate any number of measurement signals 112 corresponding to the same gate operations. The quantum computer 102 may then aggregate such multiple measurements of the same gate operations in any of a variety of ways.
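The repeated read-out and aggregation of measurement signals 112 for the same gate operations can be sketched in Python. The sampling function below is a hypothetical stand-in for the measurement unit 110 (the source does not specify one), and aggregating outcomes into relative frequencies is just one of the "variety of ways" mentioned above.

```python
from collections import Counter
import random

def measure_qubits(num_qubits, rng):
    # Hypothetical stand-in for the measurement unit 110: returns one
    # classical bitstring representing the measured states of the qubits 104.
    return "".join(rng.choice("01") for _ in range(num_qubits))

def repeat_measurements(num_qubits, num_repetitions, seed=0):
    # Read out measurement signals 112 repeatedly for the *same* gate
    # operations, then aggregate the outcomes as relative frequencies.
    rng = random.Random(seed)
    counts = Counter(measure_qubits(num_qubits, rng)
                     for _ in range(num_repetitions))
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

freqs = repeat_measurements(num_qubits=2, num_repetitions=1000)
```

Each key of the resulting dictionary is one observed bitstring and each value is the fraction of repetitions in which it was read out.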
After the measurement unit 110 has performed one or more measurement operations on the qubits 104 after they have performed one set of gate operations, the control unit 106 may generate one or more additional control signals 108, which may differ from the previous control signals 108, thereby causing the qubits 104 to perform one or more additional quantum gate operations, which may differ from the previous set of quantum gate operations. The process described above may then be repeated, with the measurement unit 110 performing one or more measurement operations on the qubits 104 in their new states (resulting from the most recently performed gate operations). In general, the system 100 may implement a plurality of quantum circuits as follows. For each quantum circuit C in the plurality of quantum circuits (FIG. 2A, operation 202), the system 100 performs a plurality of "shots" on the qubits 104. The meaning of a shot will become clear from the description that follows. For each shot S in the plurality of shots (FIG. 2A, operation 204), the system 100 prepares the state of the qubits 104 (FIG. 2A, section 206). More specifically, for each quantum gate G in quantum circuit C (FIG. 2A, operation 210), the system 100 applies quantum gate G to the qubits 104 (FIG. 2A, operations 212 and 214). Then, for each of the qubits Q 104 (FIG. 2A, operation 216), the system 100 measures the qubit Q to produce measurement output representing a current state of qubit Q (FIG. 2A, operations 218 and 220). The operations described above are repeated for each shot S (FIG. 2A, operation 222) and circuit C (FIG. 2A, operation 224). As the description above implies, a single "shot" involves preparing the state of the qubits 104, applying all of the quantum gates in a circuit to the qubits 104, and then measuring the states of the qubits 104; and the system 100 may perform multiple shots for one or more circuits.
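The circuit/shot/gate/measurement nesting described above (FIG. 2A) can be sketched as a control-flow skeleton. The prepare_state, apply_gate, and measure_qubit callables are placeholders for the physical operations performed via the control unit 106 and measurement unit 110; their names are assumptions, not from the source.

```python
def run_circuits(circuits, num_shots, prepare_state, apply_gate,
                 measure_qubit, num_qubits):
    results = []
    for circuit in circuits:            # FIG. 2A, operation 202: each circuit C
        for shot in range(num_shots):   # operation 204: each shot S
            prepare_state()             # section 206: state preparation
            for gate in circuit:        # operation 210: each gate G in C
                apply_gate(gate)        # operations 212 and 214
            # operations 216-220: measure each qubit Q
            results.append([measure_qubit(q) for q in range(num_qubits)])
    return results

# Trivial stand-ins so the skeleton runs: every measurement returns 0.
out = run_circuits(
    circuits=[["H", "CNOT"], ["X"]],
    num_shots=3,
    prepare_state=lambda: None,
    apply_gate=lambda g: None,
    measure_qubit=lambda q: 0,
    num_qubits=2,
)
# Two circuits x three shots = six measurement records.
```

One "shot" is one pass through the inner body: state preparation, all gate applications, then one measurement of every qubit.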
Referring to FIG. 3, a diagram is shown of a hybrid quantum-classical (HQC) computer 300 implemented according to one embodiment of the present invention. The HQC 300 includes a quantum computer component 102 (which may, for example, be implemented in the manner shown and described in connection with FIG. 1) and a classical computer component 306. The classical computer component may be a machine implemented according to the general computing model established by John von Neumann, in which programs are written in the form of ordered lists of instructions and stored within a classical (e.g., digital) memory 310 and executed by a classical (e.g., digital) processor 308 of the classical computer. The memory 310 is classical in the sense that it stores data in a storage medium in the form of bits, which have a single definite binary state at any point in time. The bits stored in the memory 310 may, for example, represent a computer program. The classical computer component 304 typically includes a bus 314. The processor 308 may read bits from and write bits to the memory 310 over the bus 314. For example, the processor 308 may read instructions from the computer program in the memory 310, and may optionally receive input data 316 from a source external to the computer 302, such as from a user input device such as a mouse, keyboard, or any other input device. The processor 308 may use instructions that have been read from the memory 310 to perform computations on data read from the memory 310 and/or the input 316, and generate output from those instructions. The processor 308 may store that output back into the memory 310 and/or provide the output externally as output data 318 via an output device, such as a monitor, speaker, or network device. The quantum computer component 102 may include a plurality of qubits 104, as described above in connection with FIG. 1. A single qubit may represent a one, a zero, or any quantum superposition of those two qubit states.
The classical computer component 304 may provide classical state preparation signals 332 to the quantum computer 102, in response to which the quantum computer 102 may prepare the states of the qubits 104 in any of the ways disclosed herein, such as in any of the ways disclosed in connection with FIGS. 1 and 2A-2B. Once the qubits 104 have been prepared, the classical processor 308 may provide classical control signals 334 to the quantum computer 102, in response to which the quantum computer 102 may apply the gate operations specified by the control signals 332 to the qubits 104, as a result of which the qubits 104 arrive at a final state. The measurement unit 110 in the quantum computer 102 (which may be implemented as described above in connection with FIGS. 1 and 2A-2B) may measure the states of the qubits 104 and produce measurement output 338 representing the collapse of the states of the qubits 104 into one of their eigenstates. As a result, the measurement output 338 includes or consists of bits and therefore represents a classical state. The quantum computer 102 provides the measurement output 338 to the classical processor 308. The classical processor 308 may store data representing the measurement output 338 and/or data derived therefrom in the classical memory 310. The steps described above may be repeated any number of times, with what is described above as the final state of the qubits 104 serving as the initial state of the next iteration. In this way, the classical computer 304 and the quantum computer 102 may cooperate as co-processors to perform joint computations as a single computer system. Although certain functions may be described herein as being performed by a classical computer and other functions may be described herein as being performed by a quantum computer, these are merely examples and do not constitute limitations of the present invention. A subset of the functions which are disclosed herein as being performed by a quantum computer may instead be performed by a classical computer.
For example, a classical computer may execute functionality for emulating a quantum computer and provide a subset of the functionality described herein, albeit with functionality limited by the exponential scaling of the simulation. Functions which are disclosed herein as being performed by a classical computer may instead be performed by a quantum computer. The techniques described above may be implemented, for example, in hardware, in firmware, in one or more computer programs tangibly stored on one or more computer-readable media, or in any combination thereof, such as solely on a quantum computer, solely on a classical computer, or on a hybrid quantum-classical (HQC) computer. The techniques disclosed herein may, for example, be implemented solely on a classical computer, in which the classical computer emulates the quantum computer functions disclosed herein. The techniques described above may be implemented in one or more computer programs executing on (or executable by) a programmable computer (such as a classical computer, a quantum computer, or an HQC) including any combination of any number of the following: a processor, a storage medium readable and/or writable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), an input device, and an output device. Program code may be applied to input entered using the input device to perform the functions described and to generate output using the output device. Embodiments of the present invention include features which are only possible and/or feasible to implement with the use of one or more computers, computer processors, and/or other elements of a computer system. Such features are either impossible or impractical to implement mentally and/or manually. For example, embodiments of the present invention manipulate qubits on a quantum computer, which cannot be performed mentally or manually by a human.
Any claims herein which affirmatively require a computer, a processor, a memory, or similar computer-related elements, are intended to require such elements, and should not be interpreted as if such elements are not present in or required by such claims. Such claims are not intended, and should not be interpreted, to cover methods and/or systems which lack the recited computer-related elements. For example, any method claim herein which recites that the claimed method is performed by a computer, a processor, a memory, and/or similar computer-related element, is intended to, and should only be interpreted to, encompass methods which are performed by the recited computer-related element(s). Such a method claim should not be interpreted, for example, to encompass a method that is performed mentally or by hand (e.g., using pencil and paper). Similarly, any product claim herein which recites that the claimed product includes a computer, a processor, a memory, and/or similar computer-related element, is intended to, and should only be interpreted to, encompass products which include the recited computer-related element(s). Such a product claim should not be interpreted, for example, to encompass a product that does not include the recited computer-related element(s). In embodiments in which a classical computing component executes a computer program providing any subset of the functionality within the scope of the claims below, the computer program may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may, for example, be a compiled or interpreted programming language. Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor, which may be either a classical processor or a quantum processor. 
Method steps of the invention may be performed by one or more computer processors executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives (reads) instructions and data from a memory (such as a read-only memory and/or a random access memory) and writes (stores) instructions and data to the memory. Storage devices suitable for tangibly embodying computer program instructions and data include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays). A classical computer can generally also receive (read) programs and data from, and write (store) programs and data to, a non-transitory computer-readable storage medium such as an internal disk (not shown) or a removable disk. These elements will also be found in a conventional desktop or workstation computer as well as other computers suitable for executing computer programs implementing the methods described herein, which may be used in conjunction with any digital print engine or marking engine, display monitor, or other raster output device capable of producing color or gray scale pixels on paper, film, display screen, or other output medium. Any data disclosed herein may be implemented, for example, in one or more data structures tangibly stored on a non-transitory computer-readable medium (such as a classical computer-readable medium, a quantum computer-readable medium, or an HQC computer-readable medium). 
Embodiments of the invention may store such data in such data structure(s) and read such data from such data structure(s). | 45,194 |
11861458 | DESCRIPTION OF EXAMPLE EMBODIMENTS In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. In addition, the embodiments disclosed herein are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed above. Embodiments according to the invention are in particular disclosed in the attached claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, e.g., method, can be claimed in another claim category, e.g., system, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However, any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. 
Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims. A vehicle system may collect vast amounts of data from any number of sensors (e.g., speed sensors, steering angle sensors, braking pressure sensors, a GPS, cameras, LiDAR, radars, etc.) associated with the vehicle. The collected data may be used in many applications, such as training a machine-learning (ML) model for driving autonomous vehicles or assisting human driving. The vehicle system may store the collected data in on-board storage or upload the data to a cloud through a wireless connection. However, since the vehicle system has limited on-board storage space and wireless connection bandwidth, storing or uploading all the collected data is infeasible. While the vehicle system may pre-process the collected data and only store or upload the processed, representative results (e.g., an object list from object detection results rather than the raw image data from which the object list is generated), such an approach would result in a suboptimal amount of data being collected for scenarios where richer data is needed. For example, anomalous events, such as responses to unusual conditions (e.g., anomalous trajectories or aggressive movements of other vehicles) or accidents, may constitute important edge cases that a machine-learning model of the vehicle system would need to learn to handle. A suboptimal amount of data about the edge cases may lack enough detail to effectively train the machine-learning model to be sufficiently robust to handle such edge cases. To solve the problems caused by the limited storage space and wireless connection bandwidth, particular embodiments of the vehicle system may pre-process the collected data (e.g., object identification, compression, etc.)
and store/upload the pre-processed result (e.g., an identified object list, compressed data, etc.), which has a smaller size than the data before pre-processing and needs less storage space and transmission bandwidth. To capture a richer set of edge-case data, particular embodiments of the vehicle system may use edge computing to detect events of interest in real time and, upon detecting such events, store/upload a richer set of corresponding data than would otherwise be stored/uploaded. The events of interest may be anomalous events that deviate from predictions (e.g., based on pre-recorded historical data) of the vehicle system by a threshold. The richer set of data may be high-resolution data including more informational detail than the data (e.g., the pre-processed, compressed data) stored/uploaded for non-anomalous events. The richer set of data may be, for example, raw data, full-resolution data, or data with higher resolution (e.g., more pixels, higher sampling rates) than the data stored/uploaded for non-anomalous events. The edge computation may use machine-learning models and/or rule-based algorithms that are designed for detecting or classifying anomalous events. For example, the system may compare the current driving data with predicted driving behaviors (e.g., using a machine-learning model) under the current situation and may identify an anomalous event when the current driving data is inconsistent with the prediction. When an anomalous event is detected, the system may store/upload a richer set of data related to the detected event. Particular embodiments reduce the system demand on storage and bandwidth resources by selectively storing and uploading data based on the identified events and pre-processing other data not related to the identified events.
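A minimal sketch of the event-triggered selection described above, assuming a hypothetical prediction and a simple mean-absolute-deviation metric (the source specifies neither): the observed driving data is compared against the predicted behavior, and the richer (e.g., raw) data is kept only when the deviation exceeds a threshold.

```python
def deviation(observed, predicted):
    # Mean absolute difference between observed and predicted driving
    # signals (e.g., speed, steering angle); the metric is an assumption.
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def select_data_to_store(observed, predicted, raw_data, compressed_data, threshold):
    # Anomalous event: current driving data deviates from the prediction
    # by more than the threshold -> keep the richer (raw) data.
    if deviation(observed, predicted) > threshold:
        return ("anomalous", raw_data)
    return ("normal", compressed_data)

label, kept = select_data_to_store(
    observed=[22.0, 0.35],    # e.g., speed (m/s) and steering angle (rad)
    predicted=[21.5, 0.05],   # hypothetical model prediction
    raw_data="full-resolution frames",
    compressed_data="object list",
    threshold=0.1,
)
```

In a real system the predictor would be a trained model and the deviation test could be replaced by any rule-based or learned classifier; the point is only the branch between rich and condensed storage.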
For example, the vehicle system can effectively collect data including both edge-case data related to anomalous events and normal operation data for machine-learning training in spite of the storage and transmission bandwidth limitations of the vehicle system. Furthermore, particular embodiments of the vehicle system provide a richer edge-case data set and better data quality for subsequent downstream use, such as training a machine-learning model for driving vehicles or assisting human driving. For example, the collected edge-case data may include high-resolution data of detected events with no loss from compression or pre-processing, and can, therefore, be more effectively used to train machine-learning models. In particular embodiments, the vehicle system may have any number of sensors for monitoring the vehicle (e.g., speeds, steering angles, braking pressure, etc.), the vehicle path (e.g., trajectories, locations, etc.), the human driver (e.g., eye movement, head movement, etc.), and the environment of the vehicle (e.g., identified objects with bounding boxes, other vehicles, pedestrians, etc.). The vehicle system may include one or more computing systems (e.g., a data collection device, a mobile phone, a tablet, a mobile computer, a high-performance computer) to collect the contextual data of the vehicle. In particular embodiments, the contextual data of the vehicle may include one or more parameters associated with the human driver, for example, but not limited to, a head position, a head movement, a hand position, a hand movement, a foot position, a foot movement, a gazing direction, a gazing point, an image of the human driver, a gesture, a voice, etc. The parameters associated with the human driver may be measured using one or more driver-facing cameras and microphones associated with the vehicle (e.g., a dash camera with microphones) or associated with a computing system (e.g., a data collection device, a mobile phone) of the vehicle.
In particular embodiments, the contextual data of the vehicle may include one or more parameters associated with the vehicle, for example, a speed, a moving direction, a trajectory, a GPS coordinate, an acceleration (e.g., based on IMU outputs), a rotation rate (e.g., based on IMU/gyroscope outputs), a pressure on the braking pedal, a pressure on the acceleration pedal, a steering force on the steering wheel, a wheel direction, a signal status, etc. The parameters associated with the vehicle may be determined based on one or more sensors of the vehicle system. In particular embodiments, the contextual data of the vehicle may include navigation data of the vehicle, for example, a navigation map, a navigation target place, a route, an estimated time, a detour, etc. In particular embodiments, the contextual data of the vehicle may include camera-based localization data including, for example, but not limited to, a point cloud, a depth of view, a two-dimensional profile of the environment, a three-dimensional profile of the environment, stereo images of a scene, a relative position (e.g., a distance, an angle) to an environmental object, a relative position (e.g., a distance, an angle) to road lines, a relative position in the current environment, etc. In particular embodiments, the contextual data of the vehicle may include one or more metrics associated with the vehicle environment.
The environmental metrics may include, for example, but are not limited to, a distance to another vehicle, a relative speed to another vehicle, a distance to a pedestrian, a relative speed to a pedestrian, a traffic signal status, a distance to a traffic signal, a distance to an intersection, a road sign, a distance to a road sign, a distance to a curb, a relative position to a road line, an object in a field of view of the vehicle, a traffic status (e.g., high traffic, low traffic), trajectories of other vehicles, motions of other traffic agents, speeds of other traffic agents, moving directions of other traffic agents, signal statuses of other vehicles, positions of other traffic agents, aggressiveness metrics of other vehicles, etc. The one or more metrics associated with the environment of the vehicle may be determined using one or more cameras, LiDAR systems, radar systems, etc. As an example and not by way of limitation, the vehicle system may track relative positions of the vehicle to one or more road lines to precisely determine the location of the vehicle in addition to using a navigation map. As another example, the vehicle system may evaluate the aggressiveness of other vehicles by tracking their velocities, moving directions, accelerations, trajectories, and relative distances and relative positions to other objects or vehicles. FIG. 1 illustrates an example vehicle system 100 with limited storage space and wireless connection bandwidth. The vehicle system 100 may include one or more processors 110, a communication module 140, an on-board storage 120 with limited storage space (e.g., gigabytes or terabytes), a wireless connection with limited bandwidth 152 to a cloud 150, etc. The vehicle system 100 may collect vast amounts of data 160 from one or more sensors (e.g., speed sensors, steering angle sensors, braking pressure sensors, a GPS, cameras, LiDAR, radars, etc.) of the vehicle.
In particular embodiments, the vehicle system 100 may collect contextual data of vehicles driven by human drivers, and the collected data may be used to train a machine-learning (ML) model for driving vehicles (e.g., including driving an autonomous vehicle or assisting a human driver, such as providing safety warnings and automatic braking). The training of the machine-learning models may need data that covers vast driving scenarios and driving conditions. The training may be performed in the training system 190 coupled to the cloud 150. The collected data 160 may exceed the limitations of the storage space 120 and transmission bandwidth 152. The vehicle system 100 may directly store and upload a portion of the collected raw data to the cloud 150 to train the machine-learning model in the training system 190. However, due to the limitations of the storage space and transmission bandwidth, the amount of data that can be stored or/and uploaded is very limited relative to the large size of the raw data, and therefore may not be adequate for training the machine-learning models. In particular embodiments, the vehicle system 100 may pre-process the collected data into a condensed form before saving the data to non-volatile storage or uploading the data to a cloud through a wired or wireless connection. As an example and not by way of limitation, the vehicle system 100 may include one or more agent modelers (e.g., object detectors, object classifiers) to detect traffic agents (e.g., other vehicles, pedestrians, moving objects) in the environment of the vehicle. The agent modelers may be based on one or more machine-learning models (e.g., neural networks). The vehicle system 100 may use two-dimensional (e.g., based on cameras) and/or three-dimensional (e.g., based on LiDAR or stereo cameras) perceptions of the environment to detect and track the traffic agents (e.g., putting a 3D bounding box around each detected traffic agent, marking each traffic agent with velocity and moving direction).
The vehicle system100may generate pre-processed result data that represents the information captured by the raw data in a condensed form, for example, a detected object list including any number of detected objects. Each detected object in the list may include any number of components including, for example, but not limited to, an object profile, an object image segmentation, a semantic text description, a velocity, a moving direction, a position, etc. The data including information associated with the detected object may have a smaller size than the corresponding raw data (e.g., an object image). The vehicle system100may further generate a semantic map including the detected objects (e.g., other vehicles, pedestrians, moving objects) and their related parameters. Instead of saving or sending the raw data, the vehicle system100may save or/and upload the pre-processed result (e.g., an object list, a semantic map), which requires smaller storage space and less transmission bandwidth than the raw data. The pre-processed results may then be used for any downstream application, such as training a machine-learning model, building a statistical model, or being subject to human analysis. In particular embodiments, the vehicle system100may compress the collected data (e.g., high-resolution raw images) to one or more compressed formats (e.g., JPEG, PNG) to reduce the requirement on storage space and transmission bandwidth. In particular embodiments, the vehicle system100may further compress the pre-processed result data to an even smaller size to reduce the requirement on storage space and transmission bandwidth. The vehicle system100may save the compressed data into non-volatile storage or/and upload it to a cloud in real-time or at a later time. In particular embodiments, the vehicle system100may use the pre-processed data or/and the compressed data to train the machine-learning models to learn vehicle driving.
While the pre-processed data and the compressed data may carry a lot of useful information for training the machine-learning models, they may lack enough details for anomalous events (e.g., accidents, unusual driving conditions, operations deviating from predictions based on historical data, etc.), which may need a higher level of detail than the pre-processed or compressed data provides. The anomalous events may include critical edge-case data for training the machine-learning models. Therefore, such one-size-fits-all approaches (e.g., pre-processing data, compressing data) may result in a suboptimal amount of data being collected for scenarios where richer data is needed. In particular embodiments, the vehicle system may use one or more computing systems (e.g., a data collection device, a high-performance computer, a tablet, a mobile phone, etc.) to selectively collect contextual data of the vehicle based on one or more detected events of interest.FIG.2illustrates an example time sequence200for determining an event of interest based on predicted operations of the human driver. The vehicle system may continuously collect the contextual data of the vehicle and store the latest contextual data206in a volatile memory of the vehicle system. The latest contextual data206stored in the volatile memory may include data gathered within a pre-determined period of time TP2202(e.g., 2 minutes, 5 minutes, 10 minutes) before a current time T0. The contextual data206stored in the volatile memory may include high-resolution data from one or more sensors, for example, a series of full-resolution raw images or other non-compressed full-resolution raw data from one or more cameras. The volatile memory may be repeatedly overwritten with newer data and only store the high-resolution data of the latest time period (e.g., 2 minutes, 5 minutes, 10 minutes) to accommodate the size limitation of the memory.
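As a non-limiting illustration, the rolling volatile-memory buffer described above may be sketched as a time-windowed queue. The class name, window length, and frame format below are assumptions for illustration only, not part of any particular embodiment:

```python
from collections import deque

class ContextualDataBuffer:
    """Illustrative volatile rolling buffer that keeps only the last
    `window_s` seconds of high-resolution frames, overwriting older data
    (the TP2 retention window in the text)."""

    def __init__(self, window_s=120.0):
        self.window_s = window_s
        self._frames = deque()  # (timestamp, frame) pairs, oldest first

    def append(self, timestamp, frame):
        self._frames.append((timestamp, frame))
        # Evict frames older than the retention window.
        while self._frames and timestamp - self._frames[0][0] > self.window_s:
            self._frames.popleft()

    def snapshot(self):
        """Return buffered frames, e.g., for moving to non-volatile storage."""
        return list(self._frames)

buf = ContextualDataBuffer(window_s=120.0)
for t in range(0, 300, 10):          # 5 minutes of frames, one every 10 s
    buf.append(float(t), {"t": t})
times = [t for t, _ in buf.snapshot()]
print(times[0], times[-1])           # 170.0 290.0 (only the last 2 minutes remain)
```

On an event of interest, `snapshot()` would supply the pre-event high-resolution data to be moved to non-volatile storage.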
In particular embodiments, the vehicle system may access the contextual data206of the vehicle stored in the volatile memory and use a prediction model to predict one or more parameters related to the predicted operations208of the human driver in a time period TP1204(e.g., 0.1 seconds, 0.2 seconds, 2 seconds, 5 seconds) at and after the current time T0. The parameters related to the predicted operations208may include, for example, but are not limited to, steering changes, pedal actions, braking actions, signal changes, etc. The prediction model may predict one or more parameters related to the vehicle information, the vehicle path, or/and the environment of the vehicle. For example, the prediction model may predict, for the vehicle or/and other traffic agents, speeds, moving directions, accelerations, positions, trajectories, relative positions to road lines, etc. The prediction model may be trained by a large amount (e.g., hundreds or thousands of training samples) of pre-recorded contextual data associated with a large number of human-driven vehicles (e.g., driven by a fleet of human drivers) or autonomous vehicles. The prediction model may be trained by pre-recorded vehicle operations associated with a large number of vehicles (e.g., human-driven vehicles or autonomous vehicles). In particular embodiments, the prediction model may be an inference model of a machine-learning model (e.g., an artificial neural network, a recurrent neural network). The machine-learning model may be trained by the pre-recorded contextual data of a large number of human drivers. In particular embodiments, the vehicle system may predict the operations of the human driver and the vehicle status based on pre-processed contextual data, compressed contextual data, or high-resolution contextual data.
In particular embodiments, the vehicle system may continue to collect the contextual data of the vehicle for the time period TP1204(e.g., 0.1 seconds, 0.2 seconds, 2 seconds, 5 seconds) and determine parameters related to the actual operations210of the human driver during the time period TP1204. For example, the vehicle system may determine the vehicle information, the vehicle path information, and the environment information for the time period TP1204. The vehicle system may compare the actual operations210and the predicted operations208of the human driver during the time period TP1204to determine whether an event of interest has happened during that time period. The vehicle system may determine that an event of interest has happened when the actual operations210of the human driver deviate from the predicted operations208by a pre-determined threshold. The vehicle system may determine that the latest contextual data206is associated with the detected anomalous event. For example, the prediction model may predict that the vehicle should be driving at a relatively low speed (e.g., 10 mph to 30 mph) based on current driving situations, but the vehicle system finds that the vehicle is actually driving at a speed higher than 60 mph and the human driver is still pressing the accelerator pedal. As a result, the vehicle system may flag that as an anomalous event (e.g., at the time TE212) and store the high-resolution data206(e.g., full-resolution raw data) related to that anomalous event. In particular embodiments, upon the determination that an event of interest has occurred (e.g., at the time TE212), the vehicle system may store the high-resolution contextual data (e.g., the contextual data206) of the vehicle associated with the event of interest into a non-volatile storage of the vehicle system. As an example and not by way of limitation, the vehicle system may move the contextual data206in the volatile memory into the non-volatile storage of the vehicle system.
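The comparison of predicted and actual operations against per-parameter thresholds may be sketched as follows. The parameter names and threshold values are illustrative assumptions; the speed example mirrors the one given above:

```python
def detect_event_of_interest(predicted, actual, thresholds):
    """Illustrative comparator: return the parameters whose actual value
    deviates from the predicted value by more than its threshold.
    A non-empty result indicates an event of interest."""
    deviations = {}
    for name, pred_value in predicted.items():
        dev = abs(actual[name] - pred_value)
        if dev > thresholds.get(name, float("inf")):
            deviations[name] = dev
    return deviations

predicted = {"speed_mph": 25.0, "steering_deg": 3.0}
actual = {"speed_mph": 62.0, "steering_deg": 2.5}   # driver far above prediction
thresholds = {"speed_mph": 15.0, "steering_deg": 10.0}
flagged = detect_event_of_interest(predicted, actual, thresholds)
print(flagged)  # {'speed_mph': 37.0}
```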
The stored contextual data206may include the high-resolution data (e.g., a series of full-resolution raw images or raw sensor data without any compression) and therefore capture richer details related to the event of interest. The vehicle system may further store high-resolution data corresponding to an additional time period TP3214(e.g., several seconds to several minutes) after the event of interest (e.g., at the time TE212) so that the system may capture the event details both before (e.g., the time period TP4216) and after the event (e.g., the time period TP3214). The stored high-resolution data may be uploaded to a cloud through a wired or wireless connection in real-time or may be stored in the non-volatile storage for offline processing at a later time. By selectively storing high-resolution data for only events of interest, particular embodiments use less storage and bandwidth resources to capture a richer data set for edge cases related to one or more driving conditions of the vehicle. The high-resolution data may be used to train the machine-learning models to account for such edge cases. The edge-case data captured based on the events of interest may be critical for training vehicle driving models and for evaluating and testing the readiness of the driving models for autonomous vehicles. In particular embodiments, the vehicle system may select the high-resolution contextual data to be stored based on the determination that the event of interest is associated with the contextual data. The high-resolution contextual data may comprise more information or may correspond to a longer time period than data normally stored when corresponding contextual data is determined to be unassociated with an event of interest. In particular embodiments, the vehicle system may flag (e.g., using digital marks) the high-resolution contextual data associated with the event of interest to be reviewed or analyzed at a later time.
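The pre-event (TP4) and post-event (TP3) capture described above may be sketched as a window selection over a timestamped frame stream. The function name, window lengths, and frame format are illustrative assumptions:

```python
def capture_event_window(stream, event_time, pre_s=120.0, post_s=30.0):
    """Illustrative selection of frames within
    [event_time - pre_s, event_time + post_s], i.e., the window before
    (TP4) and after (TP3) a detected event of interest."""
    return [(t, f) for t, f in stream
            if event_time - pre_s <= t <= event_time + post_s]

# A stream of timestamped frames, one every 10 seconds.
stream = [(float(t), {"t": t}) for t in range(0, 400, 10)]
window = capture_event_window(stream, event_time=200.0, pre_s=60.0, post_s=30.0)
print(window[0][0], window[-1][0])  # 140.0 230.0
```

In practice the pre-event portion would come from the volatile buffer and the post-event portion from continued recording after the event time.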
In particular embodiments, the high-resolution data stored/uploaded by the vehicle system may include more detail than the low-resolution data (e.g., the pre-processed, compressed data) that is collected for non-anomalous events. In particular embodiments, the high-resolution data may be raw data from one or more sensors without pre-processing or compression. In particular embodiments, the high-resolution data may include high-resolution images which may have more pixels in each image than regular or low-resolution images. The high-resolution images may be full-resolution images using all the pixels available in an image sensor of a camera. In particular embodiments, the high-resolution data may be data generated by sensors using a higher sampling rate and therefore capturing more details of an event. In particular embodiments, the high-resolution data may be data generated by sensors with greater fields of view to capture larger scenes. In particular embodiments, the high-resolution contextual data may be customized data collected based on the attention of the human driver. The vehicle system may dynamically allocate resources (e.g., time, sensors, cameras, transmission bandwidth, storage space) based on the attention of the human driver. The vehicle system may determine one or more areas of interest where the human driver is paying attention based on the human driver's status or behaviors (e.g., head position, head movement, gazing direction). The vehicle system may allocate more resources (e.g., time, sensors, cameras, transmission bandwidth, storage space) to those areas of interest to capture a richer set of data that is more relevant to the current conditions. The vehicle system may select a contextual data set associated with the areas where the human driver is paying attention to be included in the high-resolution contextual data that will be stored.
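The attention-based resource allocation described above may be sketched as a mapping from the driver's gaze direction to per-camera capture settings. The camera headings, the 45-degree coverage rule, and the resolution/frame-rate profiles are all illustrative assumptions:

```python
def allocate_camera_resources(gaze_direction, cameras):
    """Illustrative allocation: raise resolution and sampling rate for
    cameras pointed near the driver's gaze direction; use a low-detail
    profile for the others."""
    config = {}
    for name, heading_deg in cameras.items():
        # A camera "covers" the gaze if it points within 45 degrees of it.
        delta = abs((heading_deg - gaze_direction + 180) % 360 - 180)
        if delta <= 45:
            config[name] = {"resolution": "full", "fps": 60}
        else:
            config[name] = {"resolution": "low", "fps": 10}
    return config

cameras = {"front": 0, "left": 90, "rear": 180, "right": 270}
cfg = allocate_camera_resources(gaze_direction=80, cameras=cameras)
print(cfg["left"]["fps"], cfg["rear"]["fps"])  # 60 10
```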
As an example and not by way of limitation, when the human driver looks in a particular direction while driving the vehicle, the vehicle system may allocate more cameras and bandwidth resources to the direction that the human driver is looking at. As another example, when the human driver looks in a particular direction while driving the vehicle, the vehicle system may configure cameras pointed in that direction to capture images with a higher resolution or/and a higher sampling rate. In particular embodiments, the vehicle system may use edge computing to detect and classify events of interest in real-time. Edge computing may refer to computation carried out in local computing systems (e.g., a data collection device, a high-performance computer) of the vehicle system instead of in a cloud. For example, the vehicle system may include machine-learning models running in local processors (e.g., GPUs, CPUs, ML-specific processors) to detect and classify anomalous events that deviate from predictions based on historical data. By using edge computing, particular embodiments may allow the vehicle system to selectively collect contextual data of the vehicle without real-time support from servers in a cloud and therefore reduce the requirement on the communication bandwidth of the vehicle system. By using localized computation for detecting the anomalous events, particular embodiments may have a shorter response time in detecting normal and anomalous operation events by eliminating the delay time caused by communicating with a cloud. FIG.3illustrates an example edge computing diagram300for detecting and classifying anomalous events. In particular embodiments, the vehicle system310may include a prediction model320A which may be a machine-learning model running locally in the vehicle system310. In particular embodiments, the prediction model may be trained using pre-recorded contextual data collected from a large number of human drivers.
For example, the prediction model320B, which is a copy of the prediction model320A, may be trained and made available through the cloud340using the normal operation database342and the anomalous event database344. The training databases342and344may include contextual data covering a large number of normal events and a large number of anomalous events, respectively. The normal events may include operations that are consistent with predictions based on historical data. The operations related to normal events may be predictable by the prediction model of the vehicle (e.g., within a threshold of the predicted operations). The training databases342and344may include an initial data set of normal and anomalous events which are labeled by humans and/or another data set of normal and anomalous events automatically classified by machine-learning models. The training data may be constructed and optimized by weighting normal operation data and edge-case data differently, since edge-case data are typically sparse relative to normal operation data. For example, data related to edge cases may be assigned greater weights than data related to normal operations. The machine-learning models trained by weighted normal operation data and edge-case data may appropriately handle both normal operation conditions and edge-case conditions. The training result may be synchronized from the cloud340to the local prediction model320A in the vehicle system310through a wired or wireless connection. In particular embodiments, the prediction model320A may determine the predicted operations of the vehicle based on the contextual data302captured during a pre-determined time period (e.g., the latest 5 minutes) or/and other pre-processed or compressed contextual data. The prediction model320A may process the real-time or/and semi-real-time contextual data and generate predicted driving operations322for a future time period or/and a current time.
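The differential weighting of sparse edge-case data against abundant normal-operation data may be sketched as a weighted loss. The weight values and the squared-error form are illustrative assumptions, not the patent's actual training objective:

```python
def weighted_loss(errors, is_edge_case, edge_weight=10.0, normal_weight=1.0):
    """Illustrative weighted squared-error loss: edge-case samples count
    more than normal-operation samples, compensating for their sparsity."""
    total, weight_sum = 0.0, 0.0
    for err, edge in zip(errors, is_edge_case):
        w = edge_weight if edge else normal_weight
        total += w * err * err
        weight_sum += w
    return total / weight_sum

# One edge-case sample among several normal ones still dominates the loss.
errors = [0.1, 0.1, 0.1, 2.0]           # last sample is an edge case
flags = [False, False, False, True]
print(round(weighted_loss(errors, flags), 3))  # 3.079
```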
The predicted driving operations (e.g., instructions for steering, braking, accelerating, parking, or parameters related to the vehicle, the vehicle path, the human driver, or/and the environment) may be compared to the actual operations306of the human driver by a comparator315to determine anomalous events. The comparator315may identify an event as an anomalous event317when the actual operations306of the human driver deviate from the predicted operations322by a threshold amount. Upon a determination of an anomalous event, the vehicle system310may store the high-resolution contextual data352related to the detected anomalous event in non-volatile storage or/and upload the high-resolution contextual data to a cloud in real-time or at a later time. As an example and not by way of limitation, when the vehicle makes a turn at an intersection, the prediction model320A may predict a trajectory for the vehicle based on historical data. The vehicle system310may track the vehicle's location using a GPS and determine the vehicle's relative position to surrounding objects using LiDAR, cameras, etc. The comparator315may determine that the vehicle position deviates from the predicted trajectory by a distance greater than a pre-determined threshold distance (e.g., 5 meters, 10 meters, 15 meters). The comparator315may identify that as an anomalous event. Upon detection of the anomalous event, the vehicle system310may store the high-resolution contextual data related to the identified anomalous event in non-volatile storage or/and upload the high-resolution data to the cloud340. In particular embodiments, the vehicle system310may include an event classifier330A to classify each detected anomalous event317according to one or more identified categories of the previously detected anomalous events and one or more characteristics of the currently detected event of interest. For example, the event classifier330A may classify an event related to anomalous speeds as an anomalous speed event.
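The trajectory-deviation check in the intersection example may be sketched as a point-to-polyline distance test. The function names, the piecewise-linear trajectory representation, and the 10-meter threshold are illustrative assumptions:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from 2D point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def deviates_from_trajectory(position, trajectory, threshold_m=10.0):
    """Illustrative anomalous-trajectory test: True if the vehicle position
    is farther than threshold_m from every segment of the predicted path."""
    dist = min(point_segment_distance(position, trajectory[i], trajectory[i + 1])
               for i in range(len(trajectory) - 1))
    return dist > threshold_m

predicted = [(0, 0), (10, 0), (20, 10)]               # predicted turning path (meters)
print(deviates_from_trajectory((5, 2), predicted))    # False: within 10 m of the path
print(deviates_from_trajectory((5, 25), predicted))   # True: far off the path
```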
As another example, the event classifier330A may classify an event related to an anomalous trajectory as an anomalous trajectory event. The event classifier330A may further determine an interest score for each detected anomalous event. The event classifier330A may be another machine-learning model running locally on the vehicle system310. In particular embodiments, the event classifier330A may be a copy of an event classifier330B, which may be trained and made available through the cloud340. The event classifier330B may be trained using the anomalous event database344, which may include training samples of anomalous events labeled with the appropriate classifications. The training result may be synchronized from the cloud340to the local event classifier330A in the vehicle system310through a wired or wireless connection. In particular embodiments, the event classifier330A may classify the detected event based on one or more parameters (e.g., speeds, trajectories, locations, surrounding objects, accelerations, etc.) determined based on the contextual data related to the detected event. The event classifier330A may further determine a confidence score indicating a confidence level that the detected event belongs to a particular category. In particular embodiments, the event classifier330A may further determine an interest score for a detected anomalous event to indicate the degree of interest of the detected event. The event classifier330A may calculate the interest score based on the confidence score of the detected event belonging to a category and the corresponding interest score of that category. For example, if the detected event has a confidence score of x for belonging to a category and that category has an interest score of y (indicating degree of interest), the interest score of the detected event may be determined by the product of x and y.
In particular embodiments, the interest scores of an initial set of anomalous events may be manually determined and labeled by humans to train the event classifier330B. The event classifier330A may determine interest scores for newly detected anomalous events based on the initial data set and other previously detected anomalous event data. In particular embodiments, the vehicle system310may store/upload the high-resolution contextual data related to each detected anomalous event317identified by the comparator315. In particular embodiments, the vehicle system310may determine whether to store/upload the high-resolution contextual data related to an anomalous event based on the event's interest score determined by the event classifier330A. For example, the vehicle system310may store/upload the high-resolution contextual data related to an anomalous event only when the interest score is higher than a threshold value. In particular embodiments, the vehicle system310may determine the information detail levels of the contextual data to be stored/uploaded based on the interest score of the related anomalous event. For example, the vehicle system310may store/upload contextual data with higher resolutions for the anomalous events having higher interest scores than for the anomalous events having lower interest scores. In particular embodiments, the event classifier may fail to classify a detected anomalous event because the detected anomalous event is not similar to any previously detected event (e.g., as indicated by a low confidence score for every known anomalous event category). In this situation, the event classifier may create a new category based on the detected event and assign a high interest score to the detected event, since being non-similar to all known anomalous events is an indication of an anomaly itself. The vehicle system may collect and save high-resolution data related to any unclassifiable events.
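The interest-score product (confidence x times category score y) and the new-category fallback for unclassifiable events may be sketched together. The similarity measure, the confidence threshold, and the default score for a new category are illustrative assumptions:

```python
def score_event(features, categories, match_fn, new_category_score=0.9,
                min_confidence=0.5):
    """Illustrative classifier: pick the best-matching category and return
    (category, confidence x category interest). Events not similar to any
    known category get a new category with a high interest score."""
    best_cat, best_conf = None, 0.0
    for name, info in categories.items():
        conf = match_fn(features, info["prototype"])
        if conf > best_conf:
            best_cat, best_conf = name, conf
    if best_conf < min_confidence:
        # Not similar to any known category: create one, flag high interest.
        new_name = "new_category_%d" % (len(categories) + 1)
        categories[new_name] = {"prototype": features,
                                "interest": new_category_score}
        return new_name, new_category_score
    return best_cat, best_conf * categories[best_cat]["interest"]

def similarity(a, b):
    # Toy confidence: 1 / (1 + absolute feature distance).
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

cats = {"anomalous_speed": {"prototype": [1.0, 0.0], "interest": 0.8}}
label, score = score_event([1.0, 0.0], cats, similarity)
print(label, score)            # confidence 1.0 times interest 0.8
label2, score2 = score_event([9.0, 9.0], cats, similarity)
print(label2, score2)          # falls back to a new high-interest category
```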
For example, the vehicle system may identify a rolling tire on the road within a distance to the vehicle. The event classifier may fail to classify the rolling tire event under any known category. The event classifier may identify that as a new type of anomalous event and assign a high interest score to that event. In particular embodiments, the prediction model320B and/or event classifier330B may be updated based on newly gathered data. In particular embodiments, the initial training data set for normal operations and anomalous events may be labeled by humans. When the vehicle system collects new contextual data, the newly collected data may be uploaded to the training databases342and344. For example, the vehicle system310may collect high-resolution contextual data352related to an anomalous event317and upload the collected high-resolution contextual data352to the anomalous event database344in the cloud340. Similarly, contextual data determined to be related to normal events may be uploaded to the normal operation database342. The machine-learning models, including both the prediction model320B and the event classifier330B, may be further trained by the newly collected data and therefore improve their capability for handling anomalous events over time. The trained prediction model320B and event classifier330B may be synchronized to the corresponding prediction model320A and event classifier330A which run locally on the vehicle system310. FIG.4Aillustrates an example situation400A for detecting anomalous events of a vehicle. The vehicle402A may approach an intersection490having other traffic agents (e.g.,402B,402C), one or more stop lines (e.g.,404A,404B), multiple traffic signals (e.g.,410A,410B,410C,410D), one or more crosswalks406, curbs430, road lines440A-C, etc. The vehicle402A driven by a human driver may include a computing system which may map the environment of the vehicle using one or more sensors and use the real-time sensor information to localize the vehicle on the map.
The computing system may monitor the vehicle information, for example, the velocity, the moving direction, the acceleration, the distance to the stop line404A, the distance to the road line440A, etc. The computing system may collect the contextual data of the vehicle and predict the vehicle operations based on the collected contextual data. As an example and not by way of limitation, the computing system may monitor the planned route of the vehicle through a navigation device (e.g., a mobile phone, a GPS). The prediction model may infer that the vehicle402A will make a left turn at this intersection490based on the target location of the navigating route and the turning signal status of the vehicle (e.g., accessed through the CAN bus of the vehicle). As another example, the prediction model may infer that the vehicle402A will make a left turn at the intersection490based on activities of the human driver (e.g., the driver is looking toward the left-front direction corresponding to a left turn) and other environment factors (e.g., other traffic agents are stationary obeying traffic lights, no pedestrians, etc.). As an example and not by way of limitation, the computing system may predict that the vehicle402A will make a left turn at the intersection490. The computing system may use a prediction model to predict that the vehicle402A will likely have a trajectory between the lines420A and420B. The prediction model may be trained by the historical data related to left turns made by vehicles at this intersection490or other intersections. For example, the typical trajectories for making a left turn may be the trajectory422A or422B depending on which lane the driver plans to turn into. The computing system may continue to monitor the operations of the human driver and the status of the vehicle402A.
During the actual left-turning process, the computing system may detect that the vehicle402A is making a left turn using a trajectory422C, which is beyond the predicted boundary lines420A and420B. The computing system may identify that as an anomalous event and save the related high-resolution data as new edge-case data. The computing system may further use the event classifier to classify the detected anomalous event as an anomalous trajectory event and assign a high interest score to the event. As another example, the computing system may detect (e.g., using one or more agent modelers) that a traffic agent (e.g., a car, a truck) or a person (e.g., walking or riding a bicycle on the crosswalk406) is in front of the vehicle402A while the vehicle is approaching at a high speed. The computing system may include a prediction model trained by historical data related to slowing-down processes made by vehicles when facing obstacle objects. The computing system may predict, using the prediction model, that the vehicle402A will slow down beyond a threshold distance to the detected traffic agent or person. However, the computing system detects that the vehicle402A is approaching the traffic agent or person at a high speed after the vehicle is within the threshold distance to the traffic agent or person. The computing system may identify that as an anomalous event and store the related high-resolution data. The event classifier may classify this anomalous event as an anomalous speed event and assign a high interest score to the event. As another example, the computing system may detect that the traffic signal for the vehicle402A has just turned green while the vehicle402A is stopping at the intersection490waiting for the left-turn signal. The computing system may use a prediction model to predict that the vehicle402A will proceed to turn left within a threshold time period (e.g., 1 second, 2 seconds) after the traffic signal has turned green.
The prediction model may be trained by the historical data related to left turns made by vehicles at this intersection490or other intersections. However, the computing system detects that the vehicle402A keeps stopping at the intersection490for a period of time (e.g., 5 seconds, 10 seconds, 20 seconds, 30 seconds) longer than the threshold time period (e.g., 1 second, 2 seconds) after the traffic signal has turned green. The computing system may identify that as an anomalous event and store the related high-resolution data. The event classifier may classify this event as an anomalous stop event and assign a high interest score to the event. In particular embodiments, the computing system may use rule-based algorithms to detect anomalous events. For example, the computing system may detect that the human driver is hitting the braking pedal unusually hard and may identify that as an anomalous event. As another example, the computing system may determine that the vehicle has arrived at a wrong location different from the navigation target and may identify that as an anomalous event. As another example, the computing system may determine that a collision accident has happened (e.g., based on an IMU output, an airbag status) and identify that as an anomalous event. In particular embodiments, the computing system may adopt a hybrid approach of rule-based detection and model-based detection for detecting and classifying anomalous events. In particular embodiments, the computing system may use one or more traffic agent modelers to detect and analyze other traffic agents (e.g.,402B,402C) in the environment. The agent modelers may detect and identify other traffic agents (e.g., cars, buses, pedestrians), predict their behaviors (e.g., speeds, trajectories, positions), and evaluate the aggressiveness of their behaviors. In particular embodiments, the agent modelers may be one or more machine-learning models trained to detect and analyze different traffic agents.
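The rule-based detection described above (hard braking, wrong destination, collision indicators) may be sketched as simple predicate checks over a vehicle state. The field names and threshold values are illustrative assumptions:

```python
def rule_based_anomalies(state):
    """Illustrative rule-based checks complementing model-based detection;
    each rule maps a condition on the vehicle state to an anomaly label."""
    events = []
    if state.get("brake_pressure", 0.0) > 0.9:        # unusually hard braking
        events.append("hard_braking")
    if state.get("arrived_location") is not None and \
       state.get("arrived_location") != state.get("navigation_target"):
        events.append("wrong_destination")            # arrived at wrong place
    if state.get("airbag_deployed", False):           # collision indicator
        events.append("collision")
    return events

state = {"brake_pressure": 0.95, "arrived_location": "B",
         "navigation_target": "A", "airbag_deployed": False}
print(rule_based_anomalies(state))  # ['hard_braking', 'wrong_destination']
```

In a hybrid approach, these rule outputs could be merged with the model-based comparator's anomalous events before classification.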
The agent modelers may further analyze and predict the interaction between other traffic agents (e.g.,402B,402C) and the hosting vehicle (e.g.,402A). FIG.4Billustrates an example situation400B for predicting other traffic agent behaviors. The vehicle402A may approach the intersection490and will make a left turn (e.g., along a trajectory450). The agent modeler may predict a behavior of a traffic agent based on the lane that the traffic agent is in, the distance between the traffic agent to a curb or center line, the turning signal status of that traffic agent, etc. As an example and not by way of limitation, the agent modeler may detect that the traffic agent402B is within the right lane of the road and is very close to the curb430. The agent modeler may predict that the traffic agent402B is likely to turn right along the trajectory452. However, the agent modeler may detect that the traffic agent402B has its left-turning signal flashing. The computing system may identify that as an anomalous event. As another example, the agent modeler may detect that the traffic agent402C is within the left lane and has left-turning signal flashing. The agent modeler may infer that the traffic agent402C would likely either turn left along the trajectory454or make a U-turn along the trajectory456. However, the agent modeler may detect that the traffic agent402C moves straight forward (e.g., along the path458) instead of turning left and may identify that as an anomalous event. As another example, when the vehicle402A is approaching the intersection490, the computing system of the vehicle402A may use agent modelers to detect that the traffic agent402B (e.g., a car) is approaching the stop line404B at an unusual high speed. The agent modelers may predict that although the traffic agent402B is slowing down, it is unlikely to make a safe stop at the stop line404B because of its high speed and the short distance between the traffic agent402B and the stop line404B. 
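The signal-versus-lane consistency checks in the agent-modeler examples above may be sketched as a small rule. The lane labels, the 1-meter curb distance, and the specific contradiction rules are illustrative assumptions:

```python
def agent_behavior_anomaly(lane, distance_to_curb_m, turn_signal):
    """Illustrative check flagging a traffic agent whose turn signal
    contradicts its lane position, e.g., hugging the right curb (likely a
    right turn) while signaling left."""
    if lane == "right" and distance_to_curb_m < 1.0 and turn_signal == "left":
        return True
    if lane == "left" and turn_signal == "right":
        return True
    return False

print(agent_behavior_anomaly("right", 0.5, "left"))   # True: contradictory
print(agent_behavior_anomaly("right", 0.5, "right"))  # False: consistent
```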
The computing system may identify this as an anomalous event and classify this event as an aggressive traffic agent event. As another example, the agent modelers may detect a traffic agent or object that cannot be recognized or classified. The computing system may identify the unrecognizable traffic agent or object as an anomalous event. In particular embodiments, the computing system may use multi-channel images to predict a discretized view of the environment of the vehicle. For example, the computing system may use prediction models, traffic agent modelers, or machine-learning models to generate a series of multi-channel images for predicting the vehicle environment (e.g., other traffic agents, pedestrians, etc.) and the vehicle status (e.g., locations, speeds, moving directions, relative positions to road lines, relative positions to surrounding objects, etc.). The computing system may predict where the vehicle is going to be and what the environment will look like in a short time period (e.g., 0.1 seconds, 0.2 seconds, 2 seconds, 5 seconds, 10 seconds, etc.). The computing system may predict the vehicle's speed and moving direction based on a set of hypotheses with corresponding probabilities. The potential hypotheses may be generated by convolutional neural networks or recurrent neural networks which may feed new information to the network. The hypotheses may be based on both the current view of the road and earlier views of the road. For example, the computing system may generate multi-channel images for a current time T or/and for a previous time (e.g., T−0.5 seconds, T−1 second). In particular embodiments, the computing system may predict vehicle operations based at least in part on the predicted discretized view of the environment of the vehicle.
In particular embodiments, the computing system may use a combination of features related to the vehicle, the environment, or/and other traffic agents to predict the environment of the vehicle (e.g., in a discretized or non-discretized view). The combination of the features may include, for example and without limitation, one or more of: a current position of the vehicle, a past position of the vehicle, a predicted position of the vehicle, a current velocity of the vehicle, a past velocity of the vehicle, a predicted velocity of the vehicle, velocities and orientations of other traffic agents relative to the vehicle, velocities and orientations of other traffic agents relative to each other, velocities and orientations of other traffic agents relative to one or more map elements (e.g., lane markings, stop lines, pedestrian crossings, signals, road signs, intersections, road edges, buildings, road barriers), etc. The computing system may generate a combination of one or more features related to the vehicle, the environment, or/and other traffic agents and predict a discretized or non-discretized view of the vehicle environment based on the combination of the features. In particular embodiments, the computing system may predict vehicle operations based at least in part on the predicted view of the vehicle environment. In particular embodiments, the computing system may look at each individual position of the traffic agents to predict possible environment situations in a short period of time. The computing system may use agent modelers to identify the traffic agents and other objects near the vehicle and use a prediction model to predict where the traffic agents might be going (e.g., locations, speeds, moving directions, relative positions to road lines, relative positions to surrounding objects, etc.).
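One rough way to combine such features into a single model input is to flatten them into a vector. The field names, the per-agent layout, and the choice of features below are illustrative assumptions; the disclosure does not prescribe a representation.

```python
import numpy as np

def build_feature_vector(ego: dict, agents: list) -> np.ndarray:
    """Flatten ego-vehicle features plus per-agent relative features into one
    vector (hypothetical schema for illustration only)."""
    feats = (list(ego["position"]) + list(ego["velocity"])
             + list(ego["predicted_position"]))
    for agent in agents:
        feats += list(agent["velocity_rel_ego"])    # velocity relative to the vehicle
        feats.append(agent["heading_rel_ego"])      # orientation relative to the vehicle
        feats.append(agent["dist_to_lane_marking"]) # relation to a map element
    return np.asarray(feats, dtype=np.float32)
```

A vector built this way (6 ego values plus 4 values per agent) could then feed the prediction model described above.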
The computing system may collect the contextual data of the vehicle related to the human driver's operations in response to those traffic agents and predict the vehicle status (e.g., locations, speeds, moving directions, relative positions to road lines, relative positions to surrounding objects, etc.) based on the collected contextual data of the vehicle and the operations of the human driver. In particular embodiments, the traffic agent modelers and prediction models may be machine-learning models trained by historical contextual data of the vehicle. In particular embodiments, the prediction model may be trained by historical multi-channel images comprising multi-layer information about the vehicle and the environment. In particular embodiments, the computing system may generate one or more multi-channel images for the vehicle environment (e.g., an intersection) including the vehicle itself, stop lines, road lines, other traffic actors or agents, etc. Each multi-channel image may be a top view environmental image and may have multiple channels for different layers of information for the environment. A first channel of the image may include the road information indicating the boundary of the road (e.g., which areas belong to the road and which areas do not). For example, the first channel of the image may include, but is not limited to, road lines, crosswalks, curbs, sidewalks, road edge areas beyond the road, etc. A second channel of the image may include information associated with the traffic and the road, for example, the vehicle itself (e.g., locations, relative positions to surrounding objects), other traffic agents (e.g., locations, relative positions to surrounding objects), stop lines, traffic signals, road signs, etc. A third channel may include information related to traffic agents, for example, velocities, moving directions, accelerations, turning signal statuses, interactions, etc.
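The three information layers described above might be encoded as a multi-channel top-view raster along the following lines. The grid size, the coordinates, and the speed-normalization constant are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

H = W = 64  # assumed top-view grid resolution
image = np.zeros((3, H, W), dtype=np.float32)

# Channel 0: road boundary information (road corridor, curbs, crosswalks).
image[0, :, 28:36] = 1.0              # a straight vertical road corridor

# Channel 1: positional traffic information (ego vehicle, stop line, signs).
image[1, 40:44, 30:34] = 1.0          # ego vehicle footprint
image[1, 20, 28:36] = 1.0             # stop line across the road

# Channel 2: agent dynamics (here, speed encoded as normalized intensity).
image[2, 40:44, 30:34] = 12.5 / 30.0  # ego speed 12.5 m/s, normalized by 30 m/s
```

A sequence of such rasters, one per timestep, would form the input/output representation for the prediction model.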
The machine-learning models may use multi-channel images to predict what the exact scene will look like in a short period of time (e.g., 0.1 seconds, 0.2 seconds) in a discretized view of the world. The computing system may generate a series of top views of the environment to predict a series of future scenes of the environment. In particular embodiments, the computing system may compare the predicted vehicle and environment status to the actual vehicle and environment status. The computing system may generate a series of multi-channel images for the actual top view of the environment based on the actual vehicle and environment status determined using the subsequently collected contextual data of the vehicle. The computing system may compare the predicted top view images and the actual top view images and may determine an anomalous event when an actual top view image deviates from its corresponding predicted top view image by a difference greater than a threshold. The computing system may use one or more information layers of the multi-channel images for the comparison between the predicted and actual top view images of the environment. As an example and not by way of limitation, the computing system may determine, based on the actual and predicted environment top view images, that the vehicle location deviates from a predicted location by a distance greater than a threshold distance (e.g., 5 meters, 10 meters, 15 meters). The computing system may determine that as an anomalous event and may store/upload high-resolution data related to the detected anomalous event. As another example, the computing system may determine, based on the actual and predicted environment top view images, that another vehicle deviates from a predicted trajectory of that vehicle by a distance greater than a threshold distance (e.g., 5 meters, 10 meters, 15 meters, 30 meters).
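The deviation test between a predicted and an actual position reduces to a distance comparison against a threshold. A minimal sketch, where the 10-meter default merely echoes the example thresholds above:

```python
import math

def deviation_anomaly(predicted_pos, actual_pos, threshold_m: float = 10.0) -> bool:
    """Flag an anomalous event when the actual position deviates from the
    predicted position by a distance greater than the threshold."""
    return math.dist(predicted_pos, actual_pos) > threshold_m
```

For example, a 5-meter deviation stays below the default threshold, while a 15-meter deviation is flagged and would trigger storing/uploading the related high-resolution data.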
The computing system may determine that as an anomalous event and store/upload high-resolution data related to the identified anomalous event. FIG.5illustrates an example method of detecting an event of interest and storing high-resolution data associated with the event. At step510, the vehicle system may collect the contextual data of the vehicle based on one or more sensors associated with the vehicle system. The collected contextual data may include high-resolution data (e.g., full-resolution raw data without compression or pre-processing) from the sensors for monitoring the vehicle, the vehicle path, the human driver, and the environment. At step520, the vehicle system may store the latest high-resolution data (e.g., 5 minutes' worth of data) in a volatile memory. The high-resolution data in the volatile memory may be overwritten by newer data, and the volatile memory may only store the latest 5 minutes of high-resolution data to accommodate its size limitation. At step530, the vehicle system may store low-resolution data in a non-volatile storage of the vehicle system or upload the low-resolution data to a cloud in real-time. The low-resolution data may be pre-processed data (e.g., object identification results) or compressed data generated based on the high-resolution contextual data. At step540, the vehicle system may use a prediction model to predict the future operations of the human driver for a time period (e.g., 0.1 seconds, 0.2 seconds, 2 seconds, 5 seconds). The prediction model may be a machine-learning model trained using historical data. The vehicle system may continue to monitor the vehicle status and collect contextual data of the vehicle. At step550, the vehicle system may determine the actual operations of the human driver based on the collected data of the vehicle during that time period (e.g., 0.1 seconds, 0.2 seconds, 2 seconds, 5 seconds).
At step560, the vehicle system may compare the predicted operations and the actual operations of the human driver to determine whether an event of interest has happened. At step570, when the actual operations of the human driver deviate from the predicted operations by more than a pre-determined threshold, the vehicle system may identify an anomalous event. When the actual operations of the human driver are consistent with the predicted operations (e.g., within a pre-determined threshold), the vehicle system may jump to step510and continue to collect contextual data of the vehicle. At step580, the vehicle system may store the high-resolution data related to the identified event of interest into a non-volatile storage. For example, the vehicle system may move the high-resolution data in the volatile memory into a non-volatile storage (or upload the data to a cloud). The high-resolution data in the volatile memory may include a richer set of data of a pre-determined time period (e.g., 5 minutes) before the event of interest. In particular embodiments, the vehicle system may further collect and store high-resolution data for a second period of time (e.g., several seconds to several minutes) after the event of interest has happened. At step590, the vehicle system may use an event classifier to classify the detected event of interest (e.g., an anomalous event) and determine an interest score indicating the importance and degree of interest of the detected event. Particular embodiments may repeat one or more steps of the method ofFIG.5, where appropriate. Although this disclosure describes and illustrates particular steps of the method ofFIG.5as occurring in a particular order, this disclosure contemplates any suitable steps of the method ofFIG.5occurring in any suitable order.
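The buffering-and-persist flow of FIG.5 (steps 520 and 560-580) can be sketched with a fixed-size rolling buffer standing in for the volatile memory. The buffer capacity and the scalar "operation" values are illustrative assumptions:

```python
from collections import deque

class EventRecorder:
    """Sketch of the FIG.5 flow: keep only the latest high-resolution frames
    in a rolling (volatile) buffer, and persist them when predicted and actual
    driver operations diverge beyond a threshold."""

    def __init__(self, capacity_frames: int):
        self.volatile = deque(maxlen=capacity_frames)  # step 520: latest data only
        self.nonvolatile = []                          # step 580: persisted on anomaly

    def record(self, frame) -> None:
        self.volatile.append(frame)                    # older frames are overwritten

    def check(self, predicted_op: float, actual_op: float, threshold: float) -> bool:
        # Steps 560-570: compare predicted vs. actual operations.
        if abs(actual_op - predicted_op) > threshold:
            self.nonvolatile.extend(self.volatile)     # step 580: move buffer out
            return True
        return False
```

With a capacity of five frames and ten frames recorded, only the five most recent survive in the volatile buffer, so those are what get persisted when an anomaly is flagged.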
Moreover, although this disclosure describes and illustrates an example method for detecting an event of interest and storing high-resolution data associated with the event including the particular steps of the method ofFIG.5, this disclosure contemplates any suitable method for detecting an event of interest and storing high-resolution data associated with the event including any suitable steps, which may include all, some, or none of the steps of the method ofFIG.5, where appropriate. Furthermore, although this disclosure describes and illustrates particular components, devices, or systems carrying out particular steps of the method ofFIG.5, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable steps of the method ofFIG.5. FIG.6Aillustrates a block diagram of various components of an example data collection device660. The data collection device660may also be referred to as a transportation management vehicle device. In particular embodiments, the data collection device660may be integrated with the vehicle as a built-in device or may be associated with the vehicle as a detachable system. In particular embodiments, the data collection device660may include a number of sub-systems and modules including, for example, a logic control module (e.g., a processor618, input/output (I/O) interface626), a data storage module (a volatile memory628, a non-volatile storage620), a sensing module (e.g., an inertial measurement unit632, cameras634, sensors636), a communication module624, a display module (e.g., a front display604, a rear display610, a lighting controller622), etc.
In particular embodiments, the processor618may control the I/O interface626to collect data from both the integrated sensors (e.g., IMU632, cameras634, sensors636) that are integrated with the data collection device660and the vehicle sensors (e.g., a GPS642, cameras644, sensors646) that are associated with the vehicle and communicate with the data collection device660. The data collection device660may store the collected data in the volatile memory628(e.g., a random-access memory (RAM)) or/and in the non-volatile storage620(e.g., a hard disk drive, a solid-state drive, a flash drive, a compact disk, etc.). The data collection device660may also upload the collected data to a cloud650using the communication module624and through a wired or wireless connection652in real-time or at a later time. In particular embodiments, the data collection device660may include one or more machine-learning models (e.g., prediction models, driving models, event classifier, traffic agent modelers, etc.) which may require considerable computational resources. In particular embodiments, the data collection device660may cooperate with another computing system (e.g., a mobile phone, a tablet, a mobile computer, a high-performance computer) for collecting and processing the data (e.g., running traffic agent modelers). In particular embodiments, the data collection device660may be implemented on a mobile phone or mobile computer using the API of that mobile phone or mobile computer. In particular embodiments, the data collection device660may be implemented on an embedded system platform including one or more GPUs or other processors which are specifically configured to run machine-learning models (e.g., neural networks).
In particular embodiments, the vehicle system600may include one or more sensors for monitoring the vehicle information (e.g., speeds, steering angles, braking pressure, etc.), the vehicle path information (e.g., trajectories, locations, etc.), the human driver (e.g., eye movement, head movement, etc.), and the environment of the vehicle (e.g., identified objects with bounding boxes, other vehicles, pedestrians, etc.). In particular embodiments, the data collection device660may include one or more integrated sensors, for example, an inertial measurement unit632, cameras634, sensors636, etc. The data collection device660may communicate with one or more sensors (e.g., a GPS642, cameras644, sensors646, etc.) that are associated with the vehicle but are external to the data collection device660. The vehicle system600may further include other sensing systems like LiDAR and radar systems. The sensors or sensing systems may monitor both the internal status (e.g., the vehicle itself and the passenger compartment area of a vehicle designed and intended for the seating of the driver and other passengers) and the external environment of the vehicle. For example, the data collection device660may include a rear-facing wide-angle camera that captures the passenger compartment and any passengers therein. As another example, the data collection device660may include a microphone that captures conversation and/or sounds in the passenger compartment. The data collection device may also include an infrared sensor capable of detecting motion and/or temperature of the passengers. 
Other examples of sensors may include, for example, but are not limited to: cameras for capturing visible data; microphones for capturing audible data; infrared sensors for detecting heat emitted by passengers; gyroscopes and accelerometers for detecting vehicle motion; speed sensors for detecting vehicle speed; steering sensors for measuring steering operations; pressure sensors for measuring the pressure applied to the brake pedal and the accelerator pedal; a GPS for tracking vehicle location; and any other sensors or sensing systems (e.g., radar and LiDAR systems) suitable for monitoring the vehicle, the human driver, and the environment. In particular embodiments, such sensors may be integrated with the vehicle system600which may be a human-driven vehicle or an autonomous vehicle. The sensors may be located at any suitable location, such as in the upper corners of the passenger compartment, the dashboard, seats, side doors, ceiling, rear view mirror, central console, floor, roof, lid, or any other locations where the sensor would be effective in detecting the type of signals it is designed for. In particular embodiments, such sensors may be integrated with a detachable computing device (e.g., a mobile phone, a tablet, a GPS, a dash camera) attached to the vehicle (e.g., on dashboard). In particular embodiments, the communication module624may manage communications of the data collection device660with other systems including, for example, the cloud650, a detachable computing device (e.g., a mobile phone, a tablet), a vehicle, the transportation management system, and third-party systems (e.g., music, entertainment, traffic, and/or maps providers). In particular embodiments, the communication module624may be configured to communicate over WI-FI, Bluetooth, NFC, RF, LTE, 3G/4G/5G broadband cellular network or any other wired or wireless communication networks or protocols.
In particular embodiments, the data collection device660may communicate with the vehicle through the communication module624to collect data from the sensors of the vehicle. In particular embodiments, the data collection device660may communicate with the cloud650through the communication module624for uploading data to the cloud650and synchronizing parameters related to one or more machine-learning models trained in the cloud650. In particular embodiments, the data collection device660may be configured to physically connect to the vehicle (e.g., through a connector616inFIG.6C) for communicating with and getting power from the vehicle. For example, the connector616may implement the controller area network (CAN) bus interface or any other suitable communication interface or protocol for communicating with a vehicle. The CAN bus interface may interface with an on-board diagnostics (OBD) port (e.g., an OBD-I port, an OBD-II port, etc.) of the vehicle. In particular embodiments, the connector may include one or more universal serial bus (USB) ports, lightning connector ports, or other ports enabling users to directly connect their devices to the data collection device660(e.g., to exchange data, verify identity information, provide power, etc.). In particular embodiments, the data collection device660may be able to issue instructions (e.g., through the connector616inFIG.6C) to the vehicle's onboard computer and cause it to adjust certain vehicle configurations. In particular embodiments, the data collection device660may be configured to query the vehicle (e.g., through the connector616inFIG.6C) for certain data, such as current configurations of any of the aforementioned features, as well as the vehicle's speed, fuel level, tire pressure, external temperature gauges, navigation systems, and any other information available through the vehicle's computing system.
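For illustration, querying the vehicle's speed over the OBD-II/CAN interface mentioned above might be framed as below. This is a simplified single-frame sketch of a standard mode-01 query (PID 0x0D is vehicle speed in km/h); real stacks additionally handle padding conventions, multi-frame responses, and ECU addressing.

```python
def build_obd_request(mode: int, pid: int) -> bytes:
    """Build an 8-byte single-frame OBD-II query (e.g., mode 0x01, PID 0x0D
    for vehicle speed), as sent on the broadcast CAN ID 0x7DF."""
    payload = bytes([0x02, mode, pid])        # byte count, mode, PID
    return payload + bytes(8 - len(payload))  # zero-pad to 8 bytes (simplified)

def parse_speed_response(frame: bytes) -> int:
    """Parse vehicle speed (km/h) from a mode-01 PID-0x0D response frame;
    0x41 is the positive-response mode (0x40 + 0x01)."""
    if frame[1] != 0x41 or frame[2] != 0x0D:
        raise ValueError("not a PID 0x0D speed response")
    return frame[3]                           # speed is a single byte, km/h
```

A request for mode 0x01 / PID 0x0D thus encodes as `02 01 0D 00 00 00 00 00`, and a response carrying 60 km/h as `03 41 0D 3C ...`.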
In particular embodiments, the data collection device660may include an input/output (I/O) interface626configured to receive inputs from and output instructions to sensors, users, or/and the vehicle. The I/O interface may include circuits and components for communication and signal conversion (e.g., analog-to-digital converters, digital-to-analog converters). The I/O interface626may be connected to the integrated sensors (e.g., an IMU632, cameras634, sensors636) and the vehicle sensors (e.g., a GPS642, cameras644, sensors646) for sending instructions to and receiving data from these sensors. For example, the I/O interface626may be connected to an image-capturing device configured to recognize motion or gesture-based inputs from passengers, a microphone configured to detect and record speech or dialog uttered in the passenger compartment, a heat sensor to detect the temperature in the passenger compartment, and any other suitable sensors. As another example, the I/O interface626may include an audio device configured to provide audio outputs (such as alerts, instructions, or other information) to users and/or receive audio inputs, such as audio commands, which may be interpreted by a voice recognition system or any other command interface. In particular embodiments, the data collection device660may include one or more displays as shown inFIGS.6B-C. The data collection device660may include a front display604, a rear display610, and a lighting controller622. The front display604may be designed to face the outside of the vehicle so that it is visible to, e.g., ride requestors, and the rear display610may be designed to face the interior of the vehicle so that it is visible to, e.g., the passengers. The processor618may control information displayed on the rear display610and front display604. As described herein, each display may be designed to display information to different intended users, depending on the positioning of the users and the data collection device660.
The data collection device660may control the front and rear displays604and610based on display data of the data collection device660. The display data may include stored display patterns, sequences, colors, text, animation or other data to be displayed on the front and/or rear display. The display data may also include algorithms for generating content and controlling how it is displayed. The generated content, for example, may be personalized based on information received from the transportation management system, any third-party system, the vehicle, and the computing devices of the provider and/or requestor. In particular embodiments, display data may be stored in the volatile memory628(e.g., a random-access memory (RAM)) or/and in the non-volatile storage620(e.g., a hard disk drive, a solid-state drive, a flash drive, a compact disk, etc.). FIG.6Billustrates a front view602of an example data collection device660. A front view602of the data collection device660may include a front display604. In particular embodiments, the front display604may include a secondary region or separate display606. As shown inFIG.6B, the front display604may include various display technologies including, but not limited to, one or more liquid crystal displays (LCDs), one or more arrays of light emitting diodes (LEDs), AMOLED, or other display technologies. In particular embodiments, the front display604may include a cover that divides the display into multiple regions. In particular embodiments, separate displays may be associated with each region. In particular embodiments, the front display604may be configured to show colors, text, animation, patterns, color patterns, or any other suitable identifying information to requestors and other users external to a provider vehicle (e.g., at a popular pick-up location, requestors may quickly identify their respective rides and disregard the rest based on the identifying information shown).
In particular embodiments, the secondary region or separate display606may be configured to display the same, or contrasting, information as front display604. FIG.6Cillustrates a rear view608of an example data collection device660. The rear view608may include a rear display610, a button612, one or more light sources614, a connector616, and one or more sensors619. As with the front display604, the rear display610may include various display technologies including, but not limited to, one or more liquid crystal displays (LCDs), one or more arrays of light emitting diodes (LEDs), AMOLED, or other display technologies. The rear display610may be configured to display information to the provider, the requestor, or other passengers in the passenger compartment of the vehicle. In particular embodiments, rear display610may be configured to provide information to people who are external to and behind the provider vehicle. Information may be conveyed via, e.g., scrolling text, color, patterns, animation, and any other visual display. As further shown inFIG.6C, the data collection device660may include a power button612or any other suitable user interface that can be used to turn the device660on or off. In particular embodiments, power button612may be a hardware button or switch that physically controls whether power is provided to the data collection device660. Alternatively, power button612may be a soft button that initiates a startup/shutdown procedure managed by software and/or firmware instructions. Additionally, the data collection device660may include one or more light features614(such as one or more LEDs or other light sources) configured to illuminate areas adjacent to the device660and/or provide status signals. In particular embodiments, the data collection device660may include a lighting controller to control the colors and/or other lighting displayed by the front display604, or/and the rear display610.
The lighting controller may include rules and algorithms for controlling the displays so that the intended information is conveyed. For example, to help a matched provider and requestor find each other at a pick-up location, the lighting controller may obtain instructions that the color blue is to be used for identification. In response, the front display604may display blue and the lighting controller may cause the light features614to display blue so that the ride provider would know what color to look for. FIG.7illustrates an example block diagram of a transportation management environment for matching ride requestors with autonomous vehicles. In particular embodiments, the environment may include various computing entities, such as a user computing device730of a user701(e.g., a ride provider or requestor), a transportation management system760, an autonomous vehicle740, and one or more third-party systems770. The computing entities may be communicatively connected over any suitable network710. As an example and not by way of limitation, one or more portions of network710may include an ad hoc network, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular network, or a combination of any of the above. In particular embodiments, any suitable network arrangement and protocol enabling the computing entities to communicate with each other may be used. AlthoughFIG.7illustrates a single user device730, a single transportation management system760, a single vehicle740, a plurality of third-party systems770, and a single network710, this disclosure contemplates any suitable number of each of these entities.
As an example and not by way of limitation, the network environment may include multiple users701, user devices730, transportation management systems760, autonomous vehicles740, third-party systems770, and networks710. The user device730, transportation management system760, autonomous vehicle740, and third-party system770may be communicatively connected or co-located with each other in whole or in part. These computing entities may communicate via different transmission technologies and network types. For example, the user device730and the vehicle740may communicate with each other via a cable or short-range wireless communication (e.g., Bluetooth, NFC, WI-FI, etc.), and together they may be connected to the Internet via a cellular network that is accessible to either one of the devices (e.g., the user device730may be a smartphone with LTE connection). The transportation management system760and third-party system770, on the other hand, may be connected to the Internet via their respective LAN/WLAN networks and Internet Service Providers (ISP).FIG.7illustrates transmission links750that connect user device730, autonomous vehicle740, transportation management system760, and third-party system770to communication network710. This disclosure contemplates any suitable transmission links750, including, e.g., wire connections (e.g., USB, Lightning, Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless connections (e.g., WI-FI, WiMAX, cellular, satellite, NFC, Bluetooth), optical connections (e.g., Synchronous Optical Networking (SONET), Synchronous Digital Hierarchy (SDH)), any other wireless communication technologies, and any combination thereof. In particular embodiments, one or more links750may connect to one or more networks710, which may include in part, e.g., an ad-hoc network, an intranet, an extranet, VPN, LAN, WLAN, WAN, WWAN, MAN, PSTN, a cellular network, a satellite network, or any combination thereof.
The computing entities need not necessarily use the same type of transmission link750. For example, the user device730may communicate with the transportation management system via a cellular network and the Internet, but communicate with the autonomous vehicle740via Bluetooth or a physical wire connection. In particular embodiments, the transportation management system760may fulfill ride requests for one or more users701by dispatching suitable vehicles. The transportation management system760may receive any number of ride requests from any number of ride requestors701. In particular embodiments, a ride request from a ride requestor701may include an identifier that identifies the ride requestor in the system760. The transportation management system760may use the identifier to access and store the ride requestor's701information, in accordance with the requestor's701privacy settings. The ride requestor's701information may be stored in one or more data stores (e.g., a relational database system) associated with and accessible to the transportation management system760. In particular embodiments, ride requestor information may include profile information about a particular ride requestor701. In particular embodiments, the ride requestor701may be associated with one or more categories or types, through which the ride requestor701may be associated with aggregate information about certain ride requestors of those categories or types. 
Ride information may include, for example, preferred pick-up and drop-off locations, driving preferences (e.g., safety comfort level, preferred speed, rates of acceleration/deceleration, safety distance from other vehicles when travelling at various speeds, route, etc.), entertainment preferences and settings (e.g., preferred music genre or playlist, audio volume, display brightness, etc.), temperature settings, whether conversation with the driver is welcomed, frequent destinations, historical riding patterns (e.g., time of day of travel, starting and ending locations, etc.), preferred language, age, gender, or any other suitable information. In particular embodiments, the transportation management system760may classify a user701based on known information about the user701(e.g., using machine-learning classifiers), and use the classification to retrieve relevant aggregate information associated with that class. For example, the system760may classify a user701as a young adult and retrieve relevant aggregate information associated with young adults, such as the type of music generally preferred by young adults. Transportation management system760may also store and access ride information. Ride information may include locations related to the ride, traffic data, route options, optimal pick-up or drop-off locations for the ride, or any other suitable information associated with a ride. As an example and not by way of limitation, when the transportation management system760receives a request to travel from San Francisco International Airport (SFO) to Palo Alto, California, the system760may access or generate any relevant ride information for this particular ride request. 
The ride information may include, for example, preferred pick-up locations at SFO; alternate pick-up locations in the event that a pick-up location is incompatible with the ride requestor (e.g., the ride requestor may be disabled and cannot access the pick-up location) or the pick-up location is otherwise unavailable due to construction, traffic congestion, changes in pick-up/drop-off rules, or any other reason; one or more routes to navigate from SFO to Palo Alto; preferred off-ramps for a type of user; or any other suitable information associated with the ride. In particular embodiments, portions of the ride information may be based on historical data associated with historical rides facilitated by the system760. For example, historical data may include aggregate information generated based on past ride information, which may include any ride information described herein and telemetry data collected by sensors in autonomous vehicles and/or user devices. Historical data may be associated with a particular user (e.g., that particular user's preferences, common routes, etc.), a category/class of users (e.g., based on demographics), and/or all users of the system760. For example, historical data specific to a single user may include information about past rides that particular user has taken, including the locations at which the user is picked up and dropped off, music the user likes to listen to, traffic information associated with the rides, time of the day the user most often rides, and any other suitable information specific to the user. As another example, historical data associated with a category/class of users may include, e.g., common or popular ride preferences of users in that category/class, such as teenagers preferring pop music, ride requestors who frequently commute to the financial district may prefer to listen to the news, etc. 
As yet another example, historical data associated with all users may include general usage trends, such as traffic and ride patterns. Using historical data, the system 760 in particular embodiments may predict and provide ride suggestions in response to a ride request. In particular embodiments, the system 760 may use machine-learning algorithms, such as neural networks, regression algorithms, instance-based algorithms (e.g., k-Nearest Neighbor), decision-tree algorithms, Bayesian algorithms, clustering algorithms, association-rule-learning algorithms, deep-learning algorithms, dimensionality-reduction algorithms, ensemble algorithms, and any other suitable machine-learning algorithms known to persons of ordinary skill in the art. The machine-learning models may be trained using any suitable training algorithm, including supervised learning based on labeled training data, unsupervised learning based on unlabeled training data, and/or semi-supervised learning based on a mixture of labeled and unlabeled training data. In particular embodiments, transportation management system 760 may include one or more server computers. Each server may be a unitary server or a distributed server spanning multiple computers or multiple datacenters. The servers may be of various types, such as, for example and without limitation, web server, news server, mail server, message server, advertising server, file server, application server, exchange server, database server, proxy server, another server suitable for performing functions or processes described herein, or any combination thereof. In particular embodiments, each server may include hardware, software, or embedded logic components or a combination of two or more such components for carrying out the appropriate functionalities implemented or supported by the server. In particular embodiments, transportation management system 760 may include one or more data stores.
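One of the algorithm families named above, instance-based learning (k-Nearest Neighbor), can be sketched on historical ride data: a ride suggestion is taken from the most similar past rides. The feature encoding (hour of day, weekday index) and the example rides are illustrative assumptions, not part of the disclosure.

```python
import math
from collections import Counter

# Historical rides: (hour_of_day, weekday) -> pick-up location used.
historical_rides = [
    ((8, 1), "SFO Terminal 2"),
    ((9, 2), "SFO Terminal 2"),
    ((22, 5), "SFO Intl Terminal"),
    ((23, 6), "SFO Intl Terminal"),
]

def knn_predict(query, examples, k=3):
    """Suggest the majority label among the k nearest historical rides."""
    nearest = sorted(examples, key=lambda ex: math.dist(ex[0], query))[:k]
    labels = Counter(label for _, label in nearest)
    return labels.most_common(1)[0][0]

print(knn_predict((9, 1), historical_rides))  # prints "SFO Terminal 2"
```

A real system would use far richer features and distance metrics; the point is only the instance-based "look up similar past rides" pattern.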
The data stores may be used to store various types of information, such as ride information, ride requestor information, ride provider information, historical information, third-party information, or any other suitable type of information. In particular embodiments, the information stored in the data stores may be organized according to specific data structures. In particular embodiments, each data store may be a relational, columnar, correlation, or any other suitable type of database system. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases. Particular embodiments may provide interfaces that enable a user device 730 (which may belong to a ride requestor or provider), a transportation management system 760, vehicle system 740, or a third-party system 770 to process, transform, manage, retrieve, modify, add, or delete the information stored in the data store. In particular embodiments, transportation management system 760 may include an authorization server (or any other suitable component(s)) that allows users 701 to opt-in to or opt-out of having their information and actions logged, recorded, or sensed by transportation management system 760 or shared with other systems (e.g., third-party systems 770). In particular embodiments, a user 701 may opt-in or opt-out by setting appropriate privacy settings. A privacy setting of a user may determine what information associated with the user may be logged, how information associated with the user may be logged, when information associated with the user may be logged, who may log information associated with the user, with whom information associated with the user may be shared, and for what purposes information associated with the user may be logged or shared.
Authorization servers may be used to enforce one or more privacy settings of the users 701 of transportation management system 760 through blocking, data hashing, anonymization, or other suitable techniques as appropriate. In particular embodiments, third-party system 770 may be a network-addressable computing system that may provide HD maps or host GPS maps, customer reviews, music or content, weather information, or any other suitable type of information. Third-party system 770 may generate, store, receive, and send relevant data, such as, for example, map data, customer review data from a customer review website, weather data, or any other suitable type of data. Third-party system 770 may be accessed by the other computing entities of the network environment either directly or via network 710. For example, user device 730 may access the third-party system 770 via network 710, or via transportation management system 760. In the latter case, if credentials are required to access the third-party system 770, the user 701 may provide such information to the transportation management system 760, which may serve as a proxy for accessing content from the third-party system 770. In particular embodiments, user device 730 may be a mobile computing device such as a smartphone, tablet computer, or laptop computer. User device 730 may include one or more processors (e.g., CPU and/or GPU), memory, and storage. An operating system and applications may be installed on the user device 730, such as, e.g., a transportation application associated with the transportation management system 760, applications associated with third-party systems 770, and applications associated with the operating system. User device 730 may include functionality for determining its location, direction, or orientation, based on integrated sensors such as GPS, compass, gyroscope, or accelerometer.
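The blocking and data-hashing enforcement techniques mentioned above can be sketched as a per-field policy applied before anything is logged. The policy names ("allow", "hash", "block") and the default-deny choice are illustrative assumptions, not terms from the disclosure.

```python
import hashlib

def enforce_privacy(record: dict, settings: dict) -> dict:
    """Apply a user's per-field privacy settings before logging a record."""
    out = {}
    for field, value in record.items():
        policy = settings.get(field, "block")  # unset fields are not logged
        if policy == "allow":
            out[field] = value
        elif policy == "hash":
            # Pseudonymize with a one-way hash (truncated for readability).
            out[field] = hashlib.sha256(str(value).encode()).hexdigest()[:16]
        # "block": field is omitted entirely
    return out

record = {"name": "Alice", "pickup": "SFO", "destination": "Palo Alto"}
settings = {"name": "hash", "pickup": "allow"}
print(enforce_privacy(record, settings))
```

Running this logs the pickup in the clear, replaces the name with a stable pseudonym, and drops the destination, mirroring the allow/hash/block enforcement an authorization server might perform.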
User device 730 may also include wireless transceivers for wireless communication and may support wireless communication protocols such as Bluetooth, near-field communication (NFC), infrared (IR) communication, WI-FI, and/or 2G/3G/4G/LTE mobile communication standards. User device 730 may also include one or more cameras, scanners, touchscreens, microphones, speakers, and any other suitable input-output devices. In particular embodiments, the vehicle 740 may be an autonomous vehicle and equipped with an array of sensors 744, a navigation system 746, and a ride-service computing device 748. In particular embodiments, a fleet of autonomous vehicles 740 may be managed by the transportation management system 760. The fleet of autonomous vehicles 740, in whole or in part, may be owned by the entity associated with the transportation management system 760, or they may be owned by a third-party entity relative to the transportation management system 760. In either case, the transportation management system 760 may control the operations of the autonomous vehicles 740, including, e.g., dispatching select vehicles 740 to fulfill ride requests, instructing the vehicles 740 to perform select operations (e.g., head to a service center or charging/fueling station, pull over, stop immediately, self-diagnose, lock/unlock compartments, change music station, change temperature, and any other suitable operations), and instructing the vehicles 740 to enter select operation modes (e.g., operate normally, drive at a reduced speed, drive under the command of human operators, and any other suitable operational modes). In particular embodiments, the autonomous vehicles 740 may receive data from and transmit data to the transportation management system 760 and the third-party system 770.
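One possible dispatch policy for fulfilling ride requests, sending the nearest available vehicle 740, can be sketched with a haversine distance over fleet coordinates. The policy and all coordinates are illustrative assumptions; the disclosure leaves the dispatch criteria open.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def dispatch(request_loc, fleet):
    """Select the available vehicle closest to the ride requestor."""
    available = [v for v in fleet if v["available"]]
    return min(available, key=lambda v: haversine_km(v["loc"], request_loc))

fleet = [
    {"id": "AV-1", "loc": (37.62, -122.38), "available": True},   # near SFO
    {"id": "AV-2", "loc": (37.44, -122.14), "available": True},   # Palo Alto
    {"id": "AV-3", "loc": (37.62, -122.38), "available": False},  # in service
]
print(dispatch((37.61, -122.39), fleet)["id"])  # prints "AV-1"
```

A production dispatcher would also weigh ETA with traffic, battery/fuel state, and scheduled maintenance; nearest-available is only the simplest instance of the "dispatch select vehicles" operation described above.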
Examples of received data may include, e.g., instructions, new software or software updates, maps, 3D models, trained or untrained machine-learning models, location information (e.g., location of the ride requestor, the autonomous vehicle 740 itself, other autonomous vehicles 740, and target destinations such as service centers), navigation information, traffic information, weather information, entertainment content (e.g., music, video, and news), ride requestor information, ride information, and any other suitable information. Examples of data transmitted from the autonomous vehicle 740 may include, e.g., telemetry and sensor data, determinations/decisions based on such data, vehicle condition or state (e.g., battery/fuel level, tire and brake conditions, sensor condition, speed, odometer, etc.), location, navigation data, passenger inputs (e.g., through a user interface in the vehicle 740, passengers may send/receive data to the transportation management system 760 and/or third-party system 770), and any other suitable data. In particular embodiments, autonomous vehicles 740 may also communicate with each other as well as other traditional human-driven vehicles, including those managed and not managed by the transportation management system 760. For example, one vehicle 740 may communicate to another vehicle data regarding their respective locations, conditions, statuses, sensor readings, and any other suitable information. In particular embodiments, vehicle-to-vehicle communication may take place over a direct short-range wireless connection (e.g., WI-FI, Bluetooth, NFC) and/or over a network (e.g., the Internet or via the transportation management system 760 or third-party system 770). In particular embodiments, an autonomous vehicle 740 may obtain and process sensor/telemetry data. Such data may be captured by any suitable sensors.
For example, the vehicle 740 may have a Light Detection and Ranging (LiDAR) sensor array of multiple LiDAR transceivers that are configured to rotate 360°, emitting pulsed laser light and measuring the reflected light from objects surrounding the vehicle 740. In particular embodiments, LiDAR transmitting signals may be steered by use of a gated light valve, which may be a MEMS device that directs a light beam using the principle of light diffraction. Such a device may not use a gimbaled mirror to steer light beams in 360° around the autonomous vehicle. Rather, the gated light valve may direct the light beam into one of several optical fibers, which may be arranged such that the light beam may be directed to many discrete positions around the autonomous vehicle. Thus, data may be captured in 360° around the autonomous vehicle, but no rotating parts may be necessary. A LiDAR is an effective sensor for measuring distances to targets, and as such may be used to generate a three-dimensional (3D) model of the external environment of the autonomous vehicle 740. As an example and not by way of limitation, the 3D model may represent the external environment including objects such as other cars, curbs, debris, objects, and pedestrians up to a maximum range of the sensor arrangement (e.g., 50, 100, or 200 meters). As another example, the autonomous vehicle 740 may have optical cameras pointing in different directions. The cameras may be used for, e.g., recognizing roads, lane markings, street signs, traffic lights, police, other vehicles, and any other visible objects of interest. To enable the vehicle 740 to "see" at night, infrared cameras may be installed. In particular embodiments, the vehicle may be equipped with stereo vision for, e.g., spotting hazards such as pedestrians or tree branches on the road. As another example, the vehicle 740 may have radars for, e.g., detecting other vehicles and/or hazards afar.
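Turning LiDAR returns into points of the 3D environment model described above can be sketched as a spherical-to-Cartesian conversion with a maximum-range cutoff. Real pipelines add calibration, motion compensation, and filtering; the sample returns and the 200 m cutoff below are illustrative (the disclosure mentions 50, 100, or 200 meters as example maximum ranges).

```python
import math

MAX_RANGE_M = 200.0  # one of the example maximum ranges of the sensor arrangement

def to_point(azimuth_deg, elevation_deg, range_m):
    """Convert one LiDAR return (angles + measured range) to an (x, y, z) point."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

def build_point_cloud(returns):
    """Keep only in-range returns and convert them to 3D model points."""
    return [to_point(az, el, r) for az, el, r in returns if r <= MAX_RANGE_M]

# (azimuth°, elevation°, range m); the last return is beyond the sensor's range.
returns = [(0.0, 0.0, 10.0), (90.0, 0.0, 5.0), (45.0, 10.0, 250.0)]
cloud = build_point_cloud(returns)
print(len(cloud), cloud[0])  # 2 in-range points; first ≈ (10.0, 0.0, 0.0)
```

Accumulating such points over a full 360° sweep is what yields the 3D model of cars, curbs, and pedestrians around the vehicle.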
Furthermore, the vehicle 740 may have ultrasound equipment for, e.g., parking and obstacle detection. In addition to sensors enabling the vehicle 740 to detect, measure, and understand the external world around it, the vehicle 740 may further be equipped with sensors for detecting and self-diagnosing the vehicle's own state and condition. For example, the vehicle 740 may have wheel sensors for, e.g., measuring velocity; global positioning system (GPS) for, e.g., determining the vehicle's current geolocation; and/or inertial measurement units, accelerometers, gyroscopes, and/or odometer systems for movement or motion detection. While the description of these sensors provides particular examples of utility, one of ordinary skill in the art would appreciate that the utilities of the sensors are not limited to those examples. Further, while an example of a utility may be described with respect to a particular type of sensor, it should be appreciated that the utility may be achieved using any combination of sensors. For example, an autonomous vehicle 740 may build a 3D model of its surroundings based on data from its LiDAR, radar, sonar, and cameras, along with a pre-generated map obtained from the transportation management system 760 or the third-party system 770. Although sensors 744 appear in a particular location on autonomous vehicle 740 in FIG. 7, sensors 744 may be located in any suitable location in or on autonomous vehicle 740. Example locations for sensors include the front and rear bumpers, the doors, the front windshield, on the side panel, or any other suitable location. In particular embodiments, the autonomous vehicle 740 may be equipped with a processing unit (e.g., one or more CPUs and GPUs), memory, and storage. The vehicle 740 may thus be equipped to perform a variety of computational and processing tasks, including processing the sensor data, extracting useful information, and operating accordingly.
For example, based on images captured by its cameras and a machine-vision model, the vehicle 740 may identify particular types of objects captured by the images, such as pedestrians, other vehicles, lanes, curbs, and any other objects of interest. In particular embodiments, the autonomous vehicle 740 may have a navigation system 746 responsible for safely navigating the autonomous vehicle 740. In particular embodiments, the navigation system 746 may take as input any type of sensor data from, e.g., a Global Positioning System (GPS) module, inertial measurement unit (IMU), LiDAR sensors, optical cameras, radio frequency (RF) transceivers, or any other suitable telemetry or sensory mechanisms. The navigation system 746 may also utilize, e.g., map data, traffic data, accident reports, weather reports, instructions, target destinations, and any other suitable information to determine navigation routes and particular driving operations (e.g., slowing down, speeding up, stopping, swerving, etc.). In particular embodiments, the navigation system 746 may use its determinations to control the vehicle 740 to operate in prescribed manners and to guide the autonomous vehicle 740 to its destinations without colliding with other objects. Although the physical embodiment of the navigation system 746 (e.g., the processing unit) appears in a particular location on autonomous vehicle 740 in FIG. 7, navigation system 746 may be located in any suitable location in or on autonomous vehicle 740. Example locations for navigation system 746 include inside the cabin or passenger compartment of autonomous vehicle 740, near the engine/battery, near the front seats, rear seats, or in any other suitable location.
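Route determination from traffic-weighted map data can be sketched as a shortest-path search; Dijkstra's algorithm over travel-time edge weights is one conventional choice, shown here with an illustrative road graph (a real navigation system 746 weighs many more inputs, such as weather and accident reports).

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a road graph {node: [(neighbor, minutes), ...]}.

    Edge weights are travel times already adjusted for traffic.
    Returns (total_minutes, path) or None if goal is unreachable.
    """
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, minutes in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (cost + minutes, nbr, path + [nbr]))
    return None

# Illustrative graph: two candidate routes from SFO to Palo Alto.
roads = {
    "SFO": [("US-101", 5), ("I-280", 8)],
    "US-101": [("Palo Alto", 30)],
    "I-280": [("Palo Alto", 24)],
}
print(shortest_route(roads, "SFO", "Palo Alto"))
# → (32, ['SFO', 'I-280', 'Palo Alto'])
```

Re-running the search as traffic updates change the edge weights is one way such a system could pick between, e.g., the US-101 and I-280 routes mentioned earlier in this disclosure's SFO-to-Palo Alto example.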
In particular embodiments, the autonomous vehicle 740 may be equipped with a ride-service computing device 748, which may be a tablet or any other suitable device installed by transportation management system 760 to allow the user to interact with the autonomous vehicle 740, transportation management system 760, other users 701, or third-party systems 770. In particular embodiments, installation of ride-service computing device 748 may be accomplished by placing the ride-service computing device 748 inside autonomous vehicle 740, and configuring it to communicate with the vehicle 740 via a wired or wireless connection (e.g., via Bluetooth). Although FIG. 7 illustrates a single ride-service computing device 748 at a particular location in autonomous vehicle 740, autonomous vehicle 740 may include several ride-service computing devices 748 in several different locations within the vehicle. As an example and not by way of limitation, autonomous vehicle 740 may include four ride-service computing devices 748 located in the following places: one in front of the front-left passenger seat (e.g., the driver's seat in traditional U.S. automobiles), one in front of the front-right passenger seat, and one in front of each of the rear-left and rear-right passenger seats. In particular embodiments, ride-service computing device 748 may be detachable from any component of autonomous vehicle 740. This may allow users to handle ride-service computing device 748 in a manner consistent with other tablet computing devices. As an example and not by way of limitation, a user may move ride-service computing device 748 to any location in the cabin or passenger compartment of autonomous vehicle 740, may hold ride-service computing device 748, or handle ride-service computing device 748 in any other suitable manner. Although this disclosure describes providing a particular computing device in a particular manner, this disclosure contemplates providing any suitable computing device in any suitable manner.
FIG. 8 illustrates an example computer system 800. In particular embodiments, one or more computer systems 800 perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems 800 provide the functionalities described or illustrated herein. In particular embodiments, software running on one or more computer systems 800 performs one or more steps of one or more methods described or illustrated herein or provides the functionalities described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 800. Herein, a reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, a reference to a computer system may encompass one or more computer systems, where appropriate. This disclosure contemplates any suitable number of computer systems 800. This disclosure contemplates computer system 800 taking any suitable physical form. As an example and not by way of limitation, computer system 800 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system 800 may include one or more computer systems 800; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 800 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein.
As an example and not by way of limitation, one or more computer systems 800 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 800 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system 800 includes a processor 802, memory 804, storage 806, an input/output (I/O) interface 808, a communication interface 810, and a bus 812. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, processor 802 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 802 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 804, or storage 806; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 804, or storage 806. In particular embodiments, processor 802 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 802 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 802 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 804 or storage 806, and the instruction caches may speed up retrieval of those instructions by processor 802.
Data in the data caches may be copies of data in memory 804 or storage 806 that are to be operated on by computer instructions; the results of previous instructions executed by processor 802 that are accessible to subsequent instructions or for writing to memory 804 or storage 806; or any other suitable data. The data caches may speed up read or write operations by processor 802. The TLBs may speed up virtual-address translation for processor 802. In particular embodiments, processor 802 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 802 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 802 may include one or more arithmetic logic units (ALUs), be a multi-core processor, or include one or more processors 802. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. In particular embodiments, memory 804 includes main memory for storing instructions for processor 802 to execute or data for processor 802 to operate on. As an example and not by way of limitation, computer system 800 may load instructions from storage 806 or another source (such as another computer system 800) to memory 804. Processor 802 may then load the instructions from memory 804 to an internal register or internal cache. To execute the instructions, processor 802 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 802 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 802 may then write one or more of those results to memory 804.
In particular embodiments, processor 802 executes only instructions in one or more internal registers or internal caches or in memory 804 (as opposed to storage 806 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 804 (as opposed to storage 806 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 802 to memory 804. Bus 812 may include one or more memory buses, as described in further detail below. In particular embodiments, one or more memory management units (MMUs) reside between processor 802 and memory 804 and facilitate accesses to memory 804 requested by processor 802. In particular embodiments, memory 804 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 804 may include one or more memories 804, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. In particular embodiments, storage 806 includes mass storage for data or instructions. As an example and not by way of limitation, storage 806 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 806 may include removable or non-removable (or fixed) media, where appropriate. Storage 806 may be internal or external to computer system 800, where appropriate. In particular embodiments, storage 806 is non-volatile, solid-state memory. In particular embodiments, storage 806 includes read-only memory (ROM).
Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 806 taking any suitable physical form. Storage 806 may include one or more storage control units facilitating communication between processor 802 and storage 806, where appropriate. Where appropriate, storage 806 may include one or more storages 806. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, I/O interface 808 includes hardware, software, or both, providing one or more interfaces for communication between computer system 800 and one or more I/O devices. Computer system 800 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 800. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 808 for them. Where appropriate, I/O interface 808 may include one or more device or software drivers enabling processor 802 to drive one or more of these I/O devices. I/O interface 808 may include one or more I/O interfaces 808, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.
In particular embodiments, communication interface 810 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 800 and one or more other computer systems 800 or one or more networks. As an example and not by way of limitation, communication interface 810 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or any other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 810 for it. As an example and not by way of limitation, computer system 800 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 800 may communicate with a wireless PAN (WPAN) (such as, for example, a Bluetooth WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or any other suitable wireless network or a combination of two or more of these. Computer system 800 may include any suitable communication interface 810 for any of these networks, where appropriate. Communication interface 810 may include one or more communication interfaces 810, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. In particular embodiments, bus 812 includes hardware, software, or both coupling components of computer system 800 to each other.
As an example and not by way of limitation, bus 812 may include an Accelerated Graphics Port (AGP) or any other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 812 may include one or more buses 812, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other types of integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. Herein, "or" is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, "A or B" means "A, B, or both," unless expressly indicated otherwise or indicated otherwise by context.
Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, feature, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages. | 109,908 |
The diagrams depicted herein are illustrative. There can be many variations to the diagrams or the operations described therein without departing from the spirit of the invention. For instance, the actions can be performed in a differing order or actions can be added, deleted or modified. Also, the term "coupled" and variations thereof describe having a communications path between two elements and do not imply a direct connection between the elements with no intervening elements/connections between them. All of these variations are considered a part of the specification. In the accompanying figures and following detailed description of the disclosed embodiments, the various elements illustrated in the figures are provided with two- or three-digit reference numbers. With minor exceptions, the leftmost digit(s) of each reference number correspond to the figure in which its element is first illustrated. DETAILED DESCRIPTION Various embodiments of the invention are described herein with reference to the related drawings. Alternative embodiments of the invention can be devised without departing from the scope of this invention. Various connections and positional relationships (e.g., over, below, adjacent, etc.) are set forth between elements in the following description and in the drawings. These connections and/or positional relationships, unless specified otherwise, can be direct or indirect, and the present invention is not intended to be limiting in this respect. Accordingly, a coupling of entities can refer to either a direct or an indirect coupling, and a positional relationship between entities can be a direct or indirect positional relationship. Moreover, the various tasks and process steps described herein can be incorporated into a more comprehensive procedure or process having additional steps or functionality not described in detail herein.
The following definitions and abbreviations are to be used for the interpretation of the claims and the specification. As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having," "contains" or "containing," or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a composition, a mixture, process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but can include other elements not expressly listed or inherent to such composition, mixture, process, method, article, or apparatus. Additionally, the term "exemplary" is used herein to mean "serving as an example, instance or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The terms "at least one" and "one or more" may be understood to include any integer number greater than or equal to one, i.e., one, two, three, four, etc. The term "a plurality" may be understood to include any integer number greater than or equal to two, i.e., two, three, four, five, etc. The term "connection" may include both an indirect "connection" and a direct "connection." The terms "about," "substantially," "approximately," and variations thereof, are intended to include the degree of error associated with measurement of the particular quantity based upon the equipment available at the time of filing the application. For example, "about" can include a range of ±8% or 5%, or 2% of a given value. For the sake of brevity, conventional techniques related to making and using aspects of the invention may or may not be described in detail herein. In particular, various aspects of computing systems and specific computer programs to implement the various technical features described herein are well known.
Accordingly, in the interest of brevity, many conventional implementation details are only mentioned briefly herein or are omitted entirely without providing the well-known system and/or process details. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand.
There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now to FIG. 1, illustrative cloud computing environment 50 is depicted. As shown, cloud computing environment 50 comprises one or more cloud computing nodes 10 with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone 54A, desktop computer 54B, laptop computer 54C, and/or automobile computer system 54N may communicate. Nodes 10 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment 50 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices 54A-N shown in FIG. 1 are intended to be illustrative only and that computing nodes 10 and cloud computing environment 50 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now to FIG. 2, a set of functional abstraction layers provided by cloud computing environment 50 (FIG. 1) is shown.
It should be understood in advance that the components, layers, and functions shown in FIG. 2 are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer 60 includes hardware and software components. Examples of hardware components include: mainframes 61; RISC (Reduced Instruction Set Computer) architecture based servers 62; servers 63; blade servers 64; storage devices 65; and networks and networking components 66. In some embodiments, software components include network application server software 67 and database software 68. Virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75. In one example, management layer 80 may provide the functions described below. Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management such that required service levels are met.
Service Level Agreement (SLA) planning and fulfillment 85 provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and providing automatic determination of recommended hyper-local data sources and features for use in modeling 96. Referring to FIG. 3, there is shown an embodiment of a processing system 300 for implementing the teachings herein. In this embodiment, the system 300 has one or more central processing units (processors) 21a, 21b, 21c, etc. (collectively or generically referred to as processor(s) 21). In one or more embodiments, each processor 21 may include a reduced instruction set computer (RISC) microprocessor. Processors 21 are coupled to system memory 34 and various other components via a system bus 33. Read only memory (ROM) 22 is coupled to the system bus 33 and may include a basic input/output system (BIOS), which controls certain basic functions of system 300. FIG. 3 further depicts an input/output (I/O) adapter 27 and a network adapter 26 coupled to the system bus 33. I/O adapter 27 may be a small computer system interface (SCSI) adapter that communicates with a hard disk 23 and/or tape storage drive 25 or any other similar component. I/O adapter 27, hard disk 23, and tape storage device 25 are collectively referred to herein as mass storage 24. Operating system 40 for execution on the processing system 300 may be stored in mass storage 24. A network adapter 26 interconnects bus 33 with an outside network 36 enabling data processing system 300 to communicate with other such systems.
A screen (e.g., a display monitor) 35 is connected to system bus 33 by display adapter 32, which may include a graphics adapter to improve the performance of graphics intensive applications and a video controller. In one embodiment, adapters 27, 26, and 32 may be connected to one or more I/O busses that are connected to system bus 33 via an intermediate bus bridge (not shown). Suitable I/O buses for connecting peripheral devices such as hard disk controllers, network adapters, and graphics adapters typically include common protocols, such as the Peripheral Component Interconnect (PCI). Additional input/output devices are shown as connected to system bus 33 via user interface adapter 28 and display adapter 32. A keyboard 29, mouse 30, and speaker 31 are all interconnected to bus 33 via user interface adapter 28, which may include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit. In exemplary embodiments, the processing system 300 includes a graphics processing unit 41. Graphics processing unit 41 is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. In general, graphics processing unit 41 is very efficient at manipulating computer graphics and image processing and has a highly parallel structure that makes it more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel. Thus, as configured in FIG. 3, the system 300 includes processing capability in the form of processors 21, storage capability including system memory 34 and mass storage 24, input means such as keyboard 29 and mouse 30, and output capability including speaker 31 and display 35. In one embodiment, a portion of system memory 34 and mass storage 24 collectively store an operating system to coordinate the functions of the various components shown in FIG. 3.
In exemplary embodiments, a system for providing automatic determination of recommended hyper-local data sources and features for use in modeling is provided. In exemplary embodiments, the systems, methods, and techniques disclosed herein may allow for the automatic identification of one or more hyper-local features to be used in generating a predictive model on a workbench software platform based on the client data provided and a description of the use-case. The term workbench software platform is used herein to describe a software platform that provides modeling tools to allow a user to generate and train a model based on user-submitted client data (e.g., sales data, product inventory data, etc.) as well as hyper-local data provided by the platform. The techniques described herein may be implemented in conjunction with, but not limited to, for example, IBM's Metropulse Analytics Workbench. Traditionally, workbench software platforms may provide users with access to hyper-local data sources that may be combined with user-submitted client data in generating and/or training a model. As will be understood by those of skill in the art, hyper-local data sources may include data sources relating to local activity, such as, for example, a neighborhood profile, seasonal factors, shopper demographics, social influences, brand affinity, purchase trends, and other such types of data that correspond to particular locations or locales. Each hyper-local data source may include a number of features. For example, demographic hyper-local data sources may include features such as tax distribution, average population per unit, income, and the like. As will be appreciated by those of skill in the art, different data sources and different features may be useful in generating different models based on different use-cases and different client data. Thus, selection of appropriate data sources and/or features can be critical to the performance of the model.
However, such hyper-local data sources are commonly confidential and users may not be permitted to copy or export the hyper-local data sources, which can prevent a user from performing an analysis on the hyper-local data sources and features to attempt to develop any insights that may assist with improved feature selection for model training. Aspects of the present invention attempt to solve the problem of data source/feature selection for the generation of models by generating a feature profile relation graph that provides an indication of the strength of relationships between different features and client data profiles and use-case profiles. The feature profile relation graph may be generated over time based on various models trained on the workbench software platform by different users and can be updated based on every new model that is trained on the platform. When similar relationships between features and data profiles are observed in different models, those relationships within the feature profile relation graph may be strengthened. When a user desires to create a new model, the user may input the client data and an indication of the use-case into the system, and the system may automatically determine and rank the best hyper-local features for use in the new model, by identifying which hyper-local features in the feature profile relation graph have the strongest ties to data profiles and use-cases similar to those of the new model. In this way, the system may remove the guesswork in selecting hyper-local features for use in a new model, allowing users to generate more accurate models with less effort, while also preserving the integrity and confidentiality of the hyper-local data sources provided by the workbench software platform. FIG. 4 depicts a block diagram of a processing system 400 for providing automatic determination of recommended hyper-local data sources and features for use in modeling, according to aspects of the present disclosure.
The various components, modules, engines, etc. described regarding FIG. 4 can be implemented as instructions stored on a computer-readable storage medium, as hardware modules, as special-purpose hardware (e.g., application specific hardware, application specific integrated circuits (ASICs), application specific special processors (ASSPs), field programmable gate arrays (FPGAs), as embedded controllers, hardwired circuitry, etc.), or as some combination or combinations of these. According to aspects of the present disclosure, the engine(s) described herein can be a combination of hardware and programming. The programming can be processor executable instructions stored on a tangible memory, and the hardware can include the processing device 402 for executing those instructions. Thus a system memory (e.g., memory 404) can store program instructions that when executed by the processing device 402 implement the engines described herein. Other engines can also be utilized to include other features and functionality described in other examples herein. The processing system 400 includes the processing device 402, the memory 404, a model generation engine 406, a data profiling engine 408, a model performance profiling engine 410, a use-case profiling engine 412, a feature profile relation graph generation engine 414 and a hyper-local feature recommendation engine 416. According to some embodiments, processing system 400 may be a workbench software platform. The processing system 400 can be configured to communicate with a user device 420, which may display data to and receive user inputs from a user 421.
According to some embodiments, the processing system 400 may communicate with user device 420, data store 422 and hyper-local data lake 430 via a communications network that may be one or more of, or a combination of, public (e.g., Internet), private (e.g., local area network, wide area network, virtual private network), and may include wireless and wireline transmission systems (e.g., satellite, cellular network, terrestrial networks, etc.). In exemplary embodiments, user devices 420 can include, but are not limited to, a smartphone, a wearable device such as a smartwatch, an augmented reality headset, a tablet, a smart speaker, a television, a computer system such as the one shown in FIG. 3, or any other suitable electronic device. The processing system may store and access data via a connected data store 422, and may also access and use hyper-local data provided via the hyper-local data lake 430. According to some embodiments, hyper-local data may be data that corresponds to a location or a locality, such as a city, a street, a corner block, a store, a neighborhood, a specific location on a map, or any other feasible granularity of location-associated data that may be collected and/or stored as part of the hyper-local data lake 430. For example, in various embodiments, hyper-local data may be data that pertains to an area that is approximately 1,000 square meters, 2,500 square meters, 5,000 square meters, 10,000 square meters, or any other size of area or areas as may be appropriate to cover one or more localities that may be of interest to a business or other organization seeking to make data-driven decisions. In some embodiments, hyper-local data lake 430 may be a set of databases in a workbench software platform where the hyper-local data is stored.
The model generation engine 406 allows a user to build and train predictive models in relation to specified use-cases that fuse user-provided client data with hyper-local data sources made available by the processing system 400, for example, via the hyper-local data lake 430. As will be appreciated by those of skill in the art, predictive models may include, for example, models that predict sales of products and services in various locations or models that determine how to distribute sales resources such as the locations of vending machines. The model generation engine 406 may be configured to allow a user to specify a desired granularity of location-based information (e.g., how many stores to put in a city vs. how many kiosks to put on a street). Granularity of location-based data can be managed using geo-hashing, which is a technique that can divide the world into high, medium and low level data (e.g., state vs. city vs. street). Features may similarly be broken into different levels of granularity (e.g., average income in the state vs. average income in a city vs. average income on a given street). In some embodiments, model generation engine 406 may train a model (e.g., using machine learning techniques) based on user-submitted client data, a description of a use-case, and user-specified hyper-local data sources and features. According to some embodiments, processing system 400 may conduct an ongoing learning phase to learn the ties between hyper-local data features and discovered data/use-case profiles based on each instance of the training of a predictive model. The learning phase may include generating a data profile via the data profiling engine 408, evaluating the performance of the model via the model performance profiling engine 410 and profiling the use-case via the use-case profiling engine 412, as described further below.
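The geo-hashing technique mentioned above can be illustrated with a short sketch. The following is an illustrative implementation of the standard base32 geohash encoding, not the platform's actual code; truncating the hash to fewer characters yields the coarser levels of granularity described above (state vs. city vs. street).

```python
# Standard geohash base32 alphabet (omits a, i, l, o)
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=6):
    """Encode a latitude/longitude into a geohash string by alternately
    bisecting the longitude and latitude ranges; shorter prefixes cover
    larger areas (coarser granularity)."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    chars, bit, ch, even = [], 0, 0, True
    while len(chars) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            ch = (ch << 1) | 1  # point is in the upper half of the range
            rng[0] = mid
        else:
            ch <<= 1            # point is in the lower half of the range
            rng[1] = mid
        even = not even
        bit += 1
        if bit == 5:            # every 5 bits become one base32 character
            chars.append(_BASE32[ch])
            bit, ch = 0, 0
    return "".join(chars)

# The same point at two granularities: the fine hash extends the coarse one
coarse = geohash(57.64911, 10.40744, 4)
fine = geohash(57.64911, 10.40744, 8)
```

Because a geohash is a prefix code, grouping records by the first few characters of their hashes buckets them into progressively larger areas, which matches the high/medium/low-level division of location-based data described above.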
According to some embodiments, the result of the learning phase may be the construction of a feature profile relation graph, such as the example feature profile relation graph 500 shown in FIG. 5. The data profiling engine 408 can be applied to any external data (e.g., client data) fed into the workbench software platform (i.e., processing system 400) to generate a data profile that includes one or more data profile vectors. The data profile may serve as a means of characterizing client data such that it may be compared to other client data (e.g., provided by other users in different models) to determine degrees of similarity between data sets. According to some embodiments, data profiling engine 408 may profile a client data set based on the text and associated metadata using natural language processing (NLP) techniques to build various profile vectors for the data. As will be appreciated by those of skill in the art, NLP techniques may be applied to the data set to determine the meaning and/or context of aspects of the data. According to some embodiments, such data profiling may be achieved by prefetching various keywords and phrases specific to various domains and categories that are used in building the profiling models and applying knowledge representation and reasoning to client data in view of the pre-fetched keywords to generate profile vectors. As will be appreciated by those of skill in the art, knowledge representation and reasoning is a field in artificial intelligence in which data is represented as a knowledge graph for various reasoning tasks. For example, profile vectors may be neural word embeddings that can be generated using a process such as Word2Vec. As will be appreciated by those of skill in the art, Word2Vec is a two-layer neural network that processes text, receiving a text corpus as input and outputting a set of vectors that are feature vectors for the words in the corpus.
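Training Word2Vec requires a text corpus, so as a minimal self-contained stand-in for the keyword-based profiling described above, the sketch below scores a client data description against pre-fetched domain keyword lists to build a simple profile vector; the domain names and keyword sets are illustrative assumptions, not taken from the disclosure.

```python
import re
from collections import Counter

# Hypothetical pre-fetched keyword lists per domain (illustrative only)
DOMAIN_KEYWORDS = {
    "electronics": {"television", "phone", "laptop", "speaker", "battery"},
    "groceries": {"produce", "dairy", "snack", "beverage", "bakery"},
    "fashion": {"shirt", "dress", "shoe", "denim", "jacket"},
}

def profile_vector(text):
    """Build a normalized per-domain keyword-hit vector for a client
    data description; the highest component suggests the data's domain."""
    tokens = Counter(re.findall(r"[a-z]+", text.lower()))
    hits = [sum(tokens[w] for w in kws) for kws in DOMAIN_KEYWORDS.values()]
    total = sum(hits) or 1  # avoid division by zero for texts with no hits
    return {domain: h / total for domain, h in zip(DOMAIN_KEYWORDS, hits)}

vec = profile_vector("Weekly laptop and phone sales per store, with battery accessories")
domain = max(vec, key=vec.get)  # discovered domain of the client data set
```

A production system would replace the keyword counts with learned embeddings, but the output shape (a vector that supports similarity comparison against other profiles) is the same idea.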
According to some embodiments, the resulting data profile vectors may be used to determine the domain and/or category of the client data set. For example, domain and categories of the data may be represented as embeddings/vectors that enable various similarity measures. As used herein, a domain may refer to a high-level categorization of goods and services such as, for example, fashion, electronics, groceries, and the like, whereas a category may represent a more granular categorization of a given domain. For example, electronics may be a domain and categories may include televisions, cellular phones, speakers, laptops, and the like. As shown in FIG. 5, a data profile generated by data profiling engine 408 may be represented as a client data profile node 502 in a feature profile relation graph 500 generated by feature profile relation graph generation engine 414. The model performance profiling engine 410 may determine, based on the client data and hyper-local data sources used in a model, the features from the hyper-local data sources used in the model that contributed to the model performance. In other words, of the hyper-local data sources used in the model, the model performance profiling engine 410 may determine the impact on the model of each of the corresponding features. The impact or contribution that a feature has on the model may be referred to as the feature importance. According to some embodiments, the model performance profiling engine 410 may determine the relative feature importance of each feature to the model and may rank the features in order of importance.
As will be appreciated by those of skill in the art, the feature importance of features may be determined using feature selection algorithms, by penalizing features that are not very important by running different regularization methods such as Lasso or Ridge and zeroing out the coefficients of those parameters in the model, or by using any other techniques that are known or developed in the domain of Explainable Artificial Intelligence (AI), in which various known techniques exist that can be used to determine feature importance and/or understand which features dominate more in predictions and the like. As shown in FIG. 5, features having impact on one or more models may be represented as hyper-local feature nodes 504 in a feature profile relation graph 500 generated by feature profile relation graph generation engine 414. Each hyper-local feature node 504 may represent a different feature or a feature of a different level of granularity than another hyper-local feature node 504 of the same feature type. For example, a first hyper-local feature node 504 may represent "average income—city" whereas a second hyper-local feature node 504 may represent "average income—street." The use-case profiling engine 412 generates a use-case profile based on a user-input natural text description of the use-case of the model. For example, a user may enter "determining locations of vending machines within a city" as a use-case of the model. As will be appreciated by those of skill in the art, semantic representation and comparison of natural language texts may be achieved by using a word representation that encodes similarity, utilizing techniques and tools such as distributional similarity-based representations, natural language interpretation, sentence encoding, bidirectional long short-term memory (BiLSTM) encoding, neural network word embedding (e.g., Word2Vec), convolutional neural networks, word vectors, and the like.
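As one concrete instance of the Explainable AI techniques mentioned above (a stand-in for, not necessarily the method used by, the model performance profiling engine 410), permutation importance measures how much a model's error grows when a single feature's column is shuffled: features whose shuffling degrades predictions the most are the most important.

```python
import random

def permutation_importance(model, X, y, feature_idx, trials=20, seed=0):
    """Average increase in mean squared error when one feature's column
    is randomly shuffled; larger values mean the feature mattered more."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    base = mse(X)
    drops = []
    for _ in range(trials):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)  # break the link between this feature and the target
        Xp = [row[:feature_idx] + (v,) + row[feature_idx + 1:]
              for row, v in zip(X, col)]
        drops.append(mse(Xp) - base)
    return sum(drops) / trials

# Toy model that uses only feature 0, so feature 1 should show zero importance
model = lambda row: 3.0 * row[0]
X = [(float(i), float(i % 2)) for i in range(10)]
y = [3.0 * a for a, _ in X]
imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
```

Ranking features by this score reproduces the ordering step described above without needing access to the model's internals, which is convenient when the underlying data sources must stay confidential.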
Thus, according to some embodiments, a use-case profile may be represented as a vector or word embedding. As shown in FIG. 5, a use-case profile generated by use-case profiling engine 412 may be represented as a use-case profile node 506 in a feature profile relation graph 500 generated by feature profile relation graph generation engine 414. As described above, the feature profile relation graph generation engine 414 is configured to generate a feature profile relation graph 500 as illustrated in FIG. 5. The feature profile relation graph 500 may include a plurality of client data profile nodes 502, hyper-local feature nodes 504 and use-case profile nodes 506. A given hyper-local feature node 504 may be connected to or associated with a client data profile node 502 via an edge 510. A given hyper-local feature node 504 may be associated with a plurality of client data profile nodes 502, each via a respective edge. Similarly, a given hyper-local feature node 504 may be associated with one or more use-case profile nodes 506 via respective edges. Each respective edge may have an associated edge weight (e.g., edge 510 has a weight of "2.0") that represents a strength of the relationship between the respective hyper-local feature node 504 and one of an associated client data profile node 502 or a use-case profile node 506. The edge weights are determined based on the feature importances determined by the model performance profiling engine 410 across various models. Thus, if a feature heavily impacts a model's performance, that feature will be given more weight with respect to the relationship between the corresponding hyper-local feature node 504 and the associated client data profile node 502 and use-case profile node 506. Thus, the more frequently a relationship is observed across different models and the stronger the impact the feature has on the models, the more relative weight the respective edges will be given.
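The edge-weight accumulation described in this paragraph might be sketched as follows; the class name, node labels, and importance values are hypothetical.

```python
from collections import defaultdict

class FeatureProfileRelationGraph:
    """Minimal sketch of the assumed structure: weighted edges from each
    hyper-local feature node to the client-data / use-case profile nodes
    it was observed with, accumulated across trained models."""

    def __init__(self):
        # (feature, profile_node) -> accumulated edge weight
        self.edges = defaultdict(float)

    def record_model(self, profile_nodes, feature_importances):
        # strengthen the edge between every profile node of this model
        # and every feature, in proportion to the feature's importance
        for node in profile_nodes:
            for feature, importance in feature_importances.items():
                self.edges[(feature, node)] += importance

g = FeatureProfileRelationGraph()
g.record_model(["retail/groceries", "vending-placement"],
               {"avg_income_city": 0.8, "foot_traffic_street": 1.2})
g.record_model(["retail/groceries"],
               {"avg_income_city": 1.2})
```

After the two illustrative models above, the edge between "avg_income_city" and the "retail/groceries" profile node carries weight 2.0, mirroring how a relationship observed repeatedly across models is strengthened (compare the weight of edge 510).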
In this way, the feature profile relation graph 500 may establish which features will have the biggest contribution to models having a particular client data profile and use-case profile. In some embodiments, the feature profile relation graph 500 may be continually or repeatedly updated with each new model that is generated using the workbench software platform. Once feature profile relation graph 500 has been generated or updated with data sufficient to show significant relationships between hyper-local feature nodes 504 and client data profile nodes 502 and/or use-case profile nodes 506 (e.g., one or more edges has an edge weight that exceeds a predetermined threshold), the feature profile relation graph 500 may be used to automatically generate feature recommendations for new models. The hyper-local feature recommendation engine 416 may be configured to automatically generate a recommendation of one or more hyper-local data sources and/or hyper-local features to be used in training a new model based on a new client data set and a use-case description provided by a user and the feature profile relation graph. According to some embodiments, the processing system 400 may discover the domain and/or category of the new client data set by performing data profiling (e.g., via data profiling engine 408) on the new client data and may determine the use-case of the new model by applying use-case profiling (e.g., via use-case profiling engine 412) to the use-case description. Based on the discovered or determined data profile and use-case profile, the hyper-local feature recommendation engine 416 may determine the top K hyper-local features that have the strongest ties to both the client data profile and the use-case profile.
According to some embodiments, this may be achieved by identifying the hyper-local feature nodes 504 of the feature profile relation graph that have the strongest ties (i.e., highest edge weights) to client data profile nodes 502 and use-case profile nodes 506 that are most similar to the client data profile and the use-case profile associated with the new model. The similarity between the new client data profile and client data profile nodes 502 and the similarity between the new use-case profile and use-case profile nodes 506 may be determined by, for example, comparing the profile vectors generated during the profiling process to determine the degree of similarity between them. As will be appreciated by those of skill in the art, various similarity measures may be used to compare profile vectors, such as, but not limited to, use of cosine distance or Euclidean distance. As will be appreciated by those of skill in the art, in various embodiments different algorithms may be used to determine which hyper-local feature nodes have the strongest ties to both the data profile and use-case profile of the new model. For example, in some embodiments, the client data profile node 502 that is most similar to the new client data profile and the use-case profile node 506 that is most similar to the new use-case profile may be identified, and each hyper-local feature node 504 that shares an edge with one or both of these two most similar nodes may be identified as being one of the top K hyper-local feature nodes 504. For each of these hyper-local feature nodes 504, the edge weight(s) of the edge(s) connecting to the most similar node(s) may be summed and the features may be ordered and ranked in order of which features have the highest total edge weight.
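A sketch of this most-similar-node strategy, using cosine similarity over profile vectors; all node names, vectors, and edge weights below are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two profile vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k_features(new_data_vec, new_uc_vec, data_nodes, uc_nodes, edges, k=3):
    """Pick the single most similar client-data and use-case profile nodes,
    then rank features by the summed weight of edges to those two nodes."""
    best_data = max(data_nodes, key=lambda n: cosine(new_data_vec, data_nodes[n]))
    best_uc = max(uc_nodes, key=lambda n: cosine(new_uc_vec, uc_nodes[n]))
    scores = {}
    for (feature, node), w in edges.items():
        if node in (best_data, best_uc):
            scores[feature] = scores.get(feature, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Illustrative graph: two client-data profile nodes, one use-case profile node
data_nodes = {"groceries": (1.0, 0.0), "fashion": (0.0, 1.0)}
uc_nodes = {"vending-placement": (1.0, 0.0)}
edges = {
    ("foot_traffic", "groceries"): 2.0,
    ("foot_traffic", "vending-placement"): 1.5,
    ("avg_income_city", "groceries"): 1.0,
    ("brand_affinity", "fashion"): 3.0,
}
ranked = top_k_features((0.9, 0.1), (1.0, 0.0), data_nodes, uc_nodes, edges)
```

Here the new profile is far closer to the "groceries" node than to "fashion", so "brand_affinity" never enters the candidate set, while "foot_traffic" is ranked first because it is tied to both most-similar nodes.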
According to some embodiments, instead of identifying a single most similar node from among the client data profile nodes502and the use-case profile nodes506, the hyper-local feature recommendation engine416may instead identify a degree of similarity between the new client data profile and each client data profile node502and a degree of similarity between the new use-case profile and each use-case profile node506, and perform a weighted summation of the feature edges connecting to those nodes based on the identified similarities. For example, if the new client data profile is identical to a first client data profile node502, the system may apply a weighting of “1,” whereas if the new client data profile is only half similar to a second client data profile node502, the system may apply a weighting of “0.5.” Assuming both the first and second client data profile nodes502are connected to a given hyper-local feature node504by a first and second edge respectively, the system may then multiply the edge weight of the first edge by “1” and the edge weight of the second edge by “0.5” and sum them together to determine a score for the hyper-local feature node. It will be understood that such a procedure may determine a score for each hyper-local feature node504by performing a weighted summation of all edge weights, adjusted based on their respective weightings, for all edges of each hyper-local feature node504. According to some embodiments, once the top K hyper-local features are identified, they may be provided to a user (e.g., via user device420) to allow the user to make a selection of features to be used in generating the new model. According to some embodiments, processing system400may automatically select one or more of the top K hyper-local features for use in the model and automatically train and generate the model without further user input.
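The similarity-weighted variant can be sketched even more compactly. In this hypothetical sketch the per-node similarity weightings (the “1” and “0.5” of the example above) are passed in precomputed; the function, node, and feature names are illustrative assumptions rather than anything fixed by the disclosure.

```python
def score_features_weighted(feature_edges, node_similarity):
    # Weighted summation: every edge weight of a hyper-local feature node is
    # scaled by the similarity between the new profile and the profile node
    # at the other end of that edge, then the scaled weights are summed.
    scores = {}
    for feature, edges in feature_edges.items():
        scores[feature] = sum(weight * node_similarity.get(node, 0.0)
                              for node, weight in edges.items())
    return scores

# The example from the text: one profile node fully similar (weighting 1.0),
# one only half similar (weighting 0.5).
edges = {"foot_traffic": {"profile_a": 0.8, "profile_b": 0.6}}
scores = score_features_weighted(edges, {"profile_a": 1.0, "profile_b": 0.5})
# 0.8 * 1.0 + 0.6 * 0.5 = 1.1
```

Unlike the most-similar-node approach, every profile node contributes here, so evidence from partially similar models is not discarded.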
In this way, the system may provide automatic guidance regarding the best hyper-local features to use in generating a new model, which can save a data scientist significant time and generate more accurate models. Turning now toFIG.6, a flow diagram of a method600for providing automatic determination of recommended hyper-local data sources and features for use in modeling in accordance with an embodiment is shown. In one or more embodiments of the present invention, the method600may be embodied in software that is executed by computer elements located within a network that may reside in the cloud, such as the cloud computing environment50described herein above and illustrated inFIGS.1and2. In other embodiments, the computer elements may reside on a computer system or processing system, such as the processing system300described herein above and illustrated inFIG.3, processing system400described herein above and illustrated inFIG.4, or in some other type of computing or processing environment. The method600begins at block602and includes, responsive to training each model of a plurality of models using a software platform, receiving (e.g., via processing system400) client data, a use-case description, and a selection of hyper-local data sources to be used in the model from a user associated with the model (e.g., via user device420). According to some embodiments, the hyper-local data sources are accessible by the software platform. For example, the software platform may access hyper-local data that pertains to a city block or an area of town that is approximately 5,000 square meters, or any such data set that is available for a locality. In some embodiments, processing system400may obtain hyper-local data from a hyper-local data lake430.
In some embodiments, processing system400may obtain various data (including geolocation data) via an application programming interface (API) that allows processing system400to obtain data from various public and/or third party data sources, such as U.S. census data, weather data, traffic data, foot traffic data, social network profile data, and the like. According to some embodiments, the software platform may be a workbench software platform that is configured to permit a plurality of users to use the hyper-local data sources in modeling and to restrict each user of the plurality of users to using only client data provided by that user in modeling. In some embodiments, the workbench software platform may be configured to prevent users from copying or exporting the hyper-local data to a location that is external to the software platform. However, a workbench software platform implementing the techniques described herein may nonetheless allow individual users to benefit from the collective insights obtained from generating a feature profile relation graph based on the differing models of all users of the workbench software platform. Thus, although a particular user may not have direct access to the client data, use-cases and model data provided by another user, a workbench software platform executed in accordance with embodiments of the disclosure may nonetheless allow all users to benefit from the collective learning provided by the techniques described herein. As shown at block604, the method includes generating (e.g., via data profiling engine408) a client data profile based on the client data. A client data profile may be generated for each model of a plurality of models in response to training each respective model. According to some embodiments, a client data profile may be represented as an n-dimensional vector.
As shown at block606, the method includes determining (e.g., via model performance profiling engine410) a feature importance for each feature of a plurality of features associated with the selected hyper-local data sources. A feature importance of a plurality of features associated with the selected hyper-local data sources may be determined for each model of the plurality of models in response to training each respective model. As shown at block608, the method includes generating (e.g., via use-case profiling engine412) a use-case profile based on the use-case description. A use-case profile may be generated for each model of a plurality of models in response to training each respective model. As shown at block610, the method includes generating (e.g., via feature profile relation graph generation engine414) a feature profile relation graph based on a plurality of determined client data profiles (e.g., client data profiles generated with respect to a plurality of different models at block604), a plurality of determined feature importances associated with features associated with hyper-local data sources (e.g., feature importances determined with respect to a plurality of different models at block606), and a plurality of determined use-case profiles (e.g., use-case profiles generated with respect to a plurality of different models at block608). According to some embodiments, the feature profile relation graph may include a plurality of client data profile nodes (e.g., corresponding to the plurality of determined client data profiles), a plurality of hyper-local feature nodes (e.g., corresponding to the plurality of determined feature importances) and a plurality of use-case profile nodes (e.g., corresponding to the plurality of determined use-case profiles).
According to some embodiments, each node (i.e., client data profile nodes, hyper-local feature nodes and use-case nodes) may represent or be associated with various metadata such as one or more of textual details of the node, a textual representation of the feature and a vector representation of the features. Each hyper-local feature node may represent a particular feature of a hyper-local data source. As shown above inFIG.5and discussed previously above, according to some embodiments, each hyper-local feature node of the plurality of hyper-local feature nodes may be associated with one or more client data profile nodes and one or more use-case profile nodes by a respective edge having an associated edge weight. According to some embodiments, a feature importance may represent a degree to which the respective feature contributes to the performance of the model. In some embodiments, each edge weight may be based on one or more feature importances associated with the respective hyper-local feature node, and the edge weight may represent a strength of the relationship between the respective hyper-local feature node and one of an associated data profile node or a use-case profile node. For example, if 10 different models with similar client data profiles (or profile vectors) are trained using the software platform and 8 of them show a high level of feature importance for a particular feature, then the edge weight between that hyper-local feature node and the data profile node would be higher than it would be if only 3 of the 10 different models showed a high level of feature importance for the particular feature. Thus, when similar relations are observed from many different users (i.e., different models being trained using different data sets and use-cases), the corresponding edge weight may be increased to show the strong relationship between the hyper-local data feature and the data profiles.
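One plausible way to realize the 8-of-10 example above is to set each edge weight to the fraction of similarly-profiled models in which the feature showed high importance. The disclosure does not fix a formula, so the importance threshold and the aggregation rule below are assumptions made for illustration only.

```python
def edge_weight_from_importances(importances, high_threshold=0.5):
    # Edge weight as the fraction of models (sharing a similar client data
    # profile) whose training assigned this feature an importance at or
    # above the threshold. A hypothetical aggregation rule.
    if not importances:
        return 0.0
    high = sum(1 for imp in importances if imp >= high_threshold)
    return high / len(importances)

# 8 of 10 similar models show high importance -> a stronger edge than when
# only 3 of 10 do, matching the example in the text.
strong = edge_weight_from_importances(
    [0.9, 0.8, 0.7, 0.9, 0.6, 0.8, 0.7, 0.9, 0.2, 0.1])
weak = edge_weight_from_importances(
    [0.9, 0.8, 0.7, 0.1, 0.2, 0.3, 0.2, 0.1, 0.3, 0.2])
# strong = 0.8, weak = 0.3
```

Re-running this aggregation after every newly trained model gives the continual edge-weight updates the text describes.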
Accordingly, in some embodiments, a given edge may be assigned a higher relative edge weight based on determining that different models have similar relative feature importances associated with a given hyper-local feature node. As shown at block612, the method includes responsive to receiving a new client data set and a new use-case description, determining (e.g., via hyper-local feature recommendation engine416) one or more hyper-local features as suggested hyper-local features for use in building a new model based on the new client data set, the new use-case description and the feature profile relation graph. According to some embodiments, determining one or more hyper-local features as suggested hyper-local features for use in building a new model may include generating, based on the new client data, a new client data profile, determining a most similar client data profile node of the plurality of client data profile nodes of the feature profile relation graph based on the new client data profile, generating, based on the new use-case description, a new use-case profile, determining a most similar use-case profile node of the plurality of use-case profile nodes of the feature profile relation graph based on the new use-case profile, and determining suggested features based on one or more hyper-local feature nodes of the plurality of hyper-local feature nodes of the feature profile relation graph having the highest edge weights with the most similar client data profile node and the most similar use-case profile node. In other words, the processing system400may determine which hyper-local feature nodes504of a feature profile relation graph500have the edges with the highest weight that are connected to or associated with client data profile node(s)502and use-case profile node(s)506that are similar or most similar to the new client data profile and new use-case profile, respectively. 
In some embodiments, the method may further include outputting the suggested hyper-local features for display to a user. For example, the suggested hyper-local features may be output to user device420for display to a user, and the user may select one or more of the suggested hyper-local features for use in training the new model. In some embodiments, the method may further include automatically initiating training of the new model based on the new client data set and the suggested hyper-local features. In this case, the system may automatically determine which hyper-local features to use and automatically train the model, thereby allowing a user to create a model with little effort by merely inputting the client data and a description of the use-case. According to some embodiments, the feature profile relation graph may be automatically updated in response to training the new model. Thus, in some embodiments, with every new model that is trained using the processing system400, the process of profiling the client data, determining the importance of features on the model, and profiling the use-case may be repeated, and the feature profile relation graph may be updated to include the results by, for example, adding one or more new nodes and/or adjusting the edge weight of one or more previously existing edges. Additional processes may also be included. It should be understood that the process depicted inFIG.6represents an illustration, and that other processes may be added or existing processes may be removed, modified, or rearranged without departing from the scope and spirit of the present disclosure. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instruction by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. 
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments described herein.
Patent 11861460
SUMMARY In accordance with the systems, methods, and machine-readable media described herein, a system (e.g., including a trainer machine or other computer system for training a learning machine) accesses a training database of reference metadata descriptive of reference travel plans that include reference first-type plans and reference second-type plans. The system then trains a learning machine to distinguish candidate first-type plans from candidate second-type plans. The training of the learning machine is based on a set of decision trees generated from randomly selected subsets of the reference metadata, and the randomly selected subsets each describe a corresponding randomly selected portion of the reference plans. The decision trees may include random decision trees, such as those generated by a Random Forests® technique. The system then modifies the trained learning machine, and this modification of the learning machine is based on asymmetrical penalties for incorrectly distinguishing candidate first-type plans from candidate second-type plans. The system then provides the modified learning machine for run-time use (e.g., in classifying plans of the first type from plans of the second type). At this point, the modified, trained learning machine has been trained to distinguish candidate first-type plans from candidate second-type plans based on the asymmetrical penalties for incorrectly distinguishing candidate first-type plans from candidate second-type plans. According to various example embodiments, such reference or candidate plans may include reference or candidate travel plans (e.g., travel itineraries, each including one or more flights reserved or taken, one or more hotel stays reserved or completed, one or more car rentals reserved or completed, etc., or any suitable combinations thereof).
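The summary does not spell out how the post-training modification based on asymmetrical penalties is performed; one common realization is to shift the classifier's decision threshold so that the costlier error type is penalized more heavily. The sketch below assumes the trained ensemble already outputs a score in [0, 1] for each plan, and simply searches for the threshold with minimum asymmetric cost; the function name, the example scores, and the cost values are illustrative assumptions.

```python
def choose_threshold(scored_examples, fp_cost, fn_cost):
    # scored_examples: (score, label) pairs, where label 1 marks a
    # second-type plan (e.g., business travel) and label 0 a first-type plan.
    # Returns the score threshold minimizing fp_cost * FP + fn_cost * FN.
    candidates = sorted({s for s, _ in scored_examples}) + [1.01]
    best_t, best_cost = 0.5, float("inf")
    for t in candidates:
        fp = sum(1 for s, y in scored_examples if s >= t and y == 0)
        fn = sum(1 for s, y in scored_examples if s < t and y == 1)
        cost = fp_cost * fp + fn_cost * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

examples = [(0.9, 1), (0.8, 1), (0.6, 0), (0.4, 1), (0.3, 0), (0.1, 0)]
# Missing a second-type plan is 5x as costly: the threshold drops to 0.4.
lenient = choose_threshold(examples, fp_cost=1, fn_cost=5)
# Flagging a first-type plan is 5x as costly: the threshold rises to 0.8.
strict = choose_threshold(examples, fp_cost=5, fn_cost=1)
```

Swapping the two cost values moves the decision boundary in opposite directions, which is the asymmetric behavior the summary describes.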
DETAILED DESCRIPTION Example methods (e.g., algorithms) facilitate training a learning machine based on plan types, and example systems (e.g., special-purpose machines configured by special-purpose software) are configured to facilitate training a learning machine based on plan types. Examples merely typify possible variations. Unless explicitly stated otherwise, structures (e.g., structural components, such as modules) are optional and may be combined or subdivided, and operations (e.g., in a procedure, algorithm, or other function) may vary in sequence or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of various example embodiments. It will be evident to one skilled in the art, however, that the present subject matter may be practiced without these specific details. For brevity and clarity, several example embodiments of the systems and methods described herein discuss example scenarios in which the plans are travel plans (e.g., travel itineraries). However, other types of plans are contemplated by the present subject matter (e.g., delivery routes, equipment maintenance plans, event schedules, or task performance sequences). FIG.1is a network diagram illustrating a network environment100suitable for training a learning machine120, according to some example embodiments. The network environment100includes a trainer machine110, a database115, the learning machine120, and devices130and150, all communicatively coupled to each other via a network190. The trainer machine110, with or without the database115, may form all or part of a cloud118(e.g., a geographically distributed set of multiple machines configured to function as a single server), which may form all or part of a network-based system105(e.g., a cloud-based server system configured to provide one or more network-based services to the devices130and150). 
The trainer machine110, the database115, the learning machine120, and the devices130and150may each be implemented in a special-purpose (e.g., specialized) computer system, in whole or in part, as described below with respect toFIG.5. Also shown inFIG.1are users132and152. One or both of the users132and152may be a human user (e.g., a human being), a machine user (e.g., a computer configured by a software program to interact with the device130or150), or any suitable combination thereof (e.g., a human assisted by a machine or a machine supervised by a human). The user132is associated with the device130and may be a user of the device130. For example, the device130may be a desktop computer, a vehicle computer, a home media system (e.g., a home theater system or other home entertainment system), a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to the user132. Likewise, the user152is associated with the device150and may be a user of the device150. As an example, the device150may be a desktop computer, a vehicle computer, a home media system (e.g., a home theater system or other home entertainment system), a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to the user152. Any of the systems or machines (e.g., databases and devices) shown inFIG.1may be, include, or otherwise be implemented in a special-purpose (e.g., specialized or otherwise non-conventional and non-generic) computer that has been modified to perform one or more of the functions described herein for that system or machine (e.g., configured or programmed by special-purpose software, such as one or more software modules of a special-purpose application, operating system, firmware, middleware, or other software program). 
For example, a special-purpose computer system able to implement any one or more of the methodologies described herein is discussed below with respect toFIG.5, and such a special-purpose computer may accordingly be a means for performing any one or more of the methodologies discussed herein. Within the technical field of such special-purpose computers, a special-purpose computer that has been specially modified (e.g., configured by special-purpose software) by the structures discussed herein to perform the functions discussed herein is technically improved compared to other special-purpose computers that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein. Accordingly, a special-purpose machine configured according to the systems and methods discussed herein provides an improvement to the technology of similar special-purpose machines. As used herein, a “database” is a data storage resource and may store data structured as a text file, a table, a spreadsheet, a relational database (e.g., an object-relational database), a triple store, a hierarchical data store, or any suitable combination thereof. Moreover, any two or more of the systems or machines illustrated inFIG.1may be combined into a single system or machine, and the functions described herein for any single system or machine may be subdivided among multiple systems or machines. The network190is a network that enables communication between or among systems, machines, databases, and devices (e.g., between the machine110and the device130). Accordingly, the network190may be a wired network, a wireless network (e.g., a mobile or cellular network), or any suitable combination thereof. The network190may include one or more portions that constitute a private network, a public network (e.g., the Internet), or any suitable combination thereof. 
Accordingly, the network190may include one or more portions that incorporate a local area network (LAN), a wide area network (WAN), the Internet, a mobile telephone network (e.g., a cellular network), a wired telephone network (e.g., a plain old telephone service (POTS) network), a wireless data network (e.g., a WiFi network or WiMax network), or any suitable combination thereof. Any one or more portions of the network190may communicate information via a transmission medium. As used herein, “transmission medium” refers to any intangible (e.g., transitory) medium that is capable of communicating (e.g., transmitting) instructions for execution by a machine (e.g., by one or more processors of such a machine), and includes digital or analog communication signals or other intangible media to facilitate communication of such software. FIG.2is a block diagram illustrating components of the trainer machine110, as configured for training the learning machine120, according to some example embodiments. The trainer machine110is shown as including a reference metadata accessor module210, a learning machine trainer module220, a learning machine modifier module230, and a learning machine provider module240, all configured to communicate with each other (e.g., via a bus, shared memory, or a switch). The reference metadata accessor module210may be or include custom software, custom hardware, or both, configured to access reference metadata of reference plans (e.g., stored by the database115and accessed therefrom). The learning machine trainer module220may be or include custom software, custom hardware, or both, configured to train the learning machine120(e.g., initially train the learning machine120based on training data). The learning machine modifier module230may be or include custom software, custom hardware, or both, configured to modify the trained learning machine120(e.g., as trained by the learning machine trainer module220). 
The learning machine provider module240may be or include custom software, custom hardware, or both, configured to provide (e.g., for run-time usage) the modified, trained learning machine120(e.g., as modified by the learning machine modifier module230). As shown inFIG.2, the reference metadata accessor module210, the learning machine trainer module220, the learning machine modifier module230, the learning machine provider module240, or any suitable combination thereof, may form all or part of an app200(e.g., a mobile app) that is stored (e.g., installed) on the trainer machine110(e.g., responsive to or otherwise as a result of data being received via the network190). Furthermore, one or more processors299(e.g., hardware processors, digital processors, or any suitable combination thereof) may be included (e.g., temporarily or permanently) in the app200, the reference metadata accessor module210, the learning machine trainer module220, the learning machine modifier module230, the learning machine provider module240, or any suitable combination thereof. Any one or more of the components (e.g., modules) described herein may be implemented using hardware alone (e.g., one or more of the processors299) or a combination of hardware and software. For example, any component described herein may physically include an arrangement of one or more of the processors299(e.g., a subset of or among the processors299) configured to perform the operations described herein for that component. As another example, any component described herein may include software, hardware, or both, that configure an arrangement of one or more of the processors299to perform the operations described herein for that component. Accordingly, different components described herein may include and configure different arrangements of the processors299at different points in time or a single arrangement of the processors299at different points in time. 
Each component (e.g., module) described herein is an example of a means for performing the operations described herein for that component. Moreover, any two or more components described herein may be combined into a single component, and the functions described herein for a single component may be subdivided among multiple components. Furthermore, according to various example embodiments, components described herein as being implemented within a single system or machine (e.g., a single device) may be distributed across multiple systems or machines (e.g., multiple devices). FIGS.3and4are flowcharts illustrating operations of the trainer machine110in performing a method300of training, modifying, and providing the learning machine120for run-time use, according to some example embodiments. Operations in the method300may be performed by the trainer machine110, using components (e.g., modules) described above with respect toFIG.2, using one or more processors (e.g., microprocessors or other hardware processors), or using any suitable combination thereof. As shown inFIG.3, the method300includes operations310,320,330, and340. In operation310, the reference metadata accessor module210accesses a training database (e.g., stored in the database115) of reference metadata. The accessed reference metadata corresponds to reference plans (e.g., a set of reference travel plans or other reference plans), describes aspects of the reference plans, and is associated (e.g., by the training database) with these reference plans. Each one of the reference plans has its respectively corresponding associated reference metadata that describes that reference plan. For example, the reference plans may be reference travel plans that include reference first-type travel plans and reference second-type travel plans. 
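For concreteness, a single training record in such a database might pair reference metadata with its known plan-type label. The schema below is purely illustrative; the disclosure does not prescribe field names or a storage format, and every name here is an assumption.

```python
from dataclasses import dataclass

# Hypothetical schema for one record of reference metadata in the
# training database (operation310); field names are illustrative and
# not prescribed by the disclosure.
@dataclass
class ReferencePlan:
    round_trip: bool        # one-way vs. round-trip
    international: bool     # includes international travel
    traveler_count: int     # travelers on the same itinerary
    destination_count: int  # total destination cities or airports
    label: str              # known type: "first" (e.g., personal)
                            # or "second" (e.g., business)

record = ReferencePlan(round_trip=True, international=False,
                       traveler_count=1, destination_count=2,
                       label="second")
```

Each such labeled record supplies one example from which the learning machine120may be trained.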
In various example embodiments, the first-type travel plans are travel plans classified as personal travel plans (e.g., for recreation or for family visits), non-tracked travel plans (e.g., for tax calculations or for other accounting calculations), non-reimbursable travel plans, non-business travel plans, or any suitable combination thereof; and the second-type travel plans are travel plans classified as non-personal travel plans (e.g., for work or for non-recreational travel), tracked travel plans (e.g., for tax calculations or for other accounting calculations), reimbursable travel plans, business travel plans, or any suitable combination thereof. In operation320, the learning machine trainer module220trains the learning machine120(e.g., from an untrained state to a trained state, or from a pre-training state to a post-training state) to function as an artificially intelligent classifier configured to distinguish between first and second types of plans. Accordingly, by virtue of the learning machine trainer module220performing operation320, the learning machine120becomes trained to distinguish between first and second types of candidate plans (e.g., candidate travel itineraries to be classified as first-type travel itineraries or second-type travel itineraries). That is, the trained learning machine120is configured by this training process to determine whether a given candidate plan is classified (e.g., categorized or labelled) as a first-type candidate plan or as a second-type candidate plan. According to various example embodiments, the training of the learning machine120in operation320may be based on one or more factors. 
Examples of such factors include: whether a plan is a one-way plan or a round-trip plan (e.g., whether a flight in the plan was round-trip); whether a plan included international travel (e.g., whether the plan included an international flight); the number of travelers corresponding to a plan (e.g., travelling together in the same travel itinerary); the day of the week on which a plan begins (e.g., Sunday, Monday, etc.); the day of the week on which a plan ends; the local time at which a plan begins; the local time at which a plan ends; the number of destinations in a plan (e.g., the total count of destination cities or airports); the number of stops (e.g., layovers or stopovers) per destination in a plan (e.g., the ratio of stops to destination cities or airports); the sizes of the destination cities or airports in a plan (e.g., above or below a predetermined threshold size); whether the date or date range of a plan includes a holiday (e.g., includes a governmental holiday, such as a federal holiday); whether a plan includes a car rental reservation; whether a plan includes a hotel reservation; the source entity (e.g., a travel website among multiple travel websites) through which a plan was reserved; and any suitable combination thereof. Accordingly, a one-way plan may be more likely to be a first-type (e.g., personal or recreational) plan, while a round-trip plan may be less likely to be a first-type plan, more likely to be a second-type (e.g., non-personal or non-recreational) plan, or both. A plan that does not include international travel may be more likely to be a first-type plan, while a plan that includes international travel may be less likely to be a first-type plan, more likely to be a second-type plan, or both. 
A plan with a traveler count above a threshold value may be more likely to be a first-type plan, while a plan with a traveler count at or below the threshold value may be less likely to be a first-type plan, more likely to be a second-type plan, or both. A plan that begins or ends on a certain day of the week may be more likely to be a first-type plan, while plans that do not may be less likely to be a first-type plan, more likely to be a second-type plan, or both. A plan that begins or ends during certain hours of the day in local time may be more likely to be a first-type plan, while plans that do not may be less likely to be a first-type plan, more likely to be a second-type plan, or both. A plan with a low number of destinations at or below a threshold value may be more likely to be a first-type plan, while plans with high numbers of destinations above the threshold value may be less likely to be a first-type plan, more likely to be a second-type plan, or both. A plan with a low number of stops (e.g., at or below a threshold value of zero or one) may be more likely to be a first-type plan, while plans with high numbers of stops (e.g., above the threshold value) may be less likely to be a first-type plan, more likely to be a second-type plan, or both. A plan with a low ratio of stops to destinations (e.g., at or below a threshold value) may be more likely to be a first-type plan, while plans with high ratios of stops to destinations (e.g., above the threshold value) may be less likely to be a first-type plan, more likely to be a second-type plan, or both. Furthermore, a plan with destination cities or airports at or below a threshold size may be more likely to be a first-type plan, while plans with destination cities or airports above the threshold size may be less likely to be a first-type plan, more likely to be a second-type plan, or both. 
A plan whose date or date range includes a holiday (e.g., a governmental holiday) may be more likely to be a first-type plan, while plans whose dates or date ranges do not may be less likely to be a first-type plan, more likely to be a second-type plan, or both. A plan that includes a car rental reservation may be more likely to be a first-type plan, while plans that do not may be less likely to be a first-type plan, more likely to be a second-type plan, or both. A plan that does not include a hotel reservation may be more likely to be a first-type plan, while plans that do may be less likely to be a first-type plan, more likely to be a second-type plan, or both. A plan reserved through a first source entity (e.g., a first travel website) may be more likely to be a first-type plan, while plans reserved through a second source entity (e.g., a second travel website) may be less likely to be a first-type plan, more likely to be a second-type plan, or both. In operation330, the learning machine modifier module230modifies the learning machine120(e.g., as previously trained in operation320). The modification of the learning machine120in operation330is based on a pair of asymmetrical penalties (e.g., asymmetrical adverse or negative weights) for incorrectly (e.g., erroneously, inaccurately, or wrongly) distinguishing between first and second types of plans. In some example embodiments, the penalty (e.g., a first penalty) for incorrectly classifying a plan as a first-type plan is greater than the penalty (e.g., a second penalty) for incorrectly classifying a plan as a second-type plan. In other example embodiments, the penalty for incorrectly classifying a plan as a first-type plan is less than the penalty for incorrectly classifying a plan as a second-type plan.
The asymmetrical penalties may be applicable to reference plans (e.g., whose types are known during training of the learning machine120), candidate plans (e.g., whose types are to be determined at run-time), or both, according to various example embodiments. In operation340, the learning machine provider module240provides the output of operation330, namely, the trained and modified learning machine120for run-time use. This may be performed by enabling one or more of the devices130and150to access the learning machine120(e.g., via a user interface, such as a graphical user interface, or via a programmatic interface, such as an application programming interface); marking the learning machine120as being ready, released, or otherwise available for run-time use (e.g., as part of the network-based system105) in distinguishing between first and second types of plans; uploading or otherwise implementing a copy of the learning machine120into the cloud118or other portion of the network-based system105; providing a copy of the learning machine120to one or more of the devices130and150; or any suitable combination thereof. As shown inFIG.4, in addition to any one or more of the operations previously described, the method300may include one or more of operations420,421,422,423,424,425,426,427,430, and431. One or more of operations420-427may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation320, in which the learning machine trainer module220trains the learning machine120to distinguish between first and second types of plans (e.g., reference plans, candidate plans, or both). In operation420, the training of the learning machine120is based on decision trees constructed or otherwise generated by a Random Forests® technique. For example, such decision trees may be generated from randomly selected subsets of the reference metadata accessed in operation310. 
Such generated decision trees may be stored (e.g., temporarily or permanently) in the database115, in the trainer machine110, or in both. The randomly selected subsets of the reference metadata may each describe a corresponding randomly selected portion of the reference plans (e.g., a randomly chosen subdivision of the reference travel itineraries to which the reference metadata corresponds). Accordingly, performance of operation420may include randomly selecting portions of the reference plans that correspond to the reference metadata, randomly selecting subsets of the reference metadata, generating decision trees from the randomly selected subsets of the reference metadata or the corresponding reference metadata for the randomly selected portions of the reference plans, training the learning machine120based on the generated decision trees, or any suitable combination thereof. Operation421may be suitable where the reference plans are reference travel plans, and the reference metadata of the reference travel plans indicates source entities that each reserved a corresponding reference travel plan among the reference travel plans for a corresponding user. In operation421, the training of the learning machine120to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated source entities. Thus, where the source entities are or include sources of travel bookings (e.g., websites that offer travel bookings), such sources of travel bookings may influence the training of the learning machine120(e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). Operation422may be suitable where the reference plans are reference travel plans, and the reference metadata of the reference travel plans indicates sizes of destination airports. Each of the indicated sizes may respectively correspond to a different reference travel plan among the reference travel plans.
In operation422, the training of the learning machine120to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated sizes of the destination airports. Thus, the sizes of destination airports may influence the training of the learning machine120(e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). Operation423may be suitable where the reference plans are reference travel plans, and the reference metadata of the reference travel plans indicates ratios of layovers to destination cities. Each of the indicated ratios may respectively correspond to a different reference travel plan among the reference travel plans. In operation423, the training of the learning machine120to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated ratios of layovers to destination cities. Thus, the number of layovers per destination may influence the training of the learning machine120(e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). Operation424may be suitable where the reference plans are reference travel plans, and the reference metadata of the reference travel plans indicates total counts of destination cities. Each of the indicated total counts may respectively correspond to a different reference travel plan among the reference travel plans. In operation424, the training of the learning machine120to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated total counts of destination cities. Thus, the total counts of destination cities may influence the training of the learning machine120(e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). 
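Operations420through424can be sketched as growing each tree on a randomly selected subset (a bootstrap sample) of the reference metadata and then voting across the trees. The sketch below deliberately simplifies each tree to a depth-1 decision stump as a stand-in for a full Random Forests® implementation (which would also randomize the features considered at each split); the toy data and all names are illustrative assumptions.

```python
import random
from collections import Counter

def majority(labels):
    return Counter(labels).most_common(1)[0][0]

def fit_stump(X, y):
    """Fit a depth-1 decision tree: the (feature, threshold) split
    that minimizes misclassifications on this sample."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lmaj, rmaj = majority(left), majority(right)
            errors = (sum(lab != lmaj for lab in left)
                      + sum(lab != rmaj for lab in right))
            if best is None or errors < best[0]:
                best = (errors, f, t, lmaj, rmaj)
    if best is None:                 # degenerate sample: no valid split
        default = majority(y)
        return lambda row: default
    _, f, t, lmaj, rmaj = best
    return lambda row: lmaj if row[f] <= t else rmaj

def train_forest(X, y, n_trees=40, seed=0):
    """Grow each tree on a bootstrap sample, i.e., a randomly
    selected subset of the reference metadata (operation420)."""
    rng, n, forest = random.Random(seed), len(X), []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        forest.append(fit_stump([X[i] for i in idx],
                                [y[i] for i in idx]))
    return forest

def classify(forest, row):
    return majority([tree(row) for tree in forest])

# Toy reference metadata: [round_trip, international, traveler_count,
# destination_count]; labels 0 = first-type (e.g., personal),
# 1 = second-type (e.g., business).
X = [[0, 0, 3, 1], [1, 1, 1, 2], [0, 0, 4, 1], [1, 0, 1, 3]]
y = [0, 1, 0, 1]
forest = train_forest(X, y)
```

The metadata-derived features that operations421-424single out (source entities, airport sizes, layover ratios, destination counts) would simply appear as additional columns of such feature rows, where they can influence which splits the trees select.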
Operation425may be suitable where the reference plans are reference travel plans, and the reference metadata of the reference travel plans includes indications of whether conventions occurred in destination cities that each respectively corresponds to a different reference travel plan among the reference travel plans. In operation425, the training of the learning machine120to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indications of whether conventions occurred in the destination cities. Thus, the indications of whether conventions co-occurred in destination cities may influence the training of the learning machine120(e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). Operation426may be suitable where the reference plans are reference travel plans, and the reference metadata of the reference travel plans indicates dates of travel. Each reference travel plan among the reference travel plans respectively corresponds to a different set of one or more dates of travel (e.g., a single date, or a pair or range of dates). In operation426, the learning machine trainer module220accesses a curated database of annual first-type events whose dates of occurrence vary by year. The curated database may be stored in the database115and accessed therefrom. Furthermore, in operation426, the training of the learning machine120to distinguish candidate first-type travel plans from candidate second-type travel plans is based on a comparison of the dates of occurrence to the dates of travel. Thus, such annual first-type events may influence the training of the learning machine120(e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). Operation427may be suitable where the reference plans are reference travel plans, and the reference metadata of the reference travel plans indicates dates of travel. 
Each reference travel plan among the reference travel plans respectively corresponds to a different set of one or more dates of travel (e.g., a single date, or a pair or range of dates). In operation427, the learning machine trainer module220accesses a curated database of annual second-type events whose dates of occurrence vary by year. The curated database may be stored in the database115and accessed therefrom. Furthermore, in operation427, the training of the learning machine120to distinguish candidate first-type travel plans from candidate second-type travel plans is based on a comparison of the dates of occurrence to the dates of travel. Thus, such annual second-type events may influence the training of the learning machine120(e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). As shown inFIG.4, one or both of operations430and431may be performed as part of operation330, in which the learning machine modifier module230modifies the trained learning machine120based on asymmetrical penalties discussed above. In operation430, the learning machine modifier module230applies a first penalty (e.g., in a pair of unequal and asymmetrical penalties) for incorrectly classifying a plan as a first-type plan. This first penalty is applied to the trained learning machine120(e.g., in the form of weighting one or more decision trees with a first mathematical weighting factor that penalizes erroneous classifications into the first type of plan). In operation431, the learning machine modifier module230applies a second penalty (e.g., in the pair of unequal and asymmetrical penalties) for incorrectly classifying a plan as a second-type plan. This second penalty is applied to the trained learning machine120(e.g., in the form of weighting one or more decision trees with a second mathematical weighting factor that penalizes erroneous classifications into the second type of plan). 
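Operations430and431can be sketched as a cost-sensitive final vote over the ensemble, with each side's support discounted by its misclassification penalty. The disclosure leaves the exact weighting mechanism open, so the formula and penalty values below are illustrative assumptions.

```python
def classify_with_penalties(first_votes, second_votes,
                            first_penalty=3.0, second_penalty=1.0):
    """Cost-sensitive vote over an ensemble's per-tree classifications.
    first_penalty is the cost of incorrectly classifying a plan as
    first-type (operation430); second_penalty is the cost of
    incorrectly classifying it as second-type (operation431).
    """
    # Discount each side's support by its own misclassification risk:
    # a larger first_penalty means the first type wins only with much
    # stronger support, so the classifier is conservative about the
    # first type and correspondingly aggressive about the second.
    return ("first"
            if first_votes / first_penalty > second_votes / second_penalty
            else "second")

# With first_penalty > second_penalty, a mere majority of first-type
# votes is not enough to classify the plan as first-type:
result_weak = classify_with_penalties(60, 40)    # 60/3 = 20 < 40
result_strong = classify_with_penalties(95, 5)   # 95/3 ≈ 31.7 > 5
```

Swapping the two penalty values reverses the bias, matching the alternative embodiments in which the first penalty is the smaller of the pair.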
With respect to operations430and431, the first penalty is more significant (e.g., larger in absolute value) than the second penalty in some example embodiments. This may have the effect of biasing the modified learning machine120to be more aggressive in classifying plans into the second type, and more conservative in classifying plans into the first type. Conversely, in alternative example embodiments, the first penalty is less significant than the second penalty. This may have the effect of biasing the modified learning machine120toward being more conservative in classifying plans into the second type, and more aggressive in classifying plans into the first type. According to various example embodiments, one or more of the methodologies described herein may facilitate training a learning machine. Moreover, one or more of the methodologies described herein may facilitate training a learning machine to distinguish first-type plans from second-type plans. Hence, one or more of the methodologies described herein may facilitate automatically distinguishing candidate first-type plans from candidate second-type plans based on asymmetrical penalties for incorrectly distinguishing candidate first-type plans from candidate second-type plans, as well as consistently making such distinction across numerous instances of candidate plans, compared to capabilities of pre-existing systems and methods. When these effects are considered in aggregate, one or more of the methodologies described herein may obviate a need for certain efforts or resources that otherwise would be involved in classifying first-type plans and second-type plans. Efforts expended by a user in determining correct classifications of plans may be reduced by use of (e.g., reliance upon) a special-purpose machine that implements one or more of the methodologies described herein. 
Computing resources used by one or more systems or machines (e.g., within the network environment100) may similarly be reduced (e.g., compared to systems or machines that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein). Examples of such computing resources include processor cycles, network traffic, computational capacity, main memory usage, graphics rendering capacity, graphics memory usage, data storage capacity, power consumption, and cooling capacity. FIG.5is a block diagram illustrating components of a machine500, according to some example embodiments, able to read instructions524from a machine-readable medium522(e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and perform any one or more of the methodologies discussed herein, in whole or in part. Specifically,FIG.5shows the machine500in the example form of a computer system (e.g., a computer) within which the instructions524(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine500to perform any one or more of the methodologies discussed herein may be executed, in whole or in part. In alternative embodiments, the machine500operates as a standalone device or may be communicatively coupled (e.g., networked) to other machines. In a networked deployment, the machine500may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment. 
The machine500may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smart phone, a set-top box (STB), a personal digital assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions524, sequentially or otherwise, that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute the instructions524to perform all or part of any one or more of the methodologies discussed herein. The machine500includes a processor502(e.g., one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any suitable combination thereof), a main memory504, and a static memory506, which are configured to communicate with each other via a bus508. The processor502contains solid-state digital microcircuits (e.g., electronic, optical, or both) that are configurable, temporarily or permanently, by some or all of the instructions524such that the processor502is configurable to perform any one or more of the methodologies described herein, in whole or in part. For example, a set of one or more microcircuits of the processor502may be configurable to execute one or more modules (e.g., software modules) described herein. In some example embodiments, the processor502is a multicore CPU (e.g., a dual-core CPU, a quad-core CPU, an 8-core CPU, or a 128-core CPU) within which each of multiple cores behaves as a separate processor that is able to perform any one or more of the methodologies discussed herein, in whole or in part. 
Although the beneficial effects described herein may be provided by the machine500with at least the processor502, these same beneficial effects may be provided by a different kind of machine that contains no processors (e.g., a purely mechanical system, a purely hydraulic system, or a hybrid mechanical-hydraulic system), if such a processor-less machine is configured to perform one or more of the methodologies described herein. The machine500may further include a graphics display510(e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, a cathode ray tube (CRT), or any other display capable of displaying graphics or video). The machine500may also include an alphanumeric input device512(e.g., a keyboard or keypad), a pointer input device514(e.g., a mouse, a touchpad, a touchscreen, a trackball, a joystick, a stylus, a motion sensor, an eye tracking device, a data glove, or other pointing instrument), a data storage516, an audio generation device518(e.g., a sound card, an amplifier, a speaker, a headphone jack, or any suitable combination thereof), and a network interface device520. The data storage516(e.g., a data storage device) includes the machine-readable medium522(e.g., a tangible and non-transitory machine-readable storage medium) on which are stored the instructions524embodying any one or more of the methodologies or functions described herein. The instructions524may also reside, completely or at least partially, within the main memory504, within the static memory506, within the processor502(e.g., within the processor's cache memory), or any suitable combination thereof, before or during execution thereof by the machine500. Accordingly, the main memory504, the static memory506, and the processor502may be considered machine-readable media (e.g., tangible and non-transitory machine-readable media). 
The instructions524may be transmitted or received over the network190via the network interface device520. For example, the network interface device520may communicate the instructions524using any one or more transfer protocols (e.g., hypertext transfer protocol (HTTP)). In some example embodiments, the machine500may be a portable computing device (e.g., a smart phone, a tablet computer, or a wearable device) and may have one or more additional input components530(e.g., sensors or gauges). Examples of such input components530include an image input component (e.g., one or more cameras), an audio input component (e.g., one or more microphones), a direction input component (e.g., a compass), a location input component (e.g., a global positioning system (GPS) receiver), an orientation component (e.g., a gyroscope), a motion detection component (e.g., one or more accelerometers), an altitude detection component (e.g., an altimeter), a temperature input component (e.g., a thermometer), and a gas detection component (e.g., a gas sensor). Input data gathered by any one or more of these input components530may be accessible and available for use by any of the modules described herein (e.g., with suitable privacy notifications and protections, such as opt-in consent or opt-out consent, implemented in accordance with user preference, applicable regulations, or any suitable combination thereof). As used herein, the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium522is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. 
The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of carrying (e.g., storing or communicating) the instructions524for execution by the machine500, such that the instructions524, when executed by one or more processors of the machine500(e.g., processor502), cause the machine500to perform any one or more of the methodologies described herein, in whole or in part. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more tangible and non-transitory data repositories (e.g., data volumes) in the example form of a solid-state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof. A “non-transitory” machine-readable medium, as used herein, specifically excludes propagating signals per se. According to various example embodiments, the instructions524for execution by the machine500can be communicated via a carrier medium (e.g., a machine-readable carrier medium). Examples of such a carrier medium include a non-transient carrier medium (e.g., a non-transitory machine-readable storage medium, such as a solid-state memory that is physically movable from one place to another place) and a transient carrier medium (e.g., a carrier wave or other propagating signal that communicates the instructions524). Certain example embodiments are described herein as including modules. Modules may constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof. 
A “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems or one or more hardware modules thereof may be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module. In some example embodiments, a hardware module may be implemented mechanically, electronically, hydraulically, or any suitable combination thereof. For example, a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware module may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. As an example, a hardware module may include software encompassed within a CPU or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, hydraulically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Furthermore, as used herein, the phrase “hardware-implemented module” refers to a hardware module. 
Considering example embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a CPU configured by software to become a special-purpose processor, the CPU may be configured as respectively different special-purpose processors (e.g., each included in a different hardware module) at different times. Software (e.g., a software module) may accordingly configure one or more processors, for example, to become or otherwise constitute a particular hardware module at one instance of time and to become or otherwise constitute a different hardware module at a different instance of time. Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory (e.g., a memory device) to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information from a computing resource). 
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module in which the hardware includes one or more processors. Accordingly, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, since a processor is an example of hardware, and at least some operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof. Moreover, such one or more processors may perform operations in a “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). For example, at least some operations within any one or more of the methods discussed herein may be performed by a group of computers (e.g., as examples of machines that include processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)). The performance of certain operations may be distributed among the one or more processors, whether residing only within a single machine or deployed across a number of machines. In some example embodiments, the one or more processors or hardware modules (e.g., processor-implemented modules) may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). 
In other example embodiments, the one or more processors or hardware modules may be distributed across a number of geographic locations. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and their functionality presented as separate components and functions in example configurations may be implemented as a combined structure or component with combined functions. Similarly, structures and functionality presented as a single component may be implemented as separate components and functions. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Some portions of the subject matter discussed herein may be presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a memory (e.g., a computer memory or other machine memory). Such algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. 
It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities. Unless specifically stated otherwise, discussions herein using words such as “accessing,” “processing,” “detecting,” “computing,” “calculating,” “determining,” “generating,” “presenting,” “displaying,” or the like refer to actions or processes performable by a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or any suitable combination thereof), registers, or other machine components that receive, store, transmit, or display information. Furthermore, unless specifically stated otherwise, the terms “a” or “an” are herein used, as is common in patent documents, to include one or more than one instance. Finally, as used herein, the conjunction “or” refers to a non-exclusive “or,” unless specifically stated otherwise. The following enumerated descriptions describe various examples of methods, machine-readable media, and systems (e.g., machines, devices, or other apparatus) discussed herein. 
A first example provides a method comprising:
accessing, by one or more processors, a training database of reference metadata descriptive of reference travel plans that include reference first-type travel plans (e.g., personal, non-tracked, non-reimbursable, or non-business travel plans) and reference second-type travel plans (e.g., non-personal, tracked, reimbursable, or business travel plans);
training, by the one or more processors, a learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans, the learning machine being trained based on decision trees generated from randomly selected subsets of the reference metadata that is descriptive of the reference travel plans, the randomly selected subsets each describing a corresponding randomly selected portion of the reference travel plans that include the reference first-type travel plans and the reference second-type travel plans;
modifying, by the one or more processors, the trained learning machine based on asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans; and
providing, by the one or more processors, the modified learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans based on the asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans.
According to such a method, the trainer machine 110 may train, modify, and provide the learning machine 120 (e.g., for run-time access by the devices 130 and 150).
A second example provides a method according to the first example, wherein:
the asymmetrical penalties include unequal first and second penalties, the first penalty to be applied for incorrectly classifying a candidate first-type travel plan being greater than the second penalty to be applied for incorrectly classifying a candidate second-type travel plan.
Thus, misclassifying a travel plan as a first-type travel plan would incur a larger penalty than misclassifying it as a second-type travel plan.
A third example provides a method according to the first example, wherein:
the asymmetrical penalties include unequal first and second penalties, the first penalty to be applied for incorrectly classifying a candidate first-type travel plan being less than the second penalty to be applied for incorrectly classifying a candidate second-type travel plan.
Thus, misclassifying a travel plan as a second-type travel plan would incur a larger penalty than misclassifying it as a first-type travel plan.
A fourth example provides a method according to any one of the first to third examples, wherein:
the reference metadata of the reference travel plans indicates source entities that each reserved a corresponding reference travel plan among the reference travel plans for a corresponding user; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated source entities that each reserved a corresponding reference travel plan for a corresponding user.
Thus, where the source entities are or include sources of travel bookings (e.g., web sites that offer travel bookings), such sources of travel bookings may influence the training of the learning machine (e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). For example, a first source (e.g., a first travel website) may be more influential than a second source (e.g., a second travel website).
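The second and third examples leave open how the unequal penalties are applied. One common way to realize such asymmetrical penalties (an illustrative assumption here, not necessarily the patent's implementation) is minimum-expected-cost classification: the ensemble's estimated probability for each type is weighted by the penalty for emitting that label incorrectly, and the cheaper label wins. The function name and penalty values below are hypothetical.

```python
def classify_with_penalties(p_first, penalty_first, penalty_second):
    """Return "first-type" or "second-type" by minimizing expected penalty.

    p_first        -- ensemble's estimated probability the plan is first-type
    penalty_first  -- cost of wrongly labeling a plan "first-type"
    penalty_second -- cost of wrongly labeling a plan "second-type"
    """
    # Expected cost of emitting each label:
    #   "first-type"  is wrong with probability (1 - p_first)
    #   "second-type" is wrong with probability p_first
    cost_first = (1.0 - p_first) * penalty_first
    cost_second = p_first * penalty_second
    return "first-type" if cost_first <= cost_second else "second-type"

# With symmetric penalties, a 60%-confident vote yields "first-type".
print(classify_with_penalties(0.60, 1.0, 1.0))   # first-type
# A larger penalty for incorrect "first-type" labels (second example)
# pushes the same 60%-confident vote to the "second-type" label.
print(classify_with_penalties(0.60, 3.0, 1.0))   # second-type
```

Raising `penalty_first`, as in the second example, shifts borderline plans toward the second-type label, because emitting "first-type" incorrectly is now costlier; the third example corresponds to the opposite ordering of the two penalties.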
A fifth example provides a method according to any one of the first through fourth examples, wherein:
the reference metadata of the reference travel plans indicates sizes of destination airports, the indicated sizes respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated sizes of the destination airports.
Thus, the sizes of destination airports may influence the training of the learning machine (e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). For example, large airport sizes (e.g., above a threshold percentile in size, runways, runway length, flights per day, or gates) may be correlated with first-type travel plans, while small airport sizes (e.g., below the threshold percentile in size, runways, runway length, flights per day, or gates) may be correlated with second-type travel plans, or vice versa.
A sixth example provides a method according to any of the first through fifth examples, wherein:
the reference metadata of the reference travel plans indicates ratios of layovers to destination cities, the indicated ratios respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated ratios of layovers to destination cities.
Thus, the number of layovers per destination may influence the training of the learning machine (e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees).
For example, a small number of layovers per destination (e.g., 0 or 1) may be correlated with second-type travel plans, while a large number of layovers per destination (e.g., 2+) may be correlated with first-type travel plans, or vice versa.
A seventh example provides a method according to any of the first through sixth examples, wherein:
the reference metadata of the reference travel plans indicates total counts of destination cities, the indicated total counts respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated total counts of destination cities.
Thus, the total counts of destination cities may influence the training of the learning machine (e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). For example, large total counts of destination cities may be correlated with first-type travel plans, while small total counts of destination cities may be correlated with second-type travel plans, or vice versa.
An eighth example provides a method according to any of the first through seventh examples, wherein:
the reference metadata of the reference travel plans includes indications of whether conventions occurred in destination cities respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indications of whether conventions occurred in the destination cities.
Thus, the indications of whether conventions co-occurred in destination cities may influence the training of the learning machine (e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees).
For example, occurrence of conventions may be correlated with second-type travel plans, while non-occurrence of conventions may be correlated with first-type travel plans. As another example, occurrence of first-type conventions may be correlated with first-type travel plans, while occurrence of second-type conventions may be correlated with second-type travel plans.
A ninth example provides a method according to any of the first through eighth examples, further comprising:
accessing a curated database of annual first-type events whose dates of occurrence vary by year; and wherein:
the reference metadata of the reference travel plans indicates dates of travel respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on a comparison of the dates of occurrence to the dates of travel.
Thus, such annual first-type events may influence the training of the learning machine (e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). For example, travel plans during the Thanksgiving holidays or during Lunar New Year periods may be more likely to be first-type travel plans, while travel plans outside any of the annual first-type events tracked in the curated database may be more likely to be second-type travel plans.
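The fourth through eighth examples each name a metadata signal: booking source entity, destination-airport size, layovers-to-destinations ratio, destination count, and convention co-occurrence. As a hedged sketch with hypothetical field names, such metadata might be flattened into numeric features for the decision trees to split on:

```python
def featurize(plan):
    """Map one travel plan's metadata (hypothetical field names) to numeric
    features that a decision tree can split on."""
    destinations = max(len(plan["destination_cities"]), 1)  # avoid divide-by-zero
    return {
        # Fourth example: which booking source entity reserved the plan.
        "source_id": plan["source_entity_id"],
        # Fifth example: size of the destination airport (e.g., gate count).
        "airport_gates": plan["destination_airport_gates"],
        # Sixth example: ratio of layovers to destination cities.
        "layover_ratio": plan["layover_count"] / destinations,
        # Seventh example: total count of destination cities.
        "destination_count": len(plan["destination_cities"]),
        # Eighth example: whether a convention occurred in a destination city.
        "convention_flag": 1 if plan["convention_in_destination"] else 0,
    }

plan = {
    "source_entity_id": 7,
    "destination_airport_gates": 120,
    "layover_count": 2,
    "destination_cities": ["first city", "second city"],
    "convention_in_destination": True,
}
print(featurize(plan))
```

Each resulting feature then corresponds to one kind of split that the randomly generated decision trees of the first example could make.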
A tenth example provides a method according to any of the first through ninth examples, further comprising:
accessing a curated database of annual second-type events whose dates of occurrence vary by year; and wherein:
the reference metadata of the reference travel plans indicates dates of travel respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on a comparison of the dates of occurrence to the dates of travel.
Thus, such annual second-type events may influence the training of the learning machine (e.g., determining or otherwise fully or partially affecting weightings applied to the decision trees). For example, travel plans during a yearly trade show (e.g., Consumer Electronics Show (CES)) may be more likely to be second-type travel plans, while travel plans outside any of the annual second-type events tracked in the curated database may be more likely to be first-type travel plans.
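The ninth and tenth examples compare travel dates against a curated database of annual events whose dates vary by year. A minimal sketch of that comparison, assuming a hypothetical database keyed by year and illustrative event dates, is an interval-overlap check:

```python
import datetime

# Hypothetical curated database of annual events whose dates vary by year
# (illustrative entries only; a real database would be curated per year).
CURATED_EVENTS = {
    2023: [
        ("Thanksgiving", datetime.date(2023, 11, 23), datetime.date(2023, 11, 26)),
        ("CES", datetime.date(2023, 1, 5), datetime.date(2023, 1, 8)),
    ],
}

def overlapping_events(travel_start, travel_end):
    """Names of curated events whose dates of occurrence overlap the travel dates."""
    hits = []
    for name, start, end in CURATED_EVENTS.get(travel_start.year, []):
        if travel_start <= end and start <= travel_end:  # interval overlap test
            hits.append(name)
    return hits

print(overlapping_events(datetime.date(2023, 11, 22), datetime.date(2023, 11, 27)))
```

Whether a trip overlaps a first-type event (e.g., a holiday) or a second-type event (e.g., a trade show) can then serve as one more input to the decision trees.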
An eleventh example provides a system (e.g., a computer system for training a learning machine) comprising:
one or more processors; and
a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising:
accessing a training database of reference metadata descriptive of reference travel plans that include reference first-type travel plans and reference second-type travel plans;
training a learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans, the learning machine being trained based on decision trees generated from randomly selected subsets of the reference metadata that is descriptive of the reference travel plans, the randomly selected subsets each describing a corresponding randomly selected portion of the reference travel plans that include the reference first-type travel plans and the reference second-type travel plans;
modifying the trained learning machine based on asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans; and
providing the modified learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans based on the asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans.
A twelfth example provides a system according to the eleventh example, wherein:
the asymmetrical penalties include unequal first and second penalties, the first penalty to be applied for incorrectly classifying a candidate first-type travel plan being greater than the second penalty to be applied for incorrectly classifying a candidate second-type travel plan.
A thirteenth example provides a system according to the eleventh example or the twelfth example, wherein:
the reference metadata of the reference travel plans indicates source entities that each reserved a corresponding reference travel plan among the reference travel plans for a corresponding user; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated source entities that each reserved a corresponding reference travel plan for a corresponding user.
A fourteenth example provides a system according to any of the eleventh to thirteenth examples, wherein:
the reference metadata of the reference travel plans indicates ratios of layovers to destination cities, the indicated ratios respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated ratios of layovers to destination cities.
A fifteenth example provides a system according to any of the eleventh through fourteenth examples, wherein:
the reference metadata of the reference travel plans includes indications of whether conventions occurred in destination cities respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indications of whether conventions occurred in the destination cities.
A sixteenth example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising:
accessing a training database of reference metadata descriptive of reference travel plans that include reference first-type travel plans and reference second-type travel plans;
training a learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans, the learning machine being trained based on decision trees generated from randomly selected subsets of the reference metadata that is descriptive of the reference travel plans, the randomly selected subsets each describing a corresponding randomly selected portion of the reference travel plans that include the reference first-type travel plans and the reference second-type travel plans;
modifying the trained learning machine based on asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans; and
providing the modified learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans based on the asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans.
A seventeenth example provides a machine-readable medium according to the sixteenth example, wherein:
the asymmetrical penalties include unequal first and second penalties, the first penalty to be applied for incorrectly classifying a candidate first-type travel plan being greater than the second penalty to be applied for incorrectly classifying a candidate second-type travel plan.
An eighteenth example provides a machine-readable medium according to the sixteenth example or the seventeenth example, wherein:
the reference metadata of the reference travel plans indicates sizes of destination airports, the indicated sizes respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated sizes of the destination airports.
A nineteenth example provides a machine-readable medium according to any of the sixteenth through eighteenth examples, wherein:
the reference metadata of the reference travel plans indicates total counts of destination cities, the indicated total counts respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on the indicated total counts of destination cities.
A twentieth example provides a machine-readable medium according to any of the sixteenth to nineteenth examples, wherein the operations further comprise:
accessing a curated database of annual second-type events whose dates of occurrence vary by year; and wherein:
the reference metadata of the reference travel plans indicates dates of travel respectively corresponding to each reference travel plan among the reference travel plans; and
the training of the learning machine to distinguish candidate first-type travel plans from candidate second-type travel plans is based on a comparison of the dates of occurrence to the dates of travel.
A twenty-first example provides a method comprising:
accessing, by one or more processors, a candidate travel plan to be classified by a learning machine trained to distinguish candidate first-type travel plans (e.g., personal, non-tracked, non-reimbursable, or non-business travel plans) from candidate second-type travel plans (e.g., non-personal, tracked, reimbursable, or business travel plans);
accessing, by the one or more processors, the learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans, the learning machine being trained based on decision trees generated from randomly selected subsets of reference metadata that is descriptive of reference travel plans that include reference first-type travel plans and reference second-type travel plans, the randomly selected subsets each describing a corresponding randomly selected portion of the reference travel plans, the trained learning machine being modified based on asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans;
inputting, by the one or more processors, candidate metadata of the candidate travel plan to the learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans and modified based on the asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans; and
causing, by the one or more processors, presentation of a classification that indicates a type of the candidate travel plan, the indicated type being output from the learning machine in response to the inputting of the candidate metadata of the candidate travel plan.
According to such a method, a device (e.g., device 130) may access and use the trained and modified learning machine 120 to classify a candidate travel plan and provide an indication of its type.
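The twenty-first example describes run-time use: candidate metadata is input to the trained, penalty-modified learning machine, which outputs a type classification. A self-contained toy sketch of that flow is below; the three hand-written depth-1 "trees" and their thresholds are hypothetical stand-ins for trees that would actually be learned from the reference metadata, and the penalty-weighted vote mirrors the asymmetrical-penalty modification.

```python
# Each "tree" is a hypothetical depth-1 rule voting first-type (1) or not (0).
# A real learning machine would generate these from random subsets of the
# reference metadata rather than hard-coding them.
TREES = [
    lambda m: 1 if m["layover_ratio"] >= 2.0 else 0,    # many layovers per city
    lambda m: 1 if m["destination_count"] >= 3 else 0,  # multi-city itinerary
    lambda m: 0 if m["convention_flag"] else 1,         # convention suggests second-type
]

def classify(metadata, penalty_first=1.0, penalty_second=1.0):
    """Vote the trees, then pick the label with the lower expected penalty."""
    p_first = sum(tree(metadata) for tree in TREES) / len(TREES)
    cost_first = (1.0 - p_first) * penalty_first    # cost of emitting "first-type"
    cost_second = p_first * penalty_second          # cost of emitting "second-type"
    return "first-type" if cost_first <= cost_second else "second-type"

candidate = {"layover_ratio": 2.5, "destination_count": 4, "convention_flag": 1}
print(classify(candidate))  # two of three trees vote first-type -> "first-type"
```

With an asymmetrical penalty such as `penalty_first=3.0`, the same two-of-three vote flips to "second-type", illustrating how the penalty modification changes the presented classification without retraining the trees.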
A twenty-second example provides a method according to the twenty-first example, wherein:
the candidate metadata of the candidate travel plan indicates a source entity that reserved the candidate travel plan for a corresponding user; and
the learning machine outputs the type of the candidate travel plan based on the source entity that reserved the candidate travel plan for the corresponding user.
A twenty-third example provides a method according to the twenty-first or twenty-second example, wherein:
the candidate metadata of the candidate travel plan indicates a size of a destination airport corresponding to the candidate travel plan; and
the learning machine outputs the type of the candidate travel plan based on the size of the destination airport corresponding to the candidate travel plan.
A twenty-fourth example provides a method according to any of the twenty-first through twenty-third examples, wherein:
the candidate metadata of the candidate travel plan indicates a ratio of layovers to destination cities corresponding to the candidate travel plan; and
the learning machine outputs the type of the candidate travel plan based on the ratio of layovers to destination cities corresponding to the candidate travel plan.
A twenty-fifth example provides a method according to any of the twenty-first through twenty-fourth examples, wherein:
the candidate metadata of the candidate travel plan indicates a total count of destination cities corresponding to the candidate travel plan; and
the learning machine outputs the type of the candidate travel plan based on the total count of destination cities corresponding to the candidate travel plan.
A twenty-sixth example provides a method according to any of the twenty-first through twenty-fifth examples, wherein:
the candidate metadata of the candidate travel plan includes an indication of whether a convention occurred in a destination city corresponding to the candidate travel plan; and
the learning machine outputs the type of the candidate travel plan based on the indication of whether the convention occurred in the destination city corresponding to the candidate travel plan.
A twenty-seventh example provides a method according to any of the twenty-first through twenty-sixth examples, wherein:
the candidate metadata of the candidate travel plan indicates one or more dates of travel corresponding to the candidate travel plan; and
the learning machine outputs the type of the candidate travel plan based on the one or more dates of travel corresponding to the candidate travel plan.
A twenty-eighth example provides a system (e.g., a computer system for training a learning machine) comprising:
one or more processors; and
a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising:
accessing a candidate travel plan to be classified by a learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans;
accessing the learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans, the learning machine being trained based on decision trees generated from randomly selected subsets of reference metadata that is descriptive of reference travel plans that include reference first-type travel plans and reference second-type travel plans, the randomly selected subsets each describing a corresponding randomly selected portion of the reference travel plans, the trained learning machine being modified based on asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans;
inputting candidate metadata of the candidate travel plan to the learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans and modified based on the asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans; and
causing presentation of a classification that indicates a type of the candidate travel plan, the indicated type being output from the learning machine in response to the inputting of the candidate metadata of the candidate travel plan.
A twenty-ninth example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising:
accessing a candidate travel plan to be classified by a learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans;
accessing the learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans, the learning machine being trained based on decision trees generated from randomly selected subsets of reference metadata that is descriptive of reference travel plans that include reference first-type travel plans and reference second-type travel plans, the randomly selected subsets each describing a corresponding randomly selected portion of the reference travel plans, the trained learning machine being modified based on asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans;
inputting candidate metadata of the candidate travel plan to the learning machine trained to distinguish candidate first-type travel plans from candidate second-type travel plans and modified based on the asymmetrical penalties for incorrectly distinguishing candidate first-type travel plans from candidate second-type travel plans; and
causing presentation of a classification that indicates a type of the candidate travel plan, the indicated type being output from the learning machine in response to the inputting of the candidate metadata of the candidate travel plan.
A thirtieth example provides a carrier medium carrying machine-readable instructions for controlling a machine to carry out the operations (e.g., method operations) performed in any one of the previously described examples.
11861461
DETAILED DESCRIPTION
In the following description, numerous specific details are set forth to provide a more thorough understanding of the various embodiments. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details.
System Overview
FIG. 1 is a conceptual illustration of a system 100 configured to implement one or more aspects of the present invention. The system 100 includes, without limitation, compute instances 110(1)-110(3), a user device 190, a training database 120, and a model database 140. For explanatory purposes, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed. Any number of the components of the system 100 may be distributed across multiple geographic locations or implemented in one or more cloud computing environments (i.e., encapsulated shared resources, software, data, etc.) in any combination. In alternate embodiments, the system 100 may include any number of compute instances 110, any number of user devices 190, and any number and type of databases in any combination. As shown, each of the compute instances 110 includes, without limitation, a processor 112 and a memory 116. The processor 112 may be any instruction execution system, apparatus, or device capable of executing instructions. For example, the processor 112 could comprise a central processing unit (“CPU”), a graphics processing unit (“GPU”), a controller, a micro-controller, a state machine, or any combination thereof. The memory 116 stores content, such as software applications and data, for use by the processor 112 of the compute instance 110. In alternate embodiments, each of the compute instances 110 may include any number of processors 112 and any number of memories 116 in any combination.
In particular, any number of the compute instances 110 (including one) may provide a multiprocessing environment in any technically feasible fashion. The memory 116 may be one or more of a readily available memory, such as random access memory (“RAM”), read only memory (“ROM”), floppy disk, hard disk, or any other form of digital storage, local or remote. In some embodiments, a storage (not shown) may supplement or replace the memory 116. The storage may include any number and type of external memories that are accessible to the processor 112. For example, and without limitation, the storage may include a Secure Digital Card, an external Flash memory, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. Each of the compute instances 110 is configured to implement one or more applications or subsystems of applications. For explanatory purposes only, each application is depicted as residing in the memory 116 of a single compute instance 110 and executing on a processor 112 of the single compute instance 110. However, as persons skilled in the art will recognize, the functionality of each application may be distributed across any number of other applications that reside in the memories 116 of any number of compute instances 110 and execute on the processors 112 of any number of compute instances 110 in any combination. Further, the functionality of any number of applications or subsystems may be consolidated into a single application or subsystem. In particular, the compute instance 110(1) is configured to provide a workflow that trains machine learning models based on training designs having associated style(s) and uses any number of trained machine learning models to account for stylistic preferences when automatically generating and evaluating designs. Each design represents, without limitation, any number and type of objects digitally in any technically feasible fashion.
For instance, in some embodiments, one or more designs are computer-aided design (“CAD”) geometry models that represent the geometry of an object in any technically feasible fashion (e.g., volumetric, surface boundary, etc.) and format (e.g., mesh, boundary representation, etc.) that is suitable for design/manufacturing/processing. In other embodiments, any number of designs specify the position and orientation of any number of virtual objects (e.g., design primitives), where each virtual object digitally represents an associated object. In some embodiments, any number of designs specify 3D shapes as point clouds. In various embodiments, any number of designs are 3D images or two-dimensional (“2D”) images of objects. Each 2D or 3D image may be a hand drawing, a sketch, a photo, a frame of a video, etc. A typical conventional design process for generating a design for an object while taking into account stylistic preferences is primarily manual. A designer manually generates an initial design that reflects the desired style of the object and then manually modifies the initial design to generate a production design that meets the functional requirements of the object. One drawback of a manual design process is that generating and modifying the initial design can be tedious and prohibitively time-consuming. If the time allocated for design activities is limited, then the designer may be able to consider only a limited number of design options during the design process, which can reduce the overall quality of the production design. Additionally, many novice designers are unable to manually generate designs that have a desired style without assistance from more experienced designers who are familiar with that particular style. In an attempt to reduce the time required for design activities, some designers use a conventional generative design process.
The designer configures a generative design application to generate a generative design space that includes a vast number (e.g., thousands) of designs that satisfy functional goals and constraints. The designer subsequently explores the generative design space, manually viewing and evaluating the different generated designs and eventually selecting a single, final design for additional design and/or manufacturing activities. One drawback of using a conventional generative design process is that the resulting designs oftentimes have organic shapes that are aesthetically unappealing or expensive/difficult to manufacture. Because of the prevalence of organic shapes in a typical generative design space, oftentimes none of the designs generated via a conventional generative design process are aesthetically acceptable to the designer. Further, even if a particular design generated via a conventional generative design process is aesthetically acceptable to the designer, manufacturing the organic shapes included in the design is usually inefficient. Instituting a Workflow for Stylizing Designs To address the above problems, the compute instance110(1) implements a workflow subsystem150, the compute instance110(2) implements a training application130, and the compute instance110(3) implements a stylization subsystem170. The workflow subsystem150resides in the memory116(1) and executes on the processor112(1), the training application130resides in the memory116(2) and executes on the processor112(2), and the stylization subsystem170resides in the memory116(3) and executes on the processor112(3). Together, the workflow subsystem150, the training application130, and the stylization subsystem170institute a “stylization workflow” that accounts for stylistic preferences when generating and curating any number of stylized designs182. 
The workflow subsystem150is also referred to herein as the “workflow application.” The stylization workflow includes, without limitation, a training phase, an inspiration phase, a design generation phase, and a curation phase. The stylization workflow may be used to generate and evaluate any number of stylized designs182for any industry and for any purpose. For instance, the stylization workflow may be used in industrial design to design furniture, tools, gadgets, etc. The stylization workflow may be used in architectural design to design facades, etc. The stylization workflow may be used in civil design to design bridges, roads, and the like. Importantly, the stylization workflow may be used to increase the manufacturability of existing designs or generate stylized designs182that are suitable for a particular manufacturing technique. Each of the stylized designs182is a design that is generated based on at least one target stylistic trait. As referred to herein, a “stylistic trait” may be any perceptible property and/or manufacturing-related property that characterizes a group of designs. Aesthetic traits are perceptible properties that characterize a group of designs and are therefore a subset of stylistic traits. A manufacturing-related property is associated with manufacturing physical object(s) based on a design. Each stylistic trait may be associated with a label that identifies the stylistic trait. Some examples of labels for stylistic traits are “bold,” “powerful,” “intricate,” “skinny,” “organic,” and “sharp.” Examples of stylistic traits include, but are not limited to:
- Material and material properties, such as color, texture, reflection, diffusion, specularity, and other surface finish attributes that affect the appearance and/or the texture of surfaces.
- Edge and corner sharpness, angles, and curvatures.
- Surface curvatures (e.g., doubly or singly curved, Gauss curvature, mean curvature, principal curvatures, etc.) and their distributions and statistics.
- Corner normals, edge normals, surface normals, and associated distributions and statistics.
- Minima, maxima, distributions, proportions, and statistics (e.g., mean, median, etc.) of feature sizes and thicknesses.
- Topological properties of a shape, such as the genus of the shape, statistical characteristics of the topological network (i.e., skeleton) of the shape, and/or statistical characteristics of the geometries of the topological network.
- Combinations, repetitions, symmetries, and patterns of any number of perceptual properties and/or other stylistic traits, such as bi-grams (i.e., local combinations of properties) and associated correlations, joint probabilities, and statistics.
- Other subjective or objective, local or global, perceptual characteristics that may not necessarily be defined geometrically or mathematically but can be captured from the 2D or 3D representations of shapes and surfaces using machine-learning techniques.
- Other subjective or objective, local or global, characteristics related to the manufacturability or perceived manufacturability (i.e., with particular manufacturing processes and fabrication methods, machines, and/or tool sets) of objects and surfaces.
As referred to herein, a “style” or a “design language” is an aggregation of stylistic traits that are common among all designs in a collection of designs. Notably, a style may apply to designs associated with different classes of objects having different functionalities. For example, a design of a chair and a design of a motorcycle that have similar local curvatures and surfaces may belong to the same style. Collections of designs may be defined in any technically feasible fashion. For example, a collection of designs could include designs associated with the same era, designer, company, brand, franchise, shop, manufacturing machine, manufacturing process, and/or manufacturing tool set.
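For instance, the corner-sharpness trait listed above could be quantified as statistics over the interior angles of a 2D outline. The following is a minimal, illustrative sketch only; the function names and the polygon representation are hypothetical and not part of the disclosed system:

```python
import math

def corner_angles(polygon):
    """Interior angle (degrees) at each vertex of a closed 2D polygon,
    given as a list of (x, y) vertices in order."""
    angles = []
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i - 1]
        bx, by = polygon[i]
        cx, cy = polygon[(i + 1) % n]
        # Vectors from the corner vertex to its two neighbors.
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        # Clamp to [-1, 1] to guard against floating-point drift.
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))))
    return angles

def sharpness_statistics(polygon):
    """Summary statistics of corner angles -- one candidate stylistic trait."""
    angles = corner_angles(polygon)
    return {
        "min": min(angles),
        "max": max(angles),
        "mean": sum(angles) / len(angles),
    }
```

For a unit square, every corner angle is 90 degrees, so the minimum, maximum, and mean statistics all equal 90.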
A style may be associated with a sense of character, identity, cultural/social background, and/or manufacturing commonality (e.g., a manufacturing machine, a manufacturing tool, a manufacturing tool set, a manufacturing method, etc.). For example, one style could encapsulate a “streamlined” appearance that a particular company is known for. Another style could express commonalities between a set of parts that can be manufactured efficiently with a particular Computer Numerical Control (CNC) milling machine. Any number of styles may be well-known (e.g., Art Deco, Art Nouveau, etc.). As shown, the workflow subsystem150includes, without limitation, an interface engine152, target data160, a stylized design set180, a post-stylization engine184, an evaluation application172(1), and a curation engine188. The interface engine152may operate on any type of data received in any technically feasible fashion. Further, the interface engine152may implement any number and type of privacy features. For instance, in various embodiments, the interface engine152ensures that the data associated with each designer is not shared with other designers. In the same or other embodiments, the interface engine152allows each designer to share data with any number of other designers (e.g., within a working group or a company). In some embodiments, the interface engine152allows each designer to store and/or share data, such as the training database120and/or the model database140, with other designers via a private cloud, a public cloud, or a semi-private cloud. The interface engine152generates a graphical user interface (GUI)192, displays the GUI192on the user device190, and receives input via the GUI192. The user device190may be any type of device that is capable of transmitting input data and/or displaying visual content. For example, the user device190could be a game console, a smartphone, a smart television (TV), a laptop, a tablet, or a desktop computer.
The GUI192enables any number of designers to execute any of the phases in the stylization workflow any number of times in any order in any technically feasible fashion. For example, the GUI192could provide a different execution button for each phase and, at any given time, disable the execution buttons for phases requiring additional information. In the training phase, the interface engine152generates the training database120based on input received via the GUI192. The training database120includes, without limitation, any number of training designs122and any number of style labels124. Each of the training designs122may be any design associated with any type(s) of object(s). Each of the style labels124is an identifier (e.g., a string) that refers to a particular style or stylistic trait. Some examples of style labels124are “minimalist,” “Art-Deco,” “Art-Nouveau,” “Apple laptop styles circa 2010,” “Leica cameras circa 1960,” and “2.5D 3-Axis CNC.” Each of the training designs122is associated with one or more style labels124in any technically feasible fashion. Further, each of the style labels124may characterize different types of designs across different classes of objects. For example, the style label124“Art-Deco” could be associated with each of the training designs122(1)-122(3). The training design122(1) could be an image of a building, the training design122(2) could be a CAD geometry model of a car, and the training design122(3) could be a sketch of a chair entered by the designer via the GUI192. The interface engine152may generate the training database120in any technically feasible fashion. For instance, in some embodiments, a designer specifies one or more designs (e.g., a directory of designs, a single design, etc.) and a style label124via a training configuration pane in the GUI192. If the specified style label124is not already included in the training database120, then the interface engine152adds the specified style label124to the training database120.
For each specified design, the interface engine152adds the specified design as a new training design122to the training database120and associates the new training design122with the specified style label124. In the same or other embodiments, a designer specifies one or more designs, any number of negative style labels124, and any number of positive style labels124via a training configuration pane in the GUI192. A positive style label124indicates that each of the specified designs belongs to the associated style. A negative style label124indicates that each of the specified designs does not belong to the associated style. The interface engine152adds the specified style labels124that are not already included in the training database120to the training database120. For each specified design, the interface engine152adds the specified design as a new training design122to the training database120, associates the new training design122in a positive manner to each of the positive style labels124, and associates the new training design122in a negative manner to each of the negative style labels124. Upon receiving a request to train the style model132from a designer via the GUI192, the interface engine152provides the training database120to the training application130. The training application130performs any number and type of supervised machine-learning techniques to generate the style model132based on the training database120. In alternate embodiments, the training application130may generate or re-generate the style model132based on the training database120in response to any type of trigger. For example, in some embodiments, the training database120is continually updated and the training application130is configured to re-generate the style model132based on the training database120every twenty-four hours. The training application130trains the style model132to map a design to characterization information associated with one or more of the style labels124. 
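For illustration, the training database120described above could be represented as a simple structure that records designs together with their positive and negative style-label associations. All names in this sketch are hypothetical:

```python
class TrainingDatabase:
    """Illustrative sketch of the training database: designs associated
    positively or negatively with style labels."""

    def __init__(self):
        self.designs = []    # training designs, in any representation
        self.labels = set()  # style labels known to the database
        self.positive = {}   # design index -> set of positive style labels
        self.negative = {}   # design index -> set of negative style labels

    def add_design(self, design, positive_labels=(), negative_labels=()):
        """Add a design and associate it with the specified labels, adding
        any labels not already present in the database."""
        index = len(self.designs)
        self.designs.append(design)
        self.labels.update(positive_labels)
        self.labels.update(negative_labels)
        self.positive[index] = set(positive_labels)
        self.negative[index] = set(negative_labels)
        return index
```

A designer-facing application would populate such a structure from the training configuration pane; the point of the sketch is only that each design carries both positive and negative label associations.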
As referred to herein, characterization information may include, without limitation, any number and combination of probabilities, assignments, Boolean values, scores, etc. For instance, in some embodiments, the characterization information for a design is a probability distribution over the style labels124included in the training database120. For each of the styles represented by the style labels124, the probability distribution estimates a likelihood that the design belongs to the style. In other embodiments, the characterization information specifies a single style label124that is associated with the style to which the design is predicted to belong. In yet other embodiments, the characterization information includes a Boolean value for each of the style labels124. The Boolean value for a particular style label124predicts whether the design belongs to the style represented by that style label124. In alternate embodiments, the characterization information may also include any number of gradients (e.g., derivatives/sensitivities) with respect to design specifications/parameters/variations. The style model132may be any type of model including, without limitation, a binary classification model, a multiclass classification model, and a regression model. The style model132may be trained using any number of the training designs122included in the training database120to make predictions associated with any number of the style labels124. For instance, a binary classification model associated with a given style label124predicts whether a design belongs to the style associated with the style label124. A multiclass classification model predicts a probability distribution for a design across at least two of the style labels124. A regression model associated with a given style label124predicts a numeric value that indicates the similarity between the style of a design and the style associated with the style label124.
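As one illustrative possibility, a multiclass mapping from a design's feature vector to a probability distribution over style labels124can be sketched with a toy nearest-centroid model. The feature representation and class names here are assumptions; a production style model would more likely be a trained neural network or other classifier:

```python
import math

class NearestCentroidStyleModel:
    """Toy stand-in for a trained style model: maps a design feature vector
    to a probability distribution over style labels (softmax over negative
    distances to per-style centroids)."""

    def fit(self, features, labels):
        # Group the feature vectors by style label and average each group.
        groups = {}
        for x, label in zip(features, labels):
            groups.setdefault(label, []).append(x)
        self.centroids = {
            label: [sum(col) / len(col) for col in zip(*vectors)]
            for label, vectors in groups.items()
        }
        return self

    def characterize(self, x):
        """Characterization information: probability per style label."""
        scores = {
            label: -math.dist(x, centroid)
            for label, centroid in self.centroids.items()
        }
        top = max(scores.values())
        exp = {label: math.exp(s - top) for label, s in scores.items()}
        total = sum(exp.values())
        return {label: e / total for label, e in exp.items()}
```

A design whose features sit near the "minimalist" centroid receives most of the probability mass for that label, which matches the probability-distribution form of characterization information described above.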
After training, the training application130adds the style model132to the model database140. The model database140may include any number and type of style models132that are generated in any technically feasible fashion. Each of the style models132may be any type of executable software that maps a design to characterization information in any technically feasible fashion based on any number of stylization algorithms. If the style model132is trained via machine-learning techniques, then the stylization algorithms are determined during the training process. Notably, each of the style models132can map a design associated with any class of objects to characterization information. In particular, if the style model132(x) is trained via machine-learning techniques using the training database120, then the style model132(x) can reliably map a design associated with a particular class of objects to characterization information irrespective of whether any of the training designs122are associated with that class of objects. In alternate embodiments, the training application130may generate multiple style models132based on the training database120, where each of the style models132classifies designs based on a different subset of the style labels124and/or training designs122. In the same or other embodiments, the training application130generates any number of style models132based on any number of training databases120that are acquired (e.g., generated or retrieved from any accessible memory) in any technically feasible fashion. In alternate embodiments, the interface engine152and the training application130may perform any number and type of operations in conjunction with any number of other software applications to generate any number of style models132in any technically feasible fashion. For instance, in element-based training, the interface engine152generates the training database120that includes design elements and associated style labels124.
The interface engine152receives input specifying design elements (e.g., edges, surfaces, etc.) in any number of designs and any number of positive and/or any number of negative style labels124. A positive style label124specifies that the presence of each of the specified design elements in a design indicates that the design belongs to the associated style. A negative style label124specifies that the presence of each of the specified design elements in a design indicates that the design does not belong to the associated style. In some embodiments, the training application130performs semi-supervised machine-learning operations to generate any number of style models132based on any number of the training designs122and any amount of designer input. For instance, in some embodiments, the training application130executes any number and type of unsupervised learning techniques, such as clustering, or applies any amount of previous training and knowledge to group the training designs122into different styles (labeled or unlabeled groups). Based on the groups, the training application130may then cluster new training designs122into groups, discover new groups, and suggest the style labels124for the groups. The interface engine152may display the suggested style labels124for the groups via the GUI192and allow a designer to review and correct the suggested style labels124. The interface engine152may enable a designer to review and correct the suggested style labels124via the GUI192in any technically feasible fashion. For example, the GUI192could include graphical widgets that enable the designer to add and/or edit the suggested style labels124, drag and drop a group into another group to merge groups and the associated suggested style labels124, etc. In various embodiments, the training application130performs any number and type of unsupervised machine-learning operations in addition to any number of supervised machine-learning operations to generate any number of style models132.
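The clustering step described above could be sketched, under the assumption that each training design has already been reduced to a numeric feature vector, with a minimal k-means grouping. All names are hypothetical:

```python
import math
import random

def kmeans(points, k, iterations=20, seed=0):
    """Minimal k-means sketch: groups training-design feature vectors into k
    unlabeled style groups that a designer could then review and label."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assignment = [0] * len(points)
    for _ in range(iterations):
        # Assign each design to the nearest group center.
        assignment = [
            min(range(k), key=lambda j: math.dist(p, centers[j]))
            for p in points
        ]
        # Recompute each center as the mean of its assigned designs.
        for j in range(k):
            members = [p for p, a in zip(points, assignment) if a == j]
            if members:
                centers[j] = [sum(col) / len(col) for col in zip(*members)]
    return assignment, centers
```

Designs that land in the same group would then be presented together in the GUI so that the designer can confirm, rename, or merge the suggested style labels.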
In some embodiments, the training application130may perform data mining operations to acquire the training designs122from the web or any other resource (e.g., data lakes, etc.) without human intervention. Further, the training application130may determine any number of relationships between the training designs122and any number of style labels124based on any amount and type of data or metadata. For example, the training application130could determine the relationships between training designs122that are images and any number of style labels124based on the proximity and relation of words in textual data (that may be potential style labels124) to the images on web pages. In another example, the training application130could search for images associated with certain words (e.g., “bold,” “strong,” “intricate,” etc.) via an internet search engine, or similar technology, and then use the mined images as training designs122or to provide additional data for any number of machine-learning operations. In the same or other embodiments, the training application130may perform data mining operations to determine any number of relationships between designs, potential training designs122, and training designs122. In alternate embodiments, any number of the style models132may be pre-trained. In the same or other embodiments, the training application130may perform any number and type of operations that customize any number of pre-trained style models132based on any amount and type of input received via the GUI192and the interface engine152. In various embodiments, the training application130may periodically perform any number of data mining operations to update any amount of training data (including the training database120) and re-generate any number of style models132based on the newly acquired training data. In alternate embodiments, the training database120may be replaced or supplemented with any type of method for acquiring training data.
For example, in some embodiments, the training application130implements Federated Learning techniques to generate one or more style models132. As persons skilled in the art will recognize, “Federated Learning” is a collaborative machine-learning technique that decentralizes the training process in a way that allows different users to train a single model with user-specific private data (e.g., the training designs122) without actually sending the data to a central training process, thereby maintaining privacy. The model database140may include any number and types of style models132and may be stored in any technically feasible fashion. For instance, the model database140could be stored in the memory116of one of the compute instances110(1)-110(3), in the memory116of any other compute instance110such as a model server, or in a private cloud, a public cloud, a semi-private cloud, a content delivery network (“CDN”), etc. Access to each of the style models132included in the model database140may be open (i.e., accessible to any designer) or may be restricted in any technically feasible fashion to a specific group of designers. In various embodiments, the training application130may implement any number and type of machine-learning algorithms in any technically feasible fashion to determine any number and type of style labels124and/or generate any number and type of style models132. Examples of machine-learning techniques include, without limitation, the following types of algorithms: support vector machines (“SVMs”), artificial neural networks (including deep learning), Bayesian networks, genetic algorithms, regression, decision trees, random forests, gradient boosting, k-nearest neighbors, k-means, long short-term memory (“LSTM”) and/or other recurrent neural networks (“RNNs”), etc.
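A single aggregation round of the Federated Learning scheme mentioned above could be sketched as a sample-weighted average of locally trained model weights, in the style of federated averaging (FedAvg). The flat weight-vector representation is a simplifying assumption:

```python
def federated_average(weight_sets, sample_counts):
    """Sketch of one FedAvg aggregation round: average model weights from
    several designers, weighted by the size of each designer's private
    training set, without ever collecting the training designs themselves."""
    total = sum(sample_counts)
    return [
        sum(w[i] * n for w, n in zip(weight_sets, sample_counts)) / total
        for i in range(len(weight_sets[0]))
    ]
```

Each designer would train locally on their own training designs122and send only the resulting weights; the aggregated weights form the shared style model132 for the next round.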
In the target style specification stage, the interface engine152interacts with a designer via the GUI192to generate the target data160that guides the behavior of the stylization subsystem170, also referred to herein as the “stylization application.” The target data160includes, without limitation, a target style specification166. As depicted in dotted boxes, the target data160may also include, without limitation, an initial design set162or a synthesis configuration164. The target style specification166indicates any number of style preferences in any technically feasible fashion that is consistent with the style labels124and the stylization subsystem170. The interface engine152may generate the target style specification166in any technically feasible fashion. For instance, in some embodiments, the designer selects any number of the style labels124as individual positive targets and any number of other style labels124as individual negative targets via the GUI192. In response, the interface engine152generates the target style specification166that causes the stylization subsystem170to attempt to generate designs that, with respect to style, belong to at least one of the positive targets and do not belong to any of the negative targets. In other embodiments, the designer selects any number of the style labels124as a combined positive target and, in response, the interface engine152generates the target style specification166that causes the stylization subsystem170to attempt to generate designs that, with respect to style, belong to all of the positive targets. For example, the designer could select a combined positive target of “company xyz” and “CNC machine X.” In response, the interface engine152would generate the target style specification166that causes the stylization subsystem170to attempt to generate designs that are characteristic of company xyz and can be efficiently manufactured using the CNC machine X.
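The positive/negative target semantics described above can be expressed, for illustration, as a simple compliance predicate over a design's predicted style labels. All names are hypothetical:

```python
def satisfies_target(predicted_labels, positive_targets, negative_targets,
                     combined=False):
    """Sketch of the target-style semantics: a design must avoid every
    negative target and either belong to at least one positive target
    (individual targets) or to all of them (a combined positive target)."""
    predicted = set(predicted_labels)
    # Any negative target rules the design out.
    if predicted & set(negative_targets):
        return False
    if combined:
        # Combined positive target: the design must match every label.
        return set(positive_targets) <= predicted
    # Individual positive targets: matching any one suffices.
    return bool(predicted & set(positive_targets))
```

The `combined` flag distinguishes the two modes of the target style specification166: individual positive targets versus a combined positive target such as “company xyz” plus “CNC machine X.”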
In some embodiments, the stylization subsystem170modifies an initial design based on the target style specification166to generate one or more stylized designs182. In such embodiments, the target data160includes the initial design set162that specifies any number of initial designs. In other embodiments, the stylization subsystem170generates the stylized designs182based on the target style specification166and the synthesis configuration164. The synthesis configuration164specifies any amount and type of control items that impact the behavior of a synthesis algorithm and are not directly related to a style. For instance, the synthesis configuration164may specify, without limitation, any number and type of optimization criteria, design constraints, objectives, regularization values, and bias values in any combination. The control items may be related to physical and/or mechanical performance (e.g., stiffness, displacement, stress, strain, heat dissipation, weight, mass, center of gravity, stability, buckling, natural frequencies, etc.), environmental impact, energy efficiency, ergonomics, manufacturing time and costs, running costs, life-cycle costs, etc. For example, the synthesis configuration164for designing a lamp could include an objective to maximize the amount of visible light emitted from the lamp, an objective to minimize the weight of the lamp, and a mechanical stability constraint that constrains the projection of the center of gravity of the lamp to be inside the footprint of the lamp. The stylization subsystem170may perform any number and type of optimization or editing operations to generate the stylized designs182that reflect the synthesis configuration164and the target style specification166. Advantageously, the interface engine152may configure the GUI192to enable the designer to efficiently specify the target data160in any technically feasible fashion.
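For illustration, the synthesis configuration164could be sketched as a set of weighted objectives plus hard constraints evaluated on a candidate design. All names, including the lamp-like example fields in the usage note, are hypothetical:

```python
class SynthesisConfiguration:
    """Sketch of control items that guide synthesis but are unrelated to
    style: weighted objective callables plus constraint predicates."""

    def __init__(self, objectives, constraints):
        self.objectives = objectives    # list of (weight, fn) pairs
        self.constraints = constraints  # list of predicate fns

    def evaluate(self, design):
        """Weighted objective total, or None if any constraint is violated."""
        if not all(check(design) for check in self.constraints):
            return None
        return sum(weight * fn(design) for weight, fn in self.objectives)
```

For the lamp example, one objective could reward emitted light, a negatively weighted objective could penalize weight, and a constraint could reject designs whose center of gravity falls outside the footprint.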
For instance, in various embodiments, the interface engine152displays any number of the style labels124along with thumbnails of the training designs122belonging to the associated style to facilitate the generation of the target style specification166. In the same or other embodiments, the interface engine152enables the designer to select and/or sketch via the GUI192any number of initial designs included in the initial design set162and/or any number of optimization criteria and/or constraints specified in the synthesis configuration164. In alternate embodiments, the target data160includes any number of core elements (not shown) as additional objective(s) or constraint(s) that guide the behavior of the stylization subsystem170. The core elements are either constraints or recommendations for generating the stylized designs182. The core elements are global key points, lines, curves, corners, edges, profiles, and/or surfaces that encapsulate some general stance, feeling, or character for a class of objects. If some of the surfaces, edges, and key features of a design follow the core elements for the associated class of objects, then the design conveys a certain character. In contrast with a style, core elements for a particular class of objects are dependent on a functional aspect of the class of objects and, consequently, the relevance and applicability of core elements is limited to that class of objects. For example, the core elements for a motorcycle could define an overall appearance via two circles representing wheels, a triangle representing the engine and seat that is connected to the rear wheel, and a line connecting the front wheel to the triangle that extends to the handlebars. If the design of a motorcycle complies with the core elements, then the overall appearance of the design conveys a desired character (e.g., fast, powerful, etc.).
However, the design of another object (e.g., a boat or a truck) that complies with the same core elements does not necessarily convey the desired character. During the inspiration phase, the interface engine152may determine the core elements in any technically feasible fashion. For instance, in some embodiments, the interface engine152enables the designer to specify (e.g., sketch) the core elements via the GUI192in a stand-alone fashion or superimposed on an existing design (e.g., one of the training designs122). In other embodiments, a core element extraction application (not shown) implements any number of machine-learning techniques to generate any number of core elements based on the subset of the training designs122associated with a selected class of objects and a selected style label124. In some embodiments, the core element extraction application may generate a set of core elements based on a selected set of designs and then assign a style label124to the set of core elements. Subsequently, the core element extraction application may automatically generate new core elements based on additional designs that are associated with the style label124. To initiate the design generation phase, the workflow subsystem150selects one or more of the style models132from the model database140in any technically feasible fashion. For instance, in some embodiments, the workflow subsystem150selects the style model(s)132based on designer input received via the GUI192and the interface engine152. In other embodiments, the workflow subsystem150compares the style labels124that each of the style models132has learned to the style labels124that are referenced in the target style specification166. As referred to herein, the style labels124that a given style model132(x) has “learned” are the style labels124included in the training database120(x) that the training application130used to train the style model132(x).
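The comparison between learned style labels124and the labels referenced in the target style specification166could be sketched as a greedy covering selection. This is a simplification, and all names are hypothetical:

```python
def select_style_models(models, target_labels):
    """Greedy sketch of the model-selection step: pick style models until,
    together, they have learned every style label referenced in the target
    style specification. `models` maps model name -> set of learned labels."""
    remaining = set(target_labels)
    selected = []
    while remaining:
        # Pick the model that covers the most still-uncovered labels.
        name = max(models, key=lambda m: len(models[m] & remaining))
        if not models[name] & remaining:
            raise ValueError("no model has learned: " + ", ".join(sorted(remaining)))
        selected.append(name)
        remaining -= models[name]
    return selected
```

The greedy choice is one of many workable heuristics; the requirement stated in the text is only that the selected models, together, have learned every referenced label.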
The workflow subsystem150then selects the style model(s)132that, together, have learned the style labels124that are referenced in the target style specification166. If the target data160includes the initial design set162, then the workflow subsystem150executes the design generation phase separately for each of the initial designs included in the initial design set162and aggregates the resulting stylized designs182into the stylized design set180. To execute the design generation phase for an initial design, the workflow subsystem150configures the stylization subsystem170to generate stylized designs182based on the target style specification166, the selected style model(s)132, and the initial design. If, however, the target data160does not include the initial design set162, then the workflow subsystem150configures the stylization subsystem170to generate the stylized designs182included in the stylized design set180based on the target style specification166and the synthesis configuration164. During the design generation phase, the stylization subsystem170generates any number of stylized designs182based on the selected style model(s)132and either the synthesis configuration164or one of the initial designs included in the initial design set162. As shown, the stylization subsystem170includes, without limitation, the evaluation application172(2) and a generation application174. Together, the evaluation application172(2) and the generation application174generate the stylized designs182in an iterative design process. As described in greater detail in conjunction withFIGS.2and3, the evaluation application172(2) receives a current design and computes a style score for the current design based on the target style specification166and the selected style model(s)132. First, the evaluation application172(2) computes characterization information for the current design based on the selected style model(s)132.
More precisely, for each of the selected style model(s)132, the evaluation application172provides the current design as an input to the selected style model132. The output of the selected style model132is model-specific characterization information associated with the style labels124that the selected style model132has learned. The evaluation application172then aggregates the model-specific characterization information in any technically feasible fashion to generate the characterization information for the current design. Subsequently, the evaluation application172(2) computes a style score for the current design based on the characterization information and the target style specification166. The style score for a current design is a value for a style metric that indicates a level of compliance that the current design has with the target style specification166. The evaluation application172(2) may compute the style score in any technically feasible fashion. For instance, in some embodiments, the characterization information is a probability distribution and the evaluation application172(2) compares each of the probabilities included in the style distribution to the target style specification166based on the associated style labels124. If the target style specification166specifies the style label124(x) as a positive target, then the evaluation application172(2) increases the style score as the probability associated with the style label124(x) increases. If the target style specification166specifies the style label124(x) as a negative target, then the evaluation application172(2) decreases the style score as the probability associated with the style label124(x) increases. 
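For illustration only, the positive-target/negative-target scoring described above can be sketched as a short routine. The function name, the dictionary representations of the style distribution and the target style specification, and the 0-to-100 scaling are assumptions made for this sketch, not interfaces defined by the system:

```python
def style_score(style_distribution, target_spec):
    """Score a design's compliance with a target style specification.

    style_distribution: dict mapping style label -> probability.
    target_spec: dict mapping style label -> +1 (positive target) or
                 -1 (negative target); unreferenced labels are ignored.
    Returns a value on an illustrative 0-to-100 scale.
    """
    raw = 0.0
    for label, sign in target_spec.items():
        # Probability mass on positive targets raises the score;
        # probability mass on negative targets lowers it.
        raw += sign * style_distribution.get(label, 0.0)
    # Map the raw value from [-1, 1] onto a 0-100 scale.
    return 50.0 * (raw + 1.0)

dist = {"art-deco": 0.7, "art-nouveau": 0.2, "baroque": 0.1}
spec = {"art-deco": +1, "baroque": -1}
print(round(style_score(dist, spec), 6))  # 80.0
```

Under this sketch, a design classified mostly as a positive-target style scores high, while probability assigned to a negative-target style pulls the score down, matching the comparison behavior described above.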
In alternate embodiments (e.g., where the generation application174implements a gradient-based optimization algorithm), the style score may also include the gradients of the style score with respect to design specifications/variables/parameters that allow the generation application174to make the proper modifications to the design specifications/variables/parameters towards achieving the target style specification166. The generation application174generates one or more current designs based on the style score(s), any number of optimization algorithms, and any number of shape generation algorithms in any combination. An optimization algorithm may modify an existing design, synthesize a new design, generate a control set that configures a shape generation algorithm to modify an existing design, and/or generate a control set that configures a shape generation algorithm to synthesize a new design. A shape generation algorithm generates a design that includes any number of shapes based on a control set. The generation application174either modifies existing design content or synthesizes new design content. If the generation application174receives an initial design from the workflow subsystem150, then the generation application174modifies existing content. The generation application174sets a current design equal to the initial design and then performs an iterative design process that incrementally modifies the current design to generate one or more stylized designs182. For each iteration, the evaluation application172(2) computes style score(s) for the current design(s) based on the selected style model(s)132and the target style specification166. The generation application174then modifies the current design(s) based on an objective of optimizing the style score(s). The generation application174may implement any number and type of optimization algorithms to modify the current design(s). 
For instance, the generation application174may perform any number and combination of topology optimization algorithms, parametric optimization algorithms, and constrained shape reconstruction algorithms. After the final iteration, the generation application174transmits the current design(s) as stylized design(s)182to the workflow subsystem150. In some embodiments, the generation application174may also transmit the style score(s) associated with the stylized design(s)182to the workflow subsystem150. The workflow subsystem150then adds the stylized design(s)182to the stylized design set180.FIG.2describes one embodiment of the stylization subsystem170that modifies existing design content in greater detail. If, however, the generation application174does not receive an initial design, then the generation application174synthesizes new content based on the synthesis configuration164. More specifically, the generation application174performs an iterative design process based on the synthesis configuration164and an objective of optimizing the style scores to generate the stylized designs182. To initiate the iterative design process, the generation application174generates a current design set of one or more current designs based on the synthesis configuration164. For each iteration, the evaluation application172computes a style score for each of the current designs included in the current design set. The generation application174then synthesizes a new current design set based on the style scores and the synthesis configuration164. The generation application174may implement any number and type of optimization algorithms to synthesize new design content. For instance, the generation application174may implement any number and combination of generative design algorithms, evolutionary design algorithms, multi-objective optimization algorithms, etc. 
After the final iteration, the generation application174transmits the current design(s) included in the current design set as the stylized designs182to the workflow subsystem150. In some embodiments, the generation application174may also transmit the style score(s) associated with the stylized design(s)182to the workflow subsystem150. The workflow subsystem150then adds the stylized design(s)182to the stylized design set180.FIG.3describes one embodiment of the stylization subsystem170that synthesizes new design content in greater detail. The stylization subsystem170may terminate the iterative design process based on any number and type of completion criteria. For instance, in some embodiments, the stylization subsystem170may terminate the iterative design process after a maximum number of iterations (e.g., 1,000) that is specified via the GUI192. In the same or other embodiments, the stylization subsystem170may terminate the iterative design process when the average style score of the current design(s) is greater than a minimum style score (e.g., 95). As a general matter, the stylization subsystem170may implement any number and type of optimization algorithms, synthesis algorithms, shape generation algorithms, and style metrics to generate any number of stylized designs182that reflect the target style specification166. Accordingly, in various embodiments, the resulting stylized designs182may vary in shape, topology, performance, etc. Further, the generation application174may perform operations based on any amount of data generated during any number (including zero) of previous iterations. For instance, in some embodiments, the generation application174could execute a stochastic optimization algorithm (e.g., a simulated annealing algorithm) to randomly generate minor modifications to be applied to a current design. 
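A minimal instance of such an iterative modification process, assuming a greedy stochastic search together with the maximum-iteration and minimum-style-score completion criteria mentioned above, might look as follows. All names, the one-parameter toy "design," and the greedy acceptance rule are illustrative assumptions, not the described implementation:

```python
import random

def stylize(initial_design, score_fn, perturb_fn,
            max_iterations=1000, min_score=95.0, seed=0):
    """Iteratively modify a design toward a target style (sketch).

    score_fn(design) -> style score (higher = closer to the target).
    perturb_fn(design, rng) -> a randomly modified copy of the design.
    Applies two completion criteria: an iteration cap and a minimum
    acceptable style score.
    """
    rng = random.Random(seed)
    current, current_score = initial_design, score_fn(initial_design)
    for _ in range(max_iterations):
        if current_score >= min_score:
            break  # minimum-style-score completion criterion met
        candidate = perturb_fn(current, rng)
        candidate_score = score_fn(candidate)
        if candidate_score > current_score:  # greedy acceptance
            current, current_score = candidate, candidate_score
    return current, current_score

# Toy usage: a "design" is one number and the target style is value 10.
score = lambda d: 100.0 - abs(10.0 - d)
perturb = lambda d, rng: d + rng.uniform(-1.0, 1.0)
final, final_score = stylize(0.0, score, perturb)
```

A simulated annealing variant would occasionally accept score-decreasing candidates as well; the greedy rule is used here only to keep the sketch short.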
In the same or other embodiments, for each current design, the generation application174could execute a gradient-based optimization algorithm (e.g., via back-propagation) to synthesize a new current design based on the current design. In various embodiments, the generation application174could implement an evolutionary algorithm (e.g., a genetic algorithm) to synthesize a new current design set based on a current design set. In alternate embodiments, the stylization algorithm(s) encapsulated in one or more of the style models132may be replaced with any type of stylization algorithms expressed in any technically feasible fashion and the techniques described herein are modified accordingly. For instance, in some alternate embodiments, each of the style labels124is associated with a different style construction set that encapsulates one or more stylization algorithms and includes, without limitation, any number of design primitives, design elements, and design operations in any combination. The training database120, the training application130, the model database140, the style models132, and the stylization subsystem170are replaced with a "style construction subsystem." The style construction subsystem constructs the stylized designs182based on a target construction set that is determined based on the style construction sets and the target style specification166. In some embodiments, the style construction subsystem generates one or more of the style construction sets based, at least in part, on input received via the GUI192. For example, the style construction subsystem could suggest a style construction set, including design parameter constraints, and an associated style label124via the GUI192and the interface engine152. A designer may then edit and modify the style construction set and the associated style label124via the GUI192and the interface engine152. 
In the same or other embodiments, the style construction subsystem may implement any number of machine-learning techniques to generate each of the style construction sets. For example, in some embodiments, the style construction subsystem implements an evolutionary algorithm to generate the style construction set for a specified style label124based on a specified set of training designs122. The design primitives may include, without limitation, any parts (including all) and/or combinations of any number of prisms, spheres, ellipsoids, cubes, cuboids, pyramids, truncated pyramids, cylinders, cones, truncated cones, etc. The design elements and the design operations may include, without limitation, profiles and cross-sections, swarf paths, fillets and bevels, revolutions, extrusions, Boolean operations (e.g., union, subtraction, intersection), and so forth. Any number of the design primitives, design elements, and design operations may be constrained in terms of any number of associated design parameters (e.g., size, length, radius, position, orientation, etc.). Each design parameter may be constrained to have specified relationships with any number of other design parameters, such as attachment relationships or alignment relationships. Each design primitive may be constrained to have specified relationships with global or local axes and origins. For example, each instance of a design primitive could be limited to positions and orientations that are parallel to a global ground plane. The style construction subsystem may combine any number of style construction sets to generate the target construction set based on the target style specification166in any technically feasible fashion. Subsequently, the style construction subsystem may implement any number of optimization algorithms in any combination to generate the stylized designs182based on the target construction set. 
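As a concrete illustration of a constrained design primitive, the sketch below models a cylinder primitive whose design parameters are bounded and whose orientation is constrained to stay parallel to a global ground plane, as in the example above. The class, field names, and bounds are hypothetical, chosen only for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Cylinder:
    """A design primitive with constrainable design parameters.

    radius and height are design parameters; ground_parallel encodes
    the example constraint that orientations remain parallel to a
    global ground plane. All fields are illustrative assumptions.
    """
    radius: float
    height: float
    ground_parallel: bool = True

def satisfies_constraints(prim, min_radius=0.5, max_radius=5.0):
    """Check a primitive against its design parameter constraints."""
    return (min_radius <= prim.radius <= max_radius
            and prim.height > 0
            and prim.ground_parallel)

print(satisfies_constraints(Cylinder(radius=2.0, height=4.0)))   # True
print(satisfies_constraints(Cylinder(radius=10.0, height=4.0)))  # False
```

A full style construction set would pair many such constrained primitives with design elements and operations (extrusions, Boolean operations, and so forth).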
Examples of optimization algorithms include, without limitation, evolutionary optimization algorithms, stochastic optimization algorithms, real optimization algorithms, and mixed-integer optimization algorithms. The style construction subsystem may construct the stylized designs182in any technically feasible fashion. For instance, the style construction subsystem may implement a constructive solid geometry ("CSG") algorithm. After the stylization subsystem170generates the stylized design set180, the post-stylization engine184may further refine any number of the stylized designs182based on a post-stylization configuration186. The post-stylization configuration186may include any number and type of objectives and constraints. For instance, in some embodiments, the interface engine152interacts with the designer via the GUI192to determine any number of post-stylization objectives and constraints (e.g., physical performance) included in the post-stylization configuration186. The post-stylization engine184then performs parametric optimization operations on each of the stylized designs182based on the post-stylization configuration186. In the curation phase, the workflow subsystem150evaluates, curates, and displays any number of the stylized designs182based on any amount and type of data. In various embodiments, the workflow subsystem150receives the style scores for the stylized designs182from the stylization subsystem170. In the same or other embodiments, the workflow subsystem150configures the evaluation application172(2) to generate a curation score set156based on a curation style specification154. In alternate embodiments, the workflow subsystem150may generate any number of curation score sets156based on any number of curation style specifications154. To generate the curation score set156, the interface engine152interacts with the designer via the GUI192to generate the curation style specification154. 
The curation style specification154indicates any number of style-based criteria for visualization and other curation activities (e.g., filtering) based on the style labels124and the style models132. In some embodiments, the interface engine152may initially set the curation style specification154equal to the target style specification166and then allow a designer to modify the curation style specification154via the GUI192. Subsequently, the workflow subsystem150selects any number of style models132included in the model database140with which to evaluate the stylized designs182based on the curation style specification154. The workflow subsystem150may select the style models132in any technically feasible fashion. For instance, in some embodiments, the workflow subsystem150may implement any of the techniques described previously herein with respect to selecting the style model(s)132with which to evaluate current designs based on the target style specification166. For each of the stylized designs182included in the stylized design set180, the evaluation application172computes a curation score based on the selected style model(s)132and the curation style specification154and adds the curation score to the curation score set156. Note that the curation score for the stylized design182(x) may differ from the style score for the stylized design182(x) previously computed during the design generation phase. The curation engine188interacts with a designer via the GUI192to perform any number of filtering, sorting, plotting, etc. operations that facilitate the evaluation of the stylized designs182based on any amount and type of data, including curation scores and style scores. For instance, in various embodiments, the interface engine152may generate a display (presented via the GUI192) showing a subset of the stylized designs182ordered according to the style scores and/or the curation scores. 
In the same or other embodiments, the curation engine188may sort, filter, cluster, and/or visually differentiate the stylized designs182based on the styles indicated via the style scores in any technically feasible fashion. Examples of visualization techniques that the curation engine188may implement to distinguish between different styles include, without limitation, color maps, grouping, axes rotation, radar maps, etc. For example, the designer could configure the curation engine188to generate a plot in which each of the stylized designs182is represented as a different dot, where the color of the dot indicates the style to which the stylized design182is most likely to belong. In another example, the designer could configure the curation engine188to generate a plot in which the horizontal axis could indicate a performance metric and one extreme of the vertical axis could indicate one style and the other extreme of the vertical axis could indicate another style (e.g., Art-Deco vs. Art-Nouveau on the vertical axis). In yet another example, the designer could configure the curation engine188to cluster the stylized designs182based on any number of the style labels124and then visually differentiate the clusters (e.g., using colors) for any number of other curation activities (e.g., plotting, sorting, etc.). In alternate embodiments, the curation engine188may enable the designer to perform filtering and/or modification operations on any number of the stylized designs182based on one or more elements (e.g., edges, surfaces, etc.). For example, the designer could select one or more elements of one of the stylized designs182and then request that the workflow subsystem150filters, includes, or excludes stylized designs182based on the presence of the selected element(s). In another example, the designer could select one or more elements for removal and the designer could select or the curation engine188could suggest one or more elements as a replacement. 
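The filtering and sorting side of curation can be sketched as a small routine that drops low-scoring designs, orders the rest by curation score, and tags each design with the style of highest probability (the style that would drive the dot's color in the plot example above). The function name and the parallel-list representation are assumptions made for this sketch:

```python
def curate(designs, curation_scores, style_distributions, min_score=0.0):
    """Filter and sort stylized designs for display (illustrative).

    designs:             list of design identifiers.
    curation_scores:     parallel list of curation scores.
    style_distributions: parallel list of dicts (label -> probability).
    Returns (design, score, dominant_style) tuples, highest score
    first, dropping designs that score below min_score.
    """
    rows = []
    for design, score, dist in zip(designs, curation_scores,
                                   style_distributions):
        if score < min_score:
            continue  # style-based filtering criterion
        # The dominant style could drive the color of the design's dot.
        dominant = max(dist, key=dist.get)
        rows.append((design, score, dominant))
    rows.sort(key=lambda row: row[1], reverse=True)
    return rows

designs = ["d1", "d2", "d3"]
scores = [80.0, 95.0, 40.0]
dists = [{"art-deco": 0.9, "art-nouveau": 0.1},
         {"art-deco": 0.3, "art-nouveau": 0.7},
         {"art-deco": 0.5, "art-nouveau": 0.5}]
table = curate(designs, scores, dists, min_score=50.0)
# table == [("d2", 95.0, "art-nouveau"), ("d1", 80.0, "art-deco")]
```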
To suggest a replacement for an element, the curation engine188could re-execute the design generation phase with any number and type of potential replacement elements to determine the replacement element that best matches the target style specification166. In response to a subsequent replacement request, the curation engine188could modify the stylized designs182. Alternatively, the curation engine188could generate a constraint corresponding to the selected element(s) and requested operation (filtering, replacement, etc.), add the constraint to the synthesis configuration164, and re-execute the design generation phase. In alternate embodiments, the interface engine152receives bio-feedback of the emotions of the designer as designs are displayed via the GUI192. The interface engine152may receive the bio-feedback from an electroencephalogram ("EEG") or any other brain-computer interface. The curation engine188may evaluate the bio-feedback to determine when the designer is focused on a particular design and/or a particular element of a particular design in any technically feasible fashion. In various alternate embodiments, the workflow subsystem150estimates the emotions and/or attention of the designer based on a machine-learning model (e.g., an RNN such as an LSTM). In various alternate embodiments, the interface engine152receives eye tracking information (e.g., eye saccades, pupillary response, etc.) and the workflow subsystem150estimates the emotions and/or focus of the designer based on the eye tracking information. As part of the curation phase, the designer may select one or more designs as production design(s)194. For example, the designer could select one of the stylized designs182as the production design194. Alternatively, the designer could modify one of the stylized designs182and/or combine elements from multiple stylized designs182to generate a modified design and then select the modified design as the production design194. 
The workflow subsystem150may execute any amount and type of activities to facilitate subsequent design and/or manufacturing activities based on the production design(s)194. For instance, in some embodiments, the workflow subsystem150may generate any number of design files that represent the production design(s)194in a format and at a level of detail that is suitable for manufacturing by a selected manufacturing tool and/or process. The workflow subsystem150may then transmit the design files to the selected manufacturing tool and/or process. At any point in time, the workflow subsystem150and/or the training application130may add new training designs122(e.g., any number of the stylized designs182, designs acquired as part of data mining activities, newly entered designs, manually modified stylized designs182, etc.) and/or style labels124to any number of training databases120. Further, the training application130may re-execute the training phase to generate and/or re-generate any number of style models132based on any number of training databases120in response to any type of trigger. For instance, in some embodiments, the training application130is configured to re-generate each of the style models132included in the model database140daily. In other embodiments, the training application130automatically re-generates any associated style models132when the training database120is updated. The workflow subsystem150enables any number of designers to execute any number of phases of the stylization workflow in any order and any number of times (including zero). For example, using the GUI192(1) displayed on the user device190(1), a first designer could execute the training phase to generate the style model132(1) stored in the model database140. 
Subsequently, using the GUI192displayed on the user device190(2), a second designer could execute the inspiration phase to generate the target data160, execute the design generation phase to generate the stylized design set180, and then execute the curation phase to evaluate the stylized design set180. During the curation phase, the second designer could determine that the style scores associated with the style label124(1) were not accurate. The second designer could then execute the training phase to add additional designs (e.g., any number of the stylized designs182) to the training database120as positive and negative examples of the style label124(1) and re-generate the style model132(1). The second designer could skip the inspiration phase and re-execute the design generation phase based on the previous target data160to generate a new stylized design set180. Finally, the second designer could re-execute the curation phase and select one of the stylized designs182included in the new stylized design set180as the production design194. Advantageously, the workflow subsystem150reduces the time required to generate and evaluate stylized designs182based on stylistic preferences. In particular, using the style models132, the designers can automatically generate and evaluate the stylized designs182based on style metrics instead of manually modifying and visually scrutinizing designs. By reducing the time required to generate and evaluate the stylized designs182relative to conventional design techniques, the workflow subsystem150allows designers to generate and evaluate a larger number of designs having the preferred stylistic traits in a given amount of time. The overall quality of the production design194can, therefore, be improved. Further, novice designers can implement the automated stylization workflow successfully, without assistance from more experienced designers. 
It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number, location, and arrangement of training databases120, model databases140, user devices190, and compute instances110may be modified as desired. In certain embodiments, one or more components shown inFIG.1may not be present. Note that the techniques described herein are illustrative rather than restrictive, and may be altered without departing from the broader spirit and scope of the invention. In particular, the workflow subsystem150, the training application130, the stylization subsystem170, the evaluation application172, the generation application174, the post-stylization engine184, and the curation engine188may be implemented in any number of software applications in any combination. Further, in various embodiments, any number of the techniques disclosed herein may be implemented while other techniques may be omitted in any technically feasible fashion.

Generating Stylized Designs

FIG.2is a more detailed illustration of the stylization subsystem170ofFIG.1, according to various embodiments of the present invention. In particular, the stylization subsystem170depicted inFIG.2iteratively modifies the initial design262(x) included in the initial design set162based on the style model132to generate a single stylized design182. In alternate embodiments, the stylization subsystem170may generate any number of stylized designs182based on the initial design262(x) and any number of style models132. Further, for each of the initial designs262included in the initial design set162, the stylization subsystem170may generate a different number of stylized designs182. For explanatory purposes only, the parenthetical number associated with each of the current design212, a control set242, a style distribution222, and a style score232specifies an associated design iteration. 
For example, the current design212(67) is generated during the 67th iteration. As shown, the stylization subsystem170includes, without limitation, the evaluation application172and the generation application174. In operation, the stylization subsystem170sets a current design212(1) equal to the initial design262(x). The evaluation application172and the generation application174then execute an iterative design process that incrementally modifies the current design212(1) to generate the stylized design182. For the kth iteration, the evaluation application172generates the style score232(k) based on the current design212(k), the target style specification166, and the style model132. The evaluation application172includes, without limitation, a classification engine220and a comparison engine230. The classification engine220generates the style distribution222(k) based on the current design212(k) and the style model132. More precisely, the classification engine220provides the current design212(k) as an input to the style model132. The output of the style model132is the style distribution222(k). The style distribution222(k) specifies estimated probabilities of the current design212(k) belonging to the different styles associated with the style labels124that the style model132learned during the training phase. In alternate embodiments, the output of the style model132may be any type of characterization information and the techniques described herein are modified accordingly. As shown, the comparison engine230generates the style score232(k) based on the style distribution222(k) and the target style specification166. The style score232(k) is a value for a style metric that indicates a level of compliance that the current design212(k) has with the target style specification166. The comparison engine230may institute any style metric and determine the style score232(k) in any technically feasible fashion. 
For instance, in some embodiments, the comparison engine230compares each of the probabilities included in the style distribution222(k) to the target style specification166based on the style labels124included in the target style specification166. If the target style specification166specifies the style label124(x) as a positive target, then the comparison engine230increases the style score232(k) as the probability associated with the style label124(x) increases. If the target style specification166specifies the style label124(x) as a negative target, then the comparison engine230decreases the style score232(k) as the probability associated with the style label124(x) increases. The stylization subsystem170then determines whether to continue iterating based on any number and type of completion criteria (not shown). Some examples of completion criteria include, without limitation, a maximum number of iterations (e.g., 1,000), a minimum style score232(e.g., 95), a maximum amount of time, etc. The completion criteria may be specified in any technically feasible fashion. For instance, in some embodiments, the completion criteria are specified via the GUI192. In alternate embodiments, the stylization subsystem170may determine whether to continue iterating at any point in the design process. For example, the stylization subsystem170could determine to cease iterating after the generation application174generates the current design212(800). If the stylization subsystem170determines to continue iterating, then the generation application174modifies the current design212(k) to generate the current design212(k+1). The generation application174includes, without limitation, an optimization engine240and a shape generation engine210. The optimization engine240executes any number and type of optimization operations based on the style score232(k) to generate the control set242(k+1). 
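The three example completion criteria named above (iteration cap, minimum style score, time budget) can be combined in a single check; the function name and default thresholds are illustrative assumptions, echoing the example values a designer might enter via the GUI:

```python
import time

def should_stop(iteration, style_score, start_time,
                max_iterations=1000, min_style_score=95.0,
                max_seconds=3600.0):
    """Combine the example completion criteria (illustrative sketch).

    Iterating stops when any criterion is met: the iteration cap is
    reached, the style score is high enough, or the time budget is
    exhausted. Thresholds mirror the example values in the text.
    """
    return (iteration >= max_iterations
            or style_score >= min_style_score
            or time.monotonic() - start_time >= max_seconds)

start = time.monotonic()
print(should_stop(1000, 50.0, start))  # True: iteration cap reached
print(should_stop(10, 96.0, start))    # True: style score high enough
print(should_stop(10, 50.0, start))    # False: keep iterating
```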
For instance, the optimization engine240may execute any number and combination of topology optimization, parametric optimization, and constrained shape reconstruction operations. In alternate embodiments, the optimization engine240may perform optimization operations based on the style score232and any amount of additional data in any technically feasible fashion. For instance, in some embodiments, the optimization engine240may perform gradient-based optimization operations based on the style score232(k), any number of previously generated current designs212, and any number of previously generated style scores232. The control set242(k+1) includes any amount and type of data that configures the shape generation engine210to generate the current design212(k+1) in any technically feasible fashion. For instance, in some embodiments, the control set242may specify any number of parameters and/or any number of geometry generation commands that enable the shape generation engine210to generate the current design212(k+1) independently of the current design212(k). In other embodiments, the control set242may specify any number of parameters and/or any number of geometry modification commands that enable the shape generation engine210to modify the current design212(k) to generate the current design212(k+1). In alternate embodiments, the optimization engine240generates the current design212(k+1) without generating the control set242(k+1) and the shape generation engine210is omitted from the generation application174. The shape generation engine210generates the current design212(k+1) in any technically feasible fashion based on the control set242(k+1) and any amount (including none) of additional information. For instance, in some embodiments, the shape generation engine210may implement any number and combination of layout generation, shape generation, and parameterization operations to generate the current design212(k+1) without referencing the current design212(k). 
In other embodiments, the shape generation engine210may implement any number and combination of layout, shape, and parameter modification operations to modify the current design212(k) to generate the current design212(k+1). If, after computing the style score232(k), the stylization subsystem170determines to cease iterating based on the completion criteria, then the stylization subsystem170transmits the current design212(k) to the workflow subsystem150as the stylized design182. Subsequently, the workflow subsystem150adds the stylized design182to the stylized design set180. In alternate embodiments, the stylization subsystem170may also transmit the style score232(k) for the current design212(k) to the workflow subsystem150. FIG.3is a more detailed illustration of the stylization subsystem170ofFIG.1, according to other various embodiments of the present invention. In particular, the stylization subsystem170depicted inFIG.3synthesizes any number of stylized designs182based on the target style specification166, the synthesis configuration164, and the style model132. In alternate embodiments, the stylization subsystem170may generate the stylized designs182based on the synthesis configuration164and any number of style models132. For explanatory purposes only, the parenthetical number associated with each of a current design set320, a set of control sets342, a style distribution set322, and a style score set332specifies an associated iteration. For example, the current design set320(67) is generated during the 67th iteration. The current design set320(k) includes, without limitation, any number of current designs212. The number of current designs212included in the current design set320(a) may vary from the number of current designs212included in the current design set320(b). The style distribution set322(k) includes, without limitation, a different style distribution222for each of the current designs212included in the current design set320(k). 
The style score set332(k) includes, without limitation, a different style score232for each of the current designs212included in the current design set320(k). The set of control sets342(k) specifies, without limitation, any number of control sets242. As shown, the stylization subsystem170includes, without limitation, the evaluation application172and the generation application174. In operation, the stylization subsystem170initializes the style score set332(0) to an empty set. The evaluation application172and the generation application174then execute an iterative design process that generates any number of stylized designs182. For the kth iteration, the generation application174generates the current design set320(k) based on the synthesis configuration164and the style score set332(k−1). As shown, the generation application174includes, without limitation, a synthesis engine310and the shape generation engine210. The synthesis engine310executes any number and type of optimization operations based on the synthesis configuration164and the style score set332(k−1) to generate the set of control sets342(k). For instance, the synthesis engine310may execute any number and combination of generative design operations, evolutionary design operations, multi-objective optimization operations, etc. In alternate embodiments, the synthesis engine310may perform optimization operations based on the synthesis configuration164, the style score set332(k−1), and any amount of additional data in any technically feasible fashion. For instance, in some embodiments, the synthesis engine310may perform gradient-based optimization operations based on the style score set332(k−1), any number of previously generated current design sets320, and any number of previously generated style score sets332.
In the same or other alternate embodiments, the synthesis engine310generates the current design set320(k) without generating the set of control sets342(k) and the shape generation engine210is omitted from the generation application174. Each of the control sets242included in the set of control sets342(k) includes any amount of data that configures the shape generation engine210to generate a different current design212that is included in the current design set320(k). For each of the control sets242(x), the shape generation engine210generates a different current design212(x) and adds the current design212(x) to the current design set320(k). As described previously in conjunction withFIG.2, the shape generation engine210may generate the current design212based on the associated control set242and any amount (including none) of additional information in any technically feasible fashion. As shown, the evaluation application172generates the style score set332(k) based on the current design set320(k). The evaluation application172includes, without limitation, the classification engine220and the comparison engine230. For each of the current designs212(x) included in the current design set320(k), the classification engine220generates the style distribution222(x) included in the style distribution set322(k) based on the style model132. More precisely, to generate the style distribution222(x), the classification engine220provides the current design212(x) included in the current design set320(k) as an input to the style model132. The output of the style model132is the style distribution222(x). In alternate embodiments, the output of the style model132may be any type of characterization information and the techniques described herein are modified accordingly. 
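The classification step can be sketched with a toy stand-in for the style model132: a linear layer followed by a softmax that maps design features to a probability per style label, i.e., a style distribution. The features, weights, and labels below are illustrative assumptions, not trained values:

```python
import math

STYLE_LABELS = ["tool set A", "tool set B"]

def style_model(design_features, weights):
    # Linear logits followed by a numerically stable softmax.
    logits = [sum(w * f for w, f in zip(row, design_features))
              for row in weights]
    peak = max(logits)
    exps = [math.exp(l - peak) for l in logits]
    total = sum(exps)
    return {label: e / total for label, e in zip(STYLE_LABELS, exps)}

weights = [[1.0, -0.5], [-1.0, 0.5]]   # stand-in for trained parameters
style_distribution = style_model([0.8, 0.2], weights)
```

A real style model would be a trained classifier over design geometry; only the shape of the output (probabilities summing to one across the known style labels) matters for the surrounding pipeline.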
Subsequently, for each of the current designs212(x) included in the current design set320(k), the comparison engine230generates the style score232(x) included in the style score set332(k) based on the style distribution222(x) included in the style distribution set322(k). The style score232(x) is a value for a style metric that indicates a level of compliance that the current design212(x) included in the current design set320(k) has with the target style specification166. The comparison engine230may institute any style metric and determine the style scores232in any technically feasible fashion. The stylization subsystem170then determines whether to continue iterating based on any number and type of completion criteria (not shown). In alternate embodiments, the stylization subsystem170may determine whether to cease iterating at any point in the design process. For instance, in alternate embodiments, the stylization subsystem170may determine whether to continue iterating immediately after the generation application174generates the current design set320(k). If the stylization subsystem170determines to continue iterating, then the generation application174modifies the current design set320(k) to generate the current design set320(k+1). Otherwise, the stylization subsystem170transmits each of the current designs212(x) included in the current design set320(k) as the stylized design182(x) to the workflow subsystem150. Subsequently, the workflow subsystem150adds each of the stylized designs182to the stylized design set180. In alternate embodiments, the stylization subsystem170may also transmit the style score set332(k) to the workflow subsystem150. Curating Stylized Designs FIG.4is an exemplary illustration of the graphical user interface (GUI)192ofFIG.1, according to various embodiments of the present invention. As shown, the GUI192depicts the initial design262(1), a design exploration plot480, the production design194, and new training data490.
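Returning to the comparison engine230described above, one simple style metric consistent with the description is to add the probability of each positive-target style and subtract the probability of each negative-target style. This is only one of the many technically feasible metrics; the dictionary layout of the target specification is an assumption for illustration:

```python
def compute_style_score(style_distribution, target_specification):
    # Add probabilities of positive-target styles and subtract those of
    # negative-target styles (one possible style metric, assumed here).
    score = 0.0
    for label, probability in style_distribution.items():
        if label in target_specification.get("positive", ()):
            score += probability
        if label in target_specification.get("negative", ()):
            score -= probability
    return score

target_spec = {"positive": {"tool set A"}, "negative": {"tool set B"}}
score = compute_style_score({"tool set A": 0.8, "tool set B": 0.2},
                            target_spec)
# score is 0.8 - 0.2 = 0.6
```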
For explanatory purposes only, during the training phase, the style model132learns the style label124(1) "tool set A" and the style label124(2) "tool set B." The style label124(1) represents a style of design that can be manufactured efficiently on a CNC machine with a first set of tools (tool set A). The style label124(2) represents a style of design that can be manufactured efficiently on the CNC machine using a second set of tools (tool set B). During the inspiration phase, the designer specifies the initial design262(1) and the target style specification166having the positive target of either the style label124(1) or the style label124(2). As shown, the initial design262(1) is a wheel-shaped mechanical part having an organic shape. During the design generation phase, the stylization subsystem170generates the stylized designs182(1)-182(16) included in the stylized design set180based on the initial design262(1). During the curation phase, the designer configures the curation engine188to generate and display the design exploration plot480. The design exploration plot480depicts each of the stylized designs182included in the stylized design set180with respect to a weight axis410and an estimated manufacturing time axis420. As shown, if the stylized design182(x) belongs to the style associated with the style label124(1) "tool set A," then the curation engine188depicts the stylized design182(x) via a square in the design exploration plot480. If the stylized design182(x) belongs to the style associated with the style label124(2) "tool set B," then the curation engine188depicts the stylized design182(x) via a circle in the design exploration plot480. Based on the design exploration plot480, the designer selects the stylized design182(8) that has the second lowest estimated manufacturing time of the stylized designs182classified as belonging to the style associated with tool set A as the production design194.
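The selection step behind the design exploration plot480can be sketched as a filter-and-sort over the stylized design set. The weights and manufacturing-time estimates below are invented for illustration; only the selection rule (second-lowest estimated time among the tool set A designs) follows the scenario above:

```python
stylized_designs = [
    {"id": 3,  "style": "tool set A", "weight": 2.4, "minutes": 30},
    {"id": 8,  "style": "tool set A", "weight": 1.8, "minutes": 35},
    {"id": 12, "style": "tool set A", "weight": 1.5, "minutes": 50},
    {"id": 16, "style": "tool set B", "weight": 2.0, "minutes": 20},
]

# Keep only designs classified as belonging to the tool set A style,
# then sort by estimated manufacturing time.
tool_set_a = sorted(
    (d for d in stylized_designs if d["style"] == "tool set A"),
    key=lambda d: d["minutes"],
)
production_design = tool_set_a[1]   # second-lowest manufacturing time
```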
The designer also interacts with the GUI192to add the new training data490to the training database120and re-generate the style model132based on the updated training database120. As shown, the new training data490specifies that the stylized design182(16), which is classified as belonging to the style associated with the tool set B, actually belongs to the style associated with the tool set A. Advantageously, re-training the style model132based on the new training data490improves the performance (e.g., increases the accuracy) of the style model132. FIGS.5A-5Bset forth a flow diagram of method steps for generating and evaluating designs based on stylistic preferences, according to various embodiments of the present invention. Although the method steps are described with reference to the systems ofFIGS.1-4, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention. As shown, a method500begins at step502, where the interface engine152displays the GUI192on the user device190to enable interaction with a designer. At step504, for any number of styles, the workflow subsystem150acquires stylization algorithm(s) based on the training database120. The workflow subsystem150may acquire any type of stylization algorithm(s) in any technically feasible fashion. For instance, in some embodiments, the workflow subsystem150configures the training application130to perform machine-learning operations to generate the style model132based on the training database120. In other embodiments, for each style, the workflow subsystem150acquires a different style construction set of design primitives, design elements, design operations, and combinations thereof. At step506, the interface engine152determines the target data160based on input received via the GUI192. At step508, the workflow subsystem150determines whether the target data160includes the initial design set162.
If, at step508, the workflow subsystem150determines that the target data160includes the initial design set162, then the method500proceeds to step510. At step510, for each of the initial designs262included in the initial design set162, the stylization subsystem170modifies the initial design262based on the target style specification166and the stylization algorithm(s) to generate any number of stylized designs182. The method500then proceeds directly to step514. If, however, at step508, the workflow subsystem150determines that the target data160does not include the initial design set162, then the method500proceeds directly to step512. At step512, the stylization subsystem170synthesizes any number of stylized designs182based on the synthesis configuration164, the target style specification166, and the stylization algorithm(s). The method500then proceeds to step514. At step514, the post-stylization engine184performs any number of post-stylization operations on the stylized designs182. At step516, the curation engine188curates and displays any number of the stylized designs182based on input received via the GUI192. At step518, the interface engine152determines whether any new training data490has been identified. If, at step518, the interface engine152determines that no new training data490has been identified, then the method500proceeds directly to step522. If, however, at step518, the interface engine152determines that new training data490has been identified, then the method500proceeds to step520. At step520, the interface engine152updates the training database120based on the new training data490. The training application130subsequently re-generates the stylization algorithm(s) based on the updated training database120. The method500then proceeds to step522. At step522, the interface engine152determines whether the production design194has been identified.
At step522, if the interface engine152determines that the production design194has not been identified, then the method500proceeds to step524. At step524, the interface engine152updates any portion (including none) of the target data160based on input received via the GUI192. The method500then returns to step508, and the workflow subsystem150re-generates and re-curates the stylized designs182. The workflow subsystem150continues to cycle through steps508-524until the interface engine152, at step522, determines that the production design194has been identified. If, however, at step522, the interface engine152determines that the production design194has been identified, then the method500proceeds to step526. At step526, the workflow subsystem150transmits the production design194to one or more software applications for subsequent design and/or manufacturing activities. The method500then terminates. FIG.6is a flow diagram of method steps for generating designs based on stylistic preferences, according to various embodiments of the present invention. Although the method steps are described with reference to the systems ofFIGS.1-4, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention. As shown, a method600begins at step602, where the stylization subsystem170acquires one or more style model(s)132, the target style specification166, and the initial design262or the synthesis configuration164. At step604, the stylization subsystem170determines whether the stylization subsystem170has received the initial design262. If, at step604, the stylization subsystem170determines that the stylization subsystem170has received the initial design262, then the method600proceeds to step606. At step606, the stylization subsystem170sets the current design212equal to the initial design262. The method600then proceeds directly to step610.
If, however, at step604, the stylization subsystem170determines that the stylization subsystem170has not received the initial design262, then the method600proceeds directly to step608. At step608, the generation application174generates any number of current designs212based on the synthesis configuration164. The method600then proceeds to step610. At step610, for each of the current designs212(x), the classification engine220generates the characterization information (e.g., the style distribution222(x)) based on the style model(s)132. At step612, for each of the current designs212(x), the comparison engine230generates the style score232(x) based on the associated characterization information and the target style specification166. At step614, the stylization subsystem170determines whether to continue iterating. The stylization subsystem170may determine whether to continue iterating based on any number and type of completion criteria. If, at step614, the stylization subsystem170determines to continue iterating, then the method600proceeds to step616. At step616, the stylization subsystem170determines whether the stylization subsystem170received the initial design262. If, at step616, the stylization subsystem170determines that the stylization subsystem170received the initial design262, then the method600proceeds to step618. At step618, the generation application174modifies the current design(s)212based on the style scores232to generate new current design(s)212. The method600then returns to step610, where the classification engine220generates the characterization information for each of the current designs212based on the style model(s)132. If, however, at step616, the stylization subsystem170determines that the stylization subsystem170did not receive the initial design262, then the method600proceeds directly to step620. At step620, the generation application174generates any number of new current design(s)212based on the synthesis configuration164and the style scores232. 
The method600then returns to step610, where the classification engine220generates the characterization information for each of the current designs212based on the style model(s)132. The stylization subsystem170continues to cycle through steps610-620until the stylization subsystem170determines to cease iterating. If, at step614, the stylization subsystem170determines to cease iterating, then the method600proceeds directly to step622. At step622, the stylization subsystem170transmits each of the current designs212as a different stylized design182to a software application (e.g., the workflow subsystem150) for any amount and type of curation, design, and/or manufacturing activities. The method600then terminates. In sum, the disclosed techniques may be used to efficiently generate and evaluate designs that reflect a target style. In one embodiment, a workflow subsystem provides a design graphical user interface (GUI) that enables a stylization workflow. The stylization workflow includes a training phase, an inspiration phase, a design generation phase, and a curation phase. In the training phase, a training application trains a style model to classify the styles of designs based on a training database of existing designs and style labels that identify different styles. After training, the style model maps a design to a style distribution that estimates the probabilities that the design belongs to any number of the styles defined via the style labels. In the inspiration phase, the workflow subsystem interacts with a designer via a GUI to determine a target style specification and a synthesis configuration. The target style specification expresses any number of stylistic preferences based on the style labels. The synthesis configuration specifies any number of functional goals and constraints that are not directly related to style.
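The classify-score-regenerate loop of method600(steps610-620) can be sketched end to end under toy assumptions. The single "smoothness" feature, the stand-in style model, and the completion-criteria values below are all invented for illustration:

```python
import random

random.seed(2)

def style_model(design):
    # Stand-in for the trained model: probability that the design
    # belongs to the (single, positive-target) style.
    p = max(0.0, min(1.0, design["smoothness"]))
    return {"target": p, "other": 1.0 - p}

def style_score(distribution):
    return distribution["target"]

def generate(design, score):
    # Perturb the design; the step size shrinks as the score improves.
    step = (1.0 - score) * 0.2
    return {"smoothness": design["smoothness"] + random.uniform(0.0, step)}

design = {"smoothness": 0.1}                  # initial design
score = style_score(style_model(design))      # classify and score
for _ in range(200):                          # completion criteria:
    if score >= 0.9:                          # minimum style score or
        break                                 # maximum iteration count
    design = generate(design, score)          # regenerate
    score = style_score(style_model(design))  # classify and score again

stylized_design = design
```

The loop terminates either on the score threshold or on the iteration cap, mirroring the completion criteria (e.g., a maximum number of iterations, a minimum style score) described for the stylization subsystem.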
In the design generation phase, a stylization subsystem executes an iterative design process based on the style model, the target style specification, and the synthesis configuration. The stylization subsystem includes, without limitation, a generation application and an evaluation application. In a first iteration, the generation application executes any number of optimization algorithms to generate a current design set based on the synthesis configuration. For each design included in the current design set, the evaluation application computes a style score based on the style model and the target style specification. In each subsequent iteration, the generation application generates a new current design set based on the style scores and the synthesis configuration. The evaluation application then computes the style scores for the new current design set. When a completion criterion (e.g., a maximum number of iterations, a minimum style score, etc.) is met, the stylization subsystem transmits each of the current designs included in the current design set as a stylized design to the workflow subsystem. In the curation phase, the workflow subsystem interacts with a designer via the GUI to determine a curation style specification that specifies any number of stylistic preferences based on the style labels. For each of the stylized designs, the evaluation application computes a curation score based on the style model and the curation style specification. Subsequently, a curation engine performs any number of filtering, sorting, and plotting operations based on the curation scores to enable the designer to efficiently select one or more of the stylized designs as production design(s). At any point in the design stylization workflow, the workflow subsystem allows the designer to add any number of the stylized designs and/or any other designs along with associated style label(s) to the training database.
The workflow subsystem then re-trains the style model based on the updated training database. In this manner, the workflow subsystem can continually improve the accuracy/performance of the style model. At least one technical advantage of the disclosed techniques relative to the prior art is that, unlike prior art approaches, the workflow subsystem provides an automated workflow for generating and evaluating designs based on target styles. Each target style may be associated with a sense of character, an identity (e.g., a corporate identity), a cultural/social background, and/or a manufacturing commonality (e.g., a manufacturing machine, a manufacturing tool, a manufacturing tool set, a manufacturing method, etc.). For example, a target style could encapsulate aesthetic traits associated with a particular company as well as commonalities between a set of parts that can be manufactured efficiently with a particular CNC milling machine. In some embodiments, a GUI allows target style(s) to be specified and a machine-learning model is used to characterize designs based on the specified target style(s). By contrast, prior art techniques provide neither GUIs that enable style-related input nor mechanisms that effectively account for style-related input. Because the workflow subsystem can substantially increase the number of designs that can be generated and evaluated based on the target style in a given amount of time, relative to prior art approaches, the overall quality of the design ultimately selected for production can be improved. Additionally, novice designers can implement the automated workflow successfully without assistance from more experienced designers. These technical advantages provide one or more technological advancements over the prior art approaches. 1.
In some embodiments, a computer-implemented method for generating designs that accounts for stylistic preferences comprises computing first characterization information based on a first design and a trained machine-learning model that maps one or more designs to characterization information associated with one or more styles; computing a style score based on the first characterization information and a target style that is included in the one or more styles; and generating a second design based on the style score, wherein the second design is more representative of the target style than the first design. 2. The computer-implemented method of clause 1, wherein the trained machine-learning model comprises a binary classification model, a multiclass classification model, or a regression model. 3. The computer-implemented method of clauses 1 or 2, wherein the trained machine-learning model is trained based on a plurality of designs associated with a first class of objects, and the first design is associated with a second class of objects. 4. The computer-implemented method of any of clauses 1-3, further comprising performing one or more data mining operations to acquire training data; and executing one or more unsupervised learning algorithms to generate the trained machine-learning model based on the training data. 5. The computer-implemented method of any of clauses 1-4, wherein generating the second design comprises executing a multi-objective optimization algorithm based on the style score, a first objective that is related to the style score, and a second objective that is not related to the style score. 6. The computer-implemented method of any of clauses 1-5, wherein the second objective is related to at least one of physical performance, mechanical performance, environmental impact, energy efficiency, ergonomics, manufacturing time, manufacturing cost, and running cost. 7. 
The computer-implemented method of any of clauses 1-6, wherein generating the second design comprises executing a gradient-based optimization algorithm based on the style score and the first design. 8. The computer-implemented method of any of clauses 1-7, wherein generating the second design comprises modifying the first design based on the style score and at least one of a topology optimization algorithm, a parametric optimization algorithm, a stochastic optimization algorithm, an evolutionary optimization algorithm, and a constrained shape reconstruction algorithm. 9. The computer-implemented method of any of clauses 1-8, wherein computing the style score comprises determining a first probability included in the first characterization information based on the target style; determining that the target style is a positive target; and increasing the style score based on the first probability. 10. The computer-implemented method of any of clauses 1-9, wherein the target style is associated with at least one of a sense of character, a corporate identity, a cultural background, a manufacturing tool, and a manufacturing method. 11. In some embodiments, one or more non-transitory computer readable media include instructions that, when executed by one or more processors, cause the one or more processors to generate designs that account for stylistic preferences by performing the steps of computing first characterization information based on a first design and a trained machine-learning model that maps one or more designs to characterization information associated with one or more styles; computing a style score based on the first characterization information and a first style preference that is associated with at least a first style included in the one or more styles; and generating a second design based on the style score, wherein the second design is more representative of the first style preference than the first design. 12. 
The one or more non-transitory computer readable media of clause 11, wherein the first characterization information comprises a probability distribution across the one or more styles, a Boolean value, or a particular style included in the one or more styles. 13. The one or more non-transitory computer readable media of clauses 11 or 12, wherein the trained machine-learning model is trained based on a plurality of designs associated with a first class of objects, and the first design is associated with a second class of objects. 14. The one or more non-transitory computer readable media of any of clauses 11-13, further comprising performing one or more data mining operations to acquire training data; and executing one or more unsupervised learning algorithms to generate the trained machine-learning model based on the training data. 15. The one or more non-transitory computer readable media of any of clauses 11-14, wherein generating the second design comprises executing a multi-objective optimization algorithm based on the style score, a first objective that is related to the style score, and a second objective that is not related to the style score. 16. The one or more non-transitory computer readable media of any of clauses 11-15, wherein the second objective is related to at least one of physical performance, mechanical performance, environmental impact, energy efficiency, ergonomics, manufacturing time, manufacturing cost, and running cost. 17. The one or more non-transitory computer readable media of any of clauses 11-16, wherein generating the second design comprises executing a gradient-based optimization algorithm based on the style score and the first design. 18. 
The one or more non-transitory computer readable media of any of clauses 11-17, wherein computing the style score comprises determining that the first style is a negative target based on the first style preference; determining a first probability included in the first characterization information based on the first style; and decreasing the style score based on the first probability. 19. The one or more non-transitory computer readable media of any of clauses 11-18, wherein the first style is characterized by at least one of an aesthetic trait and a manufacturing-related property. 20. In some embodiments, a system for generating designs that accounts for stylistic preferences comprises one or more memories storing instructions; and one or more processors that are coupled to the one or more memories and, when executing the instructions, are configured to compute first characterization information based on a first design and a trained machine-learning model that maps one or more designs to characterization information associated with one or more styles; compute a style score based on the first characterization information and a target style that is included in the one or more styles; and execute at least one optimization algorithm to generate a second design based on the style score, wherein the second design is more representative of the target style than the first design. Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present invention and protection. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. 
Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module,” a “system,” or a “computer.” In addition, any hardware and/or software technique, process, function, component, engine, module, or system described in the present disclosure may be implemented as a circuit or set of circuits. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine. The instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
DETAILED DESCRIPTION

FIG. 1 is a block diagram of an application 100 for preparing structured datasets for ML training, in accordance with aspects of the present disclosure. Often, training data may be processed to prepare the data for use in training ML systems. For example, the training data may be received in a tidy form, with each column representing a particular feature and each row representing a particular observation. The data may be transformed to extract or enhance the information available in the data set, for example, through numerical encoding, normalization, Boolean conversion, infill, etc. Additionally, one or more datasets may be separated into subsets. For example, labels may be split out into a separate set associated with the other data sets, and the training set may be split between a training data set and one or more validation data sets. In certain cases, test data sets may be received separate from the training data set and transformed. In certain cases, the transformations applied to the test data sets may be substantially consistent with those applied to the training data set. After the initial data sets are prepared, the data sets may be used to train the target ML system. Post training, it may be desirable to be able to consistently prepare additional data sets for the target ML system. As used herein, consistent preparation, consistent processing, or consistent formatting refers to applying transformations in a manner substantially consistent with those applied to the training data set, or to data sets that have been so transformed. These additional data sets may be used, for example, to generate predictions using the target ML system, perform additional training of the target ML system, or train a new ML model with consistently formatted data.
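The tidy-form receipt and splitting described above can be illustrated with a minimal sketch. This is not part of the disclosed embodiments; the function and field names (`split_tidy`, `label_col`) are hypothetical, and the split ratio is an arbitrary default.

```python
import random

def split_tidy(rows, label_col, valid_ratio=0.2, seed=0):
    """rows: list of dicts, one observation per row (tidy form).
    Separates labels from features and partitions off a validation set."""
    labels = [r[label_col] for r in rows]
    features = [{k: v for k, v in r.items() if k != label_col} for r in rows]
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_valid = int(len(rows) * valid_ratio)
    valid_idx = set(idx[:n_valid])
    train = [(features[i], labels[i]) for i in idx[n_valid:]]
    valid = [(features[i], labels[i]) for i in sorted(valid_idx)]
    return train, valid
```

In practice the same partitioning would also be applied consistently to any ID sets associated with the rows.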
As the initial training data set processing can involve a significant number of steps and application of specific data transformation processes, it would be desirable to have a system capable of streamlining the processing of additional data in a manner consistent with how the initial training data set was processed. Referring to the current example, an initial training data set 102 may be received by an ML data set transformation process 106. In certain cases, the initial training data set 102 may be received along with an initial test data set 104. The ML data set transformation process 106 performs a series of transformation steps on the received data sets and outputs the processed training data set 108. In some cases, a consistently processed validation data set 110 may also be output. In some cases, a processed test data set 112 may also be output if a test data set 104 was received and/or requested. In cases where the test data set 104 is not provided, the processed test data set 112 may be based on portions of the processed training data set 108. The processed training data set 108 and, in some cases, the processed validation data set 110 may be used by the target ML system 118 for training the ML system 118. In cases where the test data set 104 was passed to the initial ML data set transformation process 106, the returned processed test data set 112 may be used by the target ML system 118, for example, for making predictions based on the test data set 104. In some cases, the training and/or prediction generation of the target ML system 118 using processed data sets may be supplemented by external corresponding data sets not processed by the initial ML data set transformation process 106 and/or the processed test data set 112, for example if labels are not passed or, as another example, if a row has a corresponding image file.
In such cases, the pairing of processed data sets with external corresponding data sets may be supported by an ID set containing row index information, which may be returned as a separate information type for the returned processed data sets. The ML system 118 may be any ML system and may be separate from the ML data set transformation process 106. The series of transformation steps performed by the ML data set transformation process 106 may be user defined or, in certain cases, automatically determined by the ML data set transformation process 106 based on properties of the data in the received training data set 102. The ML data set transformation process 106 tracks the specific transformations applied to the data sets and outputs parameters of these transformations in a separate metadata database 114. Additionally, a feature importance results report 116 may also be output. In certain cases, the feature importance results report 116 may be an informational report, for example to the user. In certain cases, results of a feature importance evaluation may also be included in the metadata database 114. The metadata database 114 may be provided to the ML data set transformation process 122 to process an additional test data set 120. Consistent processing of this additional test data set 120 may be performed based on information from the metadata database 114 without having to specify the specific transformations. Where the ML data set transformation process 106 determined specific transformation processes to apply to the initial training data set 102, these specific transformation processes may be applied based on the metadata database 114 without redetermining those specific transformation processes and/or redetermining transformation parameters inferred from properties of the training data.
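The derive-once, replay-later pattern described above can be sketched as follows. This is an illustrative reduction, not the disclosed metadata database format: a z-score parameterization is derived from the training column and recorded, then reapplied to additional data without re-derivation.

```python
import statistics

def fit_zscore(train_col, metadata, key):
    """Derive normalization parameters from the training column and
    record them in the metadata store under the source column key."""
    mean = statistics.mean(train_col)
    std = statistics.pstdev(train_col) or 1.0
    metadata[key] = {"transform": "zscore", "mean": mean, "std": std}
    return [(v - mean) / std for v in train_col]

def apply_from_metadata(col, metadata, key):
    """Consistently process additional data using only the recorded
    parameters -- nothing is re-derived from `col` itself."""
    p = metadata[key]
    return [(v - p["mean"]) / p["std"] for v in col]
```

The point of the pattern is that `apply_from_metadata` never inspects the new data's distribution, so later sets are transformed exactly as the training set was.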
An additional test data set 120 may thus be processed by an additional ML data set transformation process 122, and the returned processed additional test set 124 may be used by the target ML system 118, for example, for making predictions based on the additional test data set 120. In certain cases, if the scale of data in an original training data set 102 exceeds a resource limit, such as a memory limit, run time limit, user defined time constraint, etc., there may be a desire to partition the original training data set into an initial training set and one or more additional training data sets. Information in the initial training data set may be used to generate and populate a metadata database 114 indicating the transformations applied to obtain the processed training data set 108. Consistent transformations may then be applied to the remaining partition or partitions of the original training data set by passing this data to the additional ML data set transformation process 122 as an additional test data set 120, in conjunction with the returned metadata database 114, to process the remainder of the original training data set to be returned as a processed additional test data set 124. Similarly, when the scale of data in a test data set 104 or additional test data set 120 exceeds the resource limit, that data set may be partitioned for iterated application of consistent processing in the additional ML data set transformation process 122.
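The partition-and-iterate idea can be reduced to a short sketch (hypothetical names; the resource-limit check itself is omitted): an oversized set is processed chunk by chunk with the same recorded transformation applied to each chunk.

```python
def process_in_chunks(rows, chunk_size, transform):
    """Apply one consistent `transform` callable to successive
    partitions of an oversized data set and reassemble the result."""
    out = []
    for start in range(0, len(rows), chunk_size):
        out.extend(transform(rows[start:start + chunk_size]))
    return out
```

Because `transform` carries its parameters from the initial run, every chunk is processed identically regardless of partition boundaries.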
Referring to the process flow of FIG. 1, an alternate means for consistent processing of the additional test data set of block 120 may be achieved without the use of the metadata database of block 114 and without the use of the additional ML data set transformation process of block 122. The additional test data set of block 120 may be passed to the initial ML data set transformation process of block 106 as a test data set of block 104, in conjunction with the original training data set for which consistent processing is desired as block 102, returning a processed test data set 112 comparable to the processed additional test data set of block 124.

FIG. 2 is a flow diagram for a technique for processing training data 200, in accordance with aspects of the present disclosure. Throughout the technique for processing training data 200, implementation parameters and information about columns, such as which columns are present in which data sets, how columns are associated, and how columns are processed, along with steps of processing, derived transformation parameters, and metrics regarding columns, may be stored in a metadata database. The metadata database generally captures information regarding the categories of transformations, actions performed on the data sets, and information about the relationships between source columns and derived columns to help provide consistent processing of later acquired data sets.
Examples of potential metadata entries associated with distinct derived columns include a map or other indication of the relation between derived columns and originating source columns, associations between derived columns and transformation functions applied to those derived columns, a root category of transformations applied to a derived column's source column, a last category of transformation applied to a derived column, transformation parameters applied to a derived column which may have been derived from properties of that column's derivation in the training set, and/or a trained model for cases where infill may have been predicted for those columns using ML infill methods. The metadata database may also contain entries to support retrieval of transformation parameters for later processing of additional data. This processing of additional data sets may use column labels of source columns as a key. Examples of potential metadata entries associated with distinct source columns include a root category associated with a source column, which derived columns originated from a source column, and/or a derived column label which, in some instances, may allow accessing training set derived parameters of transformation functions from the metadata database. The metadata database may also contain transformation parameters which, in certain cases, may be applicable to more than one derived column or source column. Examples of potential metadata entries associated with such parameters include any user passed parameters or definitions applied in the initial processing of the training data set, results of a feature importance evaluation, a trained PCA model, if applicable, original source column labels, returned derived column labels, data to support the decoding of predictions from a target ML system trained on the returned sets, software versioning or identification information related to the processing of the training data set, or other such information.
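One possible shape for such entries is sketched below. All field names and values here are hypothetical illustrations of the categories of entries just described, not a disclosed schema.

```python
# Hypothetical metadata layout: derived-column entries point back to
# source columns and carry fit parameters; source-column entries list
# their derived columns, keyed by source column label.
metadata = {
    "derived_columns": {
        "price_zscore": {
            "source_column": "price",
            "root_category": "numeric",
            "last_transform": "zscore",
            "parameters": {"mean": 310000.0, "std": 42000.0},
        },
    },
    "source_columns": {
        "price": {"root_category": "numeric", "derived": ["price_zscore"]},
    },
}

def params_for(source, metadata):
    """Retrieve fit parameters for every column derived from `source`,
    using the source column label as the lookup key."""
    derived = metadata["source_columns"][source]["derived"]
    return {d: metadata["derived_columns"][d]["parameters"] for d in derived}
```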
At block 202, a tabular training data set is received. This training data set may be passed in, for example, as a file, set of files, references, or another input format. The training data may be organized as a table, with a specific observation of the training data in each row, along with multiple associated feature columns with a single column per feature. For example, the training data may include one or more columns defining aspects of the data set and include cells containing data items. In certain cases, the data sets may include one or more columns designated as labels. These label columns generally identify a specific aspect of the feature the target ML system may be trained on. As an example, for features such as a set of pixel activations from the image of a handwritten character, the label may be the character that the handwritten character is supposed to be recognized as. As another example, for features such as a collection of house properties, the label may be the price of the house sale transaction. Thus, the labels define the ground truth associated with the set of features. In certain cases, the label columns may be included as adjoined designated columns to train and/or test data sets. In certain cases where labels may not be available, labels may be automatically designated, for example, via a pattern or based on defined permutations of features. Other columns may be defined, such as an identifier or index column, as well as one or more columns tracking any transformations that may be applied to the feature. In certain cases, certain columns, such as the identifier and/or index columns, may be preserved as unedited, read-only columns in a set which may serve as a store for columns which are to be excluded from transformations, or excluded from deriving infill or feature importance with predictive models. At block 204, a pre-designated test data set may also be provided.
The test data set is typically similar to the training data set but comprises data used to verify an ML system after the ML system has been trained on the training data set, or to generate predictions from a model to be trained on the returned training data. In certain cases, the receiving of the test data set at block 204 may be omitted. At block 206, a feature importance evaluation may be performed for the features and the corresponding labels. In certain cases, the feature importance evaluation may be initiated or omitted based on, for example, an indication from a user. The feature importance evaluation measures impact to ML predictive accuracy associated with the features and may be based on derived properties from source columns, such as transformations between single or multiple column sets. In certain cases, the feature importance evaluation may return a feature importance value associated with each column indicating a predictive significance for the associated column. Feature importance evaluation is discussed in detail below in conjunction with FIG. 3. At block 208, column labels of the source columns are identified. In certain cases, when columns are provided without labels, block 208 may include the automated assignment of column labels to the columns of the train and/or test data sets. For example, the automated assignment of column labels may be based on the assumption that the order of columns in the training data set, test data set, and/or additional test data set to be processed is consistent. The set of column labels may be stored in the metadata database. If both the training data set and test data set are provided, the data sets may be cross validated to ensure that the training data and test data are consistent. For example, the two data sets may be compared to ensure they have a consistent number of columns, consistent column labels, and/or consistent data set properties.
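A minimal form of the cross validation just described might look like the following sketch (hypothetical names; which columns are permitted to be absent from the test set would in practice be configurable).

```python
def check_consistent(train_cols, test_cols, allow_missing=("label",)):
    """Compare train/test column labels: report train columns missing
    from test (excluding permitted omissions such as labels) and any
    unexpected extra test columns."""
    missing = [c for c in train_cols
               if c not in test_cols and c not in allow_missing]
    extra = [c for c in test_cols if c not in train_cols]
    return {"missing": missing, "extra": extra,
            "ok": not missing and not extra}
```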
Consistency between the training data and test data with respect to certain columns helps allow the test data set to be evaluated by the trained ML system in a manner consistent with the training data set. In certain cases, block 208 may include the identification or automated assignment of column labels from the columns of the train and/or test data sets. In certain cases, certain columns may be designated for inclusion with a certain data set without being included in other data sets. For example, certain labels or identifier columns may be included with just the training data set and omitted from the test data set. In certain cases, whether specific columns are included with certain data sets may be user configurable. At block 210, the data sets are separated into a training data set and, potentially, validation data sets and/or test data sets. Each of the training, validation, and test sets may include consistently partitioned and associated label and ID sets. Label columns may be placed in one or more label sets corresponding to their respective data sets. Any identifier or other read-only columns may be placed in one or more ID sets corresponding to their respective data sets. Where a test data set is included separately from the training data set, the test data set remains the separate test data set, and the validation data set may be defined based on portions of the training data set. In some cases, a validation partition from the training set may be sampled from sequential points, for example from the top or bottom rows of the training set. In some cases, a second validation set may consist of randomly sampled rows partitioned from the first validation set, which was itself partitioned from sequential rows of the training set. In some cases, one or more validation sets may consist of randomly sampled rows partitioned from the training set. In some cases, a ratio of training data to be partitioned for one or more validation sets may be based on user passed parameters.
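One of the partitioning options just listed, a first validation set taken from sequential tail rows with a second validation set randomly sampled out of it, can be sketched as below (illustrative names and default ratios only).

```python
import random

def partition_validation(rows, ratio=0.2, second_ratio=0.5, seed=0):
    """First validation set from sequential bottom rows of the training
    set; second validation set randomly sampled out of the first."""
    n = int(len(rows) * ratio)
    train, valid1 = rows[:-n], rows[-n:]
    rng = random.Random(seed)
    pick = set(rng.sample(range(len(valid1)),
                          int(len(valid1) * second_ratio)))
    valid2 = [v for i, v in enumerate(valid1) if i in pick]
    valid1 = [v for i, v in enumerate(valid1) if i not in pick]
    return train, valid1, valid2
```

Any label and ID sets would be partitioned with the same indices so the sets stay aligned.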
In some cases, the partitioning of a validation data set from the training data set may be omitted. At block 212, each column of the data sets may be looped through to check whether infill is needed for cells of the column. In an alternate configuration, the loop through the columns of block 212 may be parallelized. Data infill generally helps fill holes in a dataset. For example, cells of each column may be checked to verify whether a cell in an existing row is empty or improperly formatted for a root category of data and should be infilled. An indication of whether infill is needed for a cell may be saved, for example, for use in conjunction with an infill process. As an example, a Boolean map of the rows/columns may be saved to a metadata database, the Boolean map indicating whether infill is needed. In some cases, preparation of this Boolean map identifying, for example, rows subject to infill for a column may be incorporated as a primitive category entry of transformation in the library of transformation categories, per the example of transformation primitives given in Table 1 below, such that the collections are returned as columns in the returned training, validation, and/or test sets. In some cases, activation of this Boolean mapping of infill rows as primitive category entries in the transformation trees may be defined based on, for example, user input, such as an input argument. At block 214, a column may be evaluated to determine a root category for the column based on properties of data in the column. For example, data in a first column may be evaluated to determine the two most common variable types of data items in the column. Based on the determined variable types, potential categories of data transformations and infill techniques may be assigned.
For example, a column with mostly numerical data types may be a candidate for numerical data processing techniques, while a numerical column with all positive values may be a candidate for a power law transformation. Similarly, a column including categorical information may be a candidate for one hot encoding or ordinal encoding, and a column with mostly date-time data may be a candidate for a time series technique. In certain cases, statistical processing techniques may be determined, for example, based on an evaluation of distribution properties of data in a column. Similarly, techniques for extracting features from string sets may be determined based on properties of the sets. In certain cases, the second most common data type may be included in the basis for determining processing and infill methods when the most common type will be subject to infill. At block 216, feature engineering transformations based on the identified root category for each column may be applied. Feature engineering generally prepares a data set for use with ML systems by, for example, processing the training data set and/or test data sets into formats that can be readily or more efficiently used by ML systems, shaping or encoding the data sets through one or more data transformations. In certain cases, the feature engineering techniques may be based on the determined root categories from block 214, or may be defined based on, for example, user input, such as a set of input arguments. Multiple transformations may also be applied based on a transformation tree family of primitives. The transformation tree families may be predefined or based on user inputs.
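The root-category evaluation of block 214 can be reduced to a toy heuristic like the one below. This is a simplified sketch of the idea of assigning a category from the most common value types, with hypothetical category names; it does not capture the second-most-common-type handling described above.

```python
from collections import Counter
import datetime

def infer_root_category(column):
    """Assign a root category from the most common value type in a
    column, skipping missing cells."""
    kinds = []
    for v in column:
        if v is None:
            continue
        if isinstance(v, bool):
            kinds.append("boolean")
        elif isinstance(v, (int, float)):
            kinds.append("numeric")
        elif isinstance(v, datetime.datetime):
            kinds.append("datetime")
        else:
            kinds.append("categorical")
    top = Counter(kinds).most_common(1)
    return top[0][0] if top else "empty"
```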
These feature engineering transformations may be pulled from a library of feature transformations and may include, for example, one hot encoding, time series segregation by time scale (e.g., by months, days, hours, minutes, or seconds), time series bins segregation (e.g., identification of business hours, weekends, or holidays), z-score normalization, min-max scaling, power law transform such as via the box-cox method, bins segregation, or mean absolute deviation scaling, etc. As an example, one hot encoding may spread the categories in a column of the training set to multiple columns and assign a binary value to each cell of the columns. As another example, for numerical data, z-score normalization based on the mean and standard deviation of the train set may be applied. As another example, z-score normalization may be supplemented by adding one or more columns containing binned Boolean identifiers indicating standard deviations of a particular value from a mean value. As another example, min-max scaling may be applied based on the minimum and maximum values of the train set. In certain cases, user provided sets of transformations may also be applied, which may incorporate transformation functions from a built-in library and may also incorporate user defined transformation functions. The feature engineering methods may also incorporate a preliminary infill technique, for example, to facilitate subsequent training of a predictive model for ML infill derivations in block 220. Feature engineering transformations are discussed in more detail below in conjunction with FIG. 4 and FIG. 5. Information indicating the feature engineering techniques applied to the columns and any parameters used to apply those techniques is output as a part of the metadata database. Saving and outputting the metadata database helps allow for consistent processing between multiple datasets across multiple runs and timeframes.
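The one hot encoding example just given, categories from a train column spread to binary columns and then applied consistently to later data, can be sketched as follows (hypothetical names; unseen test categories here simply encode to all zeros, one possible convention).

```python
def fit_onehot(train_col):
    """Derive the encoding's category list from the train set only."""
    return sorted(set(train_col))

def apply_onehot(col, categories):
    """Spread each value across one binary column per train category.
    Categories unseen in training encode as all zeros."""
    return [[1 if v == c else 0 for c in categories] for v in col]
```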
For example, an initial training data set and, in some cases, an initial test data set may be processed for initially training an ML system. Once the ML system is trained, additional data may be provided to the ML system to, for example, generate predictions using the ML system. Additional data may be collected after training, and this later acquired data may be processed in a manner similar to the initial training data sets to provide consistently formatted data for the ML system, such as to train a new ML system with consistently formatted data to iterate on the ML system techniques in isolation from random noise effects of data set processing, or to generate predictions from the trained ML system. At block 218, labels may be processed. Labels associated with the different data sets and separated at block 210 may be processed via feature engineering transformations in a manner similar to that described above for data at block 216 and below with respect to FIG. 3. Labels may also be assigned to various categories based on, for example, whether the label has a single numerical target, a single categorical target, or a multi-column categorical target set. In certain cases, infill may be omitted as to label columns, and any rows associated with a blank label may be omitted as well. Metadata associated with the transformations applied to the labels may also be saved and output in the metadata database, for example, to support decoding of predictions from a trained ML system. At block 220, data infilling may be applied, for example, using techniques such as a mean for a numerical set, the most common value for a binary set, and Boolean identifiers for categorical data, based on the indication of whether infill is needed for a particular row in columns derived from a source column.
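The mean-infill option for a numerical set, paired with the Boolean needs-infill map of block 212, can be sketched in a few lines (illustrative only; here only `None` cells are treated as needing infill).

```python
import statistics

def infill_numeric(column):
    """Mark missing cells, then fill them with the mean of the observed
    values. Returns the filled column and the Boolean infill map."""
    needs = [v is None for v in column]
    observed = [v for v in column if v is not None]
    fill = statistics.mean(observed) if observed else 0.0
    filled = [fill if m else v for v, m in zip(column, needs)]
    return filled, needs
```

An ML-based infill would replace the single `fill` value with per-row predictions from a model trained on the remaining columns, but the returned Boolean map plays the same role.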
In certain cases, the infilling operations of block 220 may be omitted, such as when no update is desired from any infill methods that may have been applied in block 216 or when the data set does not have any cells that need infill. In certain cases, a user may assign specific infill methods to distinct columns. In certain cases, an automated ML based infill may be applied which attempts to predict infill values based on one or more ML models trained on the rest of the set (i.e., the train set). In addition to filling in the indicated cells, column specific sets of training data, labels, and feature sets for the derivation of the technique used for the infill may be assembled. Where ML based infill is used, the column specific ML models used for the columns may also be output as a part of the metadata database. ML based infill is described in more detail below in conjunction with FIG. 6 and FIG. 8. In certain cases, infill may be applied to both, or either of, the training and test data sets. When applied to both data sets, the data infill techniques determined for the training data set may also be applied to the test data set. Metadata associated with the infill applied to the columns may also be saved and output in the metadata database. In an alternate configuration, the column evaluation for data infill of block 212 may be performed in conjunction with block 220. At block 222, dimensionality reduction of the data sets may be performed. The dimensionality reduction may be based on, for example, evaluated feature importance metrics, Principal Component Analysis (PCA), or both. In certain cases, the dimensionality reduction operations may be omitted. In certain cases, a user may elect to reduce the dimensionality of the data sets. In such cases, user provided parameters may be received indicating how column trimming may be performed. For example, a parameter may be provided indicating a percentage n of the total number of columns to retain.
In such cases, the column feature importance values associated with each column may be assessed and the n percent of the total number of columns having the highest feature importance values retained. As another example, a threshold feature importance metric value may be provided, and in such cases, the feature importance value associated with each column may be compared to the threshold feature importance value to determine whether to keep the respective column. Column removal may also include updating the metadata database for later consistent processing of test data. Dimensionality reduction may also be applied to either the training data set or both the training and test data sets based on PCA, a type of unsupervised machine learning, per user provided parameters or default methods. In certain cases, a type of PCA to be applied to the data sets may be user specified. In other cases, a type of PCA to be applied to the data sets, such as, for example, PCA, Sparse PCA, or Kernel PCA, may be based on properties of the training data set. For example, where the data set includes all non-negative numbers, Kernel PCA, Sparse PCA, or PCA may be applied. When the data set includes negative numbers, Sparse PCA or PCA may be applied. In certain cases, PCA may automatically be applied to a dataset based on, for example, a ratio of the number of columns to the number of data items. For cases where PCA is applied automatically or based on properties of the training data set, the PCA model is trained based on the training data set, and the trained PCA model is used to transform either the training data set or both the training and test data sets to a new, reduced number of columns, which may include a different assigned column label naming convention. In certain cases, the PCA transformation may exclude application to Boolean or ordinal encoded columns, for example based on user passed parameters.
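The two column-trimming options described above, retaining the top fraction of columns by feature importance or retaining columns above a threshold, can be sketched as one small function (hypothetical name and parameters).

```python
def trim_columns(importances, retain_fraction=None, threshold=None):
    """importances: {column label: feature importance value}. Keep the
    top fraction of columns by importance, or columns meeting the
    threshold; returned in descending-importance order."""
    ranked = sorted(importances, key=importances.get, reverse=True)
    if retain_fraction is not None:
        keep = max(1, int(len(ranked) * retain_fraction))
        return ranked[:keep]
    return [c for c in ranked if importances[c] >= threshold]
```

The set of dropped columns would also be recorded in the metadata database so later test data can be trimmed identically.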
In certain cases, the PCA model used on the training data set may be saved to, and output with, the metadata database. At block 224, preparation of the data for oversampling may be performed. In certain cases, the preparation of the data for oversampling may be omitted. Oversampling helps increase the number of rows for labels having a lower frequency in the training data, which may benefit the training operation for a target ML system. In certain cases, oversampling may be based on a collection of sets derived from labels and based on determined label categories, such as a collection of one-hot encoded columns derived from a categorical label set, or a binary encoded column from a binary label set. In certain cases, oversampling may be based on label categories derived from binned groupings of a label set, such as, for example, number of standard deviations from the mean bins or label value powers of 10 bins for a numeric label set. A count for each label class may be collected or, when oversampling based on binned sets such as a number of standard deviations from the mean or bins based on numeric label value powers of 10, a count for each bin may be collected. A multiplier is derived for each label class or bin based on a ratio between the count of the label class or bin and the max count among the label classes or bins. For each label class or bin, the corresponding rows of the associated training set, ID set, and labels set may be identified, copied a number of times based on the associated multiplier, and the copied rows attached to the associated training set, ID set, and/or labels set. At block 226, if any validation data set was partitioned from the training data set in block 210, the partitioned validation data set may be consistently processed based on information in the metadata database. Processing the validation data set may be performed as discussed in detail below in conjunction with FIG. 7.
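The block 224 multiplier-and-copy step can be sketched as below. This is an illustrative simplification: it assumes the multiplier is the max class count divided by the class count (rounded down), and it copies rows of the training and label sets only.

```python
from collections import Counter

def oversample(rows, labels):
    """Duplicate rows of under-represented label classes so each class
    approaches the count of the most frequent class."""
    counts = Counter(labels)
    target = max(counts.values())
    out_rows, out_labels = [], []
    for r, y in zip(rows, labels):
        copies = target // counts[y]   # multiplier from the count ratio
        out_rows.extend([r] * copies)
        out_labels.extend([y] * copies)
    return out_rows, out_labels
```

Corresponding ID set rows would be copied with the same multipliers to keep the sets aligned.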
Processing the validation data set may be based on information in the metadata database, rather than processed in conjunction with the training data set. By processing this validation data separately from the training data, the potential for data leakage between training and validation sets through the derivation of transformation parameters for the transformation functions may be avoided. The validation data set or sets may be used after processing to, for example, tune hyperparameters or perform final model validation of a target ML system. In some cases, a validation set for the target ML system may instead be sourced from the processed tabular test data received in block 204. In some cases, the target ML system may not make use of a validation set. At block 228, the processed data sets and the metadata database are output. A first category of output may include the processed training data set. A second category of output may include a consistently processed validation data set, if selected. In some cases, the processed validation set output may not be returned or may be returned as an empty set. If a test data set was initially received at block 204, a third category of output may include consistently processed test data. In certain cases, if a test data set was not initially received, then the processed test data set may not be returned or may be returned as an empty set. In certain cases, the application may output multiple categories of information for each of the returned categories of information for output, such as the processed training set, validation set, and/or test set. One such category of information may include the processed data set, a second such category of information may include the corresponding ID set, and a third such category of information may include the corresponding labels set. In certain cases, the ID sets and/or labels sets may not be returned or may be returned as empty sets.
In certain cases, the rows of the returned sets may be randomly shuffled, and the corresponding labels or ID sets shuffled consistently with the rows of the returned corresponding sets. A fourth category of information may include the returned metadata database for use in subsequently processing additional data. A fifth category of information may include the feature importance results as determined in block 206, if available, returned as the feature importance evaluation results. The returned processed training data set and zero or more consistently processed validation data sets may then be used to train the target ML system in a manner consistent with the specific ML system. In cases where processed test data sets are provided, the returned consistently processed test data sets may be used, for example, to generate predictions from the target ML system.

FIG. 3 is a flow diagram illustrating a feature importance evaluation 300, in accordance with aspects of the present disclosure. The feature importance evaluation 300 may be performed based on, for example, an indication from a user, as discussed in conjunction with block 206 of FIG. 2. The feature importance evaluation 300 may be performed prior to final processing to create sets that are discarded at completion of the feature importance evaluation 300. In certain cases, after the feature importance evaluation 300, processed columns may be returned for further operations as discussed in FIG. 2. At block 302, the training data set is processed in preparation for machine learning. For example, the type of data for a particular source column may be analyzed to determine a category of data contained within the column, and transformations may be selected based on the determined category. These transformations may be selected from a library of transformations.
As a more specific example, the data within a column may be determined to be floating point numerical data and, based on this determination, a normalization transformation may be applied, and the normalized data saved into a new derived column associated with the original label. Additional derived columns may be created for additional transformations that may be applied to numerical data. In certain cases, these transformations may be specified by a user, for example, based on one or more transformation families. In certain cases, category specific suffix appendices may be added to the column labels to report the steps of transformation for each of the column labels of the resulting transformed columns. In certain cases, the preparation of data and/or labels at block302may be conducted by an implementation ofFIG.2and/orFIG.7. The predictive model may be initialized at block304. In identifying what type of ML methods are suitable for a category of labels, root category classification designations may be used to identify the type of predictive models for use. For example, the transformation category of the last transformation applied in the derivation of a target column or a target set of columns may be used to identify the type of predictive model. Examples of the types of classifications include numeric sets that will be targets for linear regression predictive algorithms, single column Boolean encoded sets targeted for classification predictive algorithms, or multi-column Boolean encoded sets that are targeted for multi-column classification predictive algorithms. Such categorization may also be used to identify how to assemble sets of training data, labels, and features used to generate predictions for the predictive methods. In certain cases, the type of ML architecture initialized model (e.g., support vector machines, random forest regression or classifier, gradient boosting, neural networks, ensembles of architectures, layered ensembles of architectures, etc.)
may be populated with one or more hyperparameters, such as may be derived based on properties of the data or by evaluation of experiments on the impact of sets of hyperparameter configurations on model accuracy. Certain ML architectures may require different parameter considerations for the type of predictive model. These parameter considerations may be based on user input indicating, for example, specific ML model parameters, or a designated type of ML architecture. The training data set may then be split into a feature importance training data set, feature importance validation data set, and corresponding labels data sets to train a predictive model at block306. In certain cases, the model accuracy on the feature importance training and validation data sets may be algorithmically monitored throughout the training operation to identify an appropriate stopping point for the training operation, for example, to avoid overfitting the model to the training set properties. In certain cases, the feature importance model training306may be repeated with multiple configurations of candidate feature engineering transformation sets such as to identify transformation configurations that increase feature importance of columns derived from a source column. After training, the predictive model may be evaluated as against the feature importance validation data set at block308to determine a first accuracy metric. Feature importance metrics may be determined to evaluate the impact on predictive accuracy from shuffling one or more target columns in the feature importance validation set to derive a new validation set evaluated against the predictive model. As an example, for a source column, a new feature importance validation data set may be obtained by looking up the derived columns associated with the source column in the column database and randomly shuffling values from the rows of the derived columns into the feature importance validation data set.
The predictive model may then be evaluated as against the new feature importance validation data set to obtain a second accuracy metric. For each column at block310, a source column specific feature importance metric may then be determined at block312by subtracting the second accuracy metric from the first accuracy metric. In an alternate configuration, the loop through the columns of block310may be parallelized. A derived column specific accuracy metric may be determined at block314by looking up the derived columns associated with the source column in the column database and randomly shuffling values in the rows of all but the current derived column into the original feature importance validation set. The predictive model may then be evaluated as against this new feature importance validation set to obtain a third accuracy metric. The derived column specific feature importance metric may then be determined by subtracting the third accuracy metric from the first accuracy metric. Based on column specific feature importance metrics, the predictive significance for a column can be determined at block316. In this example, larger source column specific feature importance metrics imply a greater relative predictive importance of the source column and smaller derived column specific feature importance metrics imply greater relative predictive importance as between derived columns originating from the same source column. The results from the feature importance evaluation are returned at block318. In certain cases, the feature importance evaluation300may be performed independent of the preparation of data for machine learning. That is, a user may elect to perform a feature importance evaluation on a data set without transforming the data set, such as that described in the context ofFIG.2. In such cases, the data set may be received from the user, such as that described in conjunction with block202ofFIG.2and passed directly to block302. 
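The two shuffle-based metrics described above can be sketched as follows, assuming (as illustrative conventions, not the platform's interfaces) a model exposing a score(X, y) accuracy method and a column database mapping each source column to the indices of its derived columns:

```python
import random

def shuffle_columns(rows, col_indices, rng):
    # return a copy of rows with the given columns independently shuffled
    shuffled = [list(r) for r in rows]
    for c in col_indices:
        col = [r[c] for r in shuffled]
        rng.shuffle(col)
        for r, v in zip(shuffled, col):
            r[c] = v
    return shuffled

def feature_importance(model, X_val, y_val, column_db, rng=None):
    rng = rng or random.Random(0)
    first = model.score(X_val, y_val)  # first accuracy metric
    source_metrics, derived_metrics = {}, {}
    for source, derived in column_db.items():
        # source column metric: shuffle every derived column of the source
        second = model.score(shuffle_columns(X_val, derived, rng), y_val)
        source_metrics[source] = first - second  # larger implies more important
        for col in derived:
            # derived column metric: shuffle all but the current derived column
            others = [c for c in derived if c != col]
            third = model.score(shuffle_columns(X_val, others, rng), y_val)
            derived_metrics[col] = first - third  # smaller implies more important
    return source_metrics, derived_metrics
```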
The feature importance evaluation proceeds as described above in conjunction with blocks302-316. At block318, the feature importance evaluation results are returned to the user, rather than, for example, back to the processing of the training data described in conjunction withFIG.2. The feature importance evaluation results may be returned to the user, for example, by saving the results into a file or other data object and returning the file or object. FIG.4is a flow diagram illustrating feature engineering transformations400, in accordance with aspects of the present disclosure. In certain cases, the feature engineering transformations can be applied to a training data set or both a training data set and test data set. The transformations performed to process a column may include maintaining entries to the metadata database. The feature engineering transformations400may also include the application of initial infill methods to missing or improperly formatted cells. Through the application of feature engineering transformations, a column or set of columns, derived from properties of the source column, may be returned. The specific feature engineering transformations applied may be based on one or more transformations defined for a root category associated with a given column. For example, a column may be categorized as having positive numerical values with high skewness distribution properties as described above with respect to block214. Based on this categorization, for example, box-cox power law and z-score normalization transformations may be applied in that order. In another configuration, these box-cox power law and z-score normalization transformations may also be supplemented by a set of bins identifying, for each of the outputted column's cell values, the number of standard deviations from the mean, and/or supplemented by a set of bins identifying the powers of 10 of the source column's numerical values.
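As a hedged sketch of such a chained configuration, the following applies a box-cox power law with a fixed lambda (an assumption; a fitted lambda could be substituted), a z-score normalization of its output, and a supplemental bin set for powers of 10, with suffix appendices reporting the transformation steps:

```python
import math

def boxcox(values, lmbda=0.5):
    # box-cox power law with a fixed, illustrative lambda
    return [(v ** lmbda - 1.0) / lmbda for v in values]

def z_score(values):
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

def transform_column(label, values):
    derived = {}
    bxcx = boxcox(values)
    derived[label + "_bxcx"] = bxcx                # box-cox output
    derived[label + "_bxcx_nmbr"] = z_score(bxcx)  # z-score of box-cox output
    # supplemental bins: powers of 10 of the source column's values
    derived[label + "_pwrs"] = [int(math.floor(math.log10(v))) for v in values]
    return derived
```

The suffix strings are assumptions standing in for the category specific suffix appendices described above.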
Such configurations associated with root categories may be based on, for example, user passed parameters. In accordance with aspects of the present disclosure, one or more transformations and an order in which to apply the transformations may be based on a predefined transformation tree utilizing defined transformation category entries assigned for each root category's transformation primitives. In certain cases, portions of the transformation tree may be defined based on information provided by the user. For example, root category transformation tree primitive entries of transformation categories and/or their associated transformation functions may be defined for incorporation of custom transformations or custom sets of transformations into the platform, for example, by a user. In certain cases, default automated root categories of transformations to be applied based on evaluated data properties of the columns may be user assigned. Table 1 below illustrates an example set of transformation primitives.

TABLE 1

Primitive      Upstream/    Generation   Column      Downstream   Upstream
               Downstream   applied to   Action      Offspring    Equivalent
Parents        Upstream     First        Replace     Yes          -
Siblings       Upstream     First        Supplement  Yes          -
Auntsuncles    Upstream     First        Replace     No           -
Cousins        Upstream     First        Supplement  No           -
Neicesnephews  Downstream   Offspring    Supplement  Yes          Siblings
Children       Downstream   Offspring    Replace     Yes          Parents
Coworkers      Downstream   Offspring    Replace     No           Auntsuncles
Friends        Downstream   Offspring    Supplement  No           Cousins

TABLE 2

Root Category

Upstream Primitives:  Entries:          Downstream Primitives:  Entries:
Parents               Category Entries  Neicesnephews           Category Entries
Siblings              Category Entries  Children                Category Entries
Auntsuncles           Category Entries  Coworkers               Category Entries
Cousins               Category Entries  Friends                 Category Entries

As an example, for a given root category, each primitive may be defined to contain entries of zero or more transformation categories. Table 2 above illustrates an example of how these transformation category entries may be populated in a root category's transformation tree.
Each category may have its own defined transformation tree, such that for a given root category, a set of transformations associated with upstream primitives is applied to the column associated with the root category. Where the upstream primitive category entry includes downstream offspring in that category's transformation tree, the downstream offspring categories are identified from the respective transformation tree of the upstream primitive category entry. Additional downstream offspring category entries of the downstream offspring categories may be similarly identified, and transformation functions associated with the one or more levels of downstream offspring are applied to the column returned from the preceding upstream primitive category entry with offspring from which the offspring primitive category entries were derived. Where a category of transformation is applied with a Supplement primitive, the preceding column upon which the transformation is applied may be left in place unaltered. Where a category of transformation is applied associated with a Replace primitive, the column upon which the transformation is applied may be subject to a deletion operation, which may include maintenance of the metadata for this and associated columns. In this example, a root category may be populated as an entry in a primitive of its own transformation tree; for example, the transformation function associated with the root category used to access the initial generation of the transformation tree for a column may not be applied to the column unless that root category is populated as an entry to one of the primitives of its own transformation tree. The root category for a given source column of training data and/or test data may be assigned by the user or determined based on an evaluation of data properties such as one performed in block214ofFIG.2.
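One way to represent such a transformation tree is a mapping from each category to its primitives' entries. The following sketch uses the Table 1 primitives with illustrative category codes; the specific entries are assumptions for demonstration:

```python
# Illustrative transformation trees keyed by category. For the "bxcx" root
# category, the root populates itself as a parents entry, so its own
# transformation function is applied; "nmbr" and "pwrs" are offspring.
transformation_trees = {
    "bxcx": {
        "parents": ["bxcx"],       # upstream, replace, with offspring
        "siblings": [],            # upstream, supplement, with offspring
        "auntsuncles": [],         # upstream, replace, no offspring
        "cousins": ["NArw"],       # upstream, supplement, no offspring
        "children": ["nmbr"],      # downstream, replace, with offspring
        "neicesnephews": [],       # downstream, supplement, with offspring
        "coworkers": [],           # downstream, replace, no offspring
        "friends": ["pwrs"],       # downstream, supplement, no offspring
    },
    "nmbr": {"parents": [], "siblings": [], "auntsuncles": ["nmbr"],
             "cousins": [], "children": [], "neicesnephews": [],
             "coworkers": [], "friends": []},
}
```

Under this sketch, the box-cox output would be replaced by a z-score normalized offspring column via the children primitive, while a supplemental bins column would be appended alongside it via the friends primitive.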
As an example, based on the transformation primitives in Table 1, where a first column category is defined as an entry for an offspring primitive of an upstream primitive second column category entry, the transformations which are applied for the first column category may be applied to a column returned from the application of transformations which are applied for the second column category. In certain cases, result values from the transformations may replace current values in the first column category, and in other cases, the result values from these transformations may be appended to the current values in the first column category. In the example based on the transformation primitives in Table 1, results of the transformation are appended as a new column or new set of columns to the data set. In the example based on the transformation primitives in Table 1, categories of transformation that return a multi-column set may only be entries for defined primitives with no downstream offspring. In another configuration, additional primitives may be defined for the purposes of application of transformation functions to multi-column sets in aggregate such as would allow for transformations returning multi-column sets to be assigned as category entries to primitives with downstream offspring. In certain cases, one or more transformations may be defined for specific columns, for example by a user, in a manner similar to that described with respect to the transformation tree for a root category. For example, the user may pass in metadata defining a designated root category for transformations to be applied to specific columns or a user may pass in metadata defining a set of primitive category entries for custom root categories, in each case utilizing the transformation primitives and transformation tree format. 
Such user passed metadata may comprise a set of transformation categories from categories pre-defined in the library and may also include user-defined categories with user-defined transformation functions that incorporate the same methods for assembling and returning metadata as the functions in the library. In certain cases, the user may pass parameters to library defined transformation functions to specify variations on the library defined transformation functions. In certain cases, the user provided metadata may be saved into and output with the metadata database. In addition to the transformation primitives, each library defined category or user defined category may also be categorized based on a set of properties such as an identification of an associated transformation function, the categorization of types of data considered as suitable for infill, the categorization of types of ML methods suitable for targeting columns for this category, and the identification of a column or set of columns returned from the application of the category transformation tree suitable to serve as a target column or target set of columns for ML methods such as ML infill or feature importance. With respect to identifying transformation functions, a category may make use of different types of transformation functions depending on which data sets are to be targeted. For example, transformation functions may derive properties from a training set column for processing that column, transformation functions may use properties derived from a previously processed, corresponding training set column to subsequently process a test set column, transformation functions may process both a corresponding training and test set column based on properties derived from the training set column in application, or transformation functions may independently process either a train set column or test set column without the use of properties derived from a train set column.
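The train-derived property flow described above can be sketched as a pair of functions sharing a metadata database; the z-score transform and the metadata layout are illustrative assumptions:

```python
def fit_transform_train(values, metadata, column):
    # derive properties from the training set column and record them in the
    # metadata database for subsequent consistent processing
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    metadata[column] = {"mean": mean, "std": std}
    return [(v - mean) / std for v in values]

def transform_test(values, metadata, column):
    # process the test set column with properties derived from the train set
    props = metadata[column]
    return [(v - props["mean"]) / props["std"] for v in values]
```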
The training set properties for consistent processing between training and test sets with these transformation functions may be accessed from the metadata database or alternatively derived from the training set during application of a transformation function. Transformation functions may return a single column or a set of columns. A user may also define and pass custom transformation functions making use of consistent metadata assembly methods. In identifying what kind of data might be suitable for infill for a defined category, a category may be distinguished, for example, based on whether data is expected as numeric floats, data is expected as numeric within a given range, data expected as integers, data where non-numeric values are allowed, data expected as a fixed range of categoric values, data that is expected to be already Boolean encoded, data that is expected to be in time series form, or data expected as strings with some kind of consistent formatting such as consistent prefixes or suffixes. In identifying what type of ML methods are suitable for targeting columns with this category, designations may identify the type of predictive models for use with the feature importance evaluation or the ML infill. Examples of the types of classifications include numeric sets that will be targets for linear regression predictive algorithms, single column Boolean encoded sets targeted for classification predictive algorithms, or multi-column Boolean encoded sets targeted for multi-column classification predictive algorithms. Such categorization may also be used to identify how to assemble sets of training data, labels, and features used to generate predictions for the predictive methods. If there are any additional columns at block402, at block404, the columns are checked to see if specific root categories have been assigned by the user to the columns.
If specific root categories have not been assigned to certain columns, then categories based on an evaluation, such as those derived in block214, may be assigned to those specific columns as root categories. In an alternate configuration, the loop through the source columns of block402may be parallelized. At block406, columns are processed based on the transformation tree associated with the root categories of the columns. For example, for a root category of a column, the transformation tree associated with the root category may be accessed and a first upstream primitive category entry associated with the root category determined. Transformations associated with downstream primitive category entries associated with the transformation tree of the upstream primitive category entry, which apply to offspring generations, may be applied after transformations associated with the preceding upstream primitive category entry, either recursively cycling up and down branches through the generation layers of the transformation tree such as inFIG.5or sequentially through each layer of offspring. Columns which are identified for replacement based on their associated primitive, such as for the application of a Replacement primitive category entry, are marked for deletion and deleted at block408. Such deletion operation may include the maintenance of the metadata used to support infill or used for subsequent consistent processing of test data as discussed in conjunction withFIG.7. As discussed above in conjunction with block406, columns may be processed based on the transformation tree associated with the root category of a given column. FIG.5is a flow diagram illustrating application of transformations based on a transformation tree500, in accordance with aspects of the present disclosure. In certain cases, the application of transformations can be applied to a training data set or both a training set and test data set.
At block502, a transformation tree of primitives and their associated category entries may be accessed based on the root category associated with the column. Table 1 above illustrates an example set of transformation primitives and Table 2 illustrates an example of primitive category entries corresponding to this example. At block504, if the transformation primitive has upstream primitive category entries, transformations associated with those upstream primitive category entries may be accessed and applied to the column at block506. In this example, the transformations applied to the data points are returned as an additional column or set of columns appended to the data set. If the upstream primitive was a Replacement primitive, the column or set of columns from which it was derived is marked for deletion, for example in block408. The application of the categories of transformations also includes the development and maintenance of associated metadata. The upstream primitive category entry whose transformation was applied in block506is then used as a key to access the downstream primitive category entries from that category's transformation tree. At block510, if the lookup in block508identifies downstream primitive category entries, those categories are treated as a new layer of upstream primitives per the example in Table 1 and applied as a new layer to the methods starting in block504. If no downstream primitive category entries are identified in block510, the iteration reverts to the preceding application of the block504loop for the upstream primitive category entries. Once the loop of block504has cycled through all of the upstream primitive category entries of the current layer it reverts to the preceding layer of upstream primitive category entries per block514.
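The recursive cycle of blocks 504 through 514 can be sketched as follows; the tree layout, the apply function, and the deletion bookkeeping are illustrative assumptions rather than the platform's interfaces:

```python
# Primitive name -> (column action, has downstream offspring), per Table 1.
UPSTREAM = {"parents": ("replace", True), "siblings": ("supplement", True),
            "auntsuncles": ("replace", False), "cousins": ("supplement", False)}
DOWNSTREAM = {"children": ("replace", True), "neicesnephews": ("supplement", True),
              "coworkers": ("replace", False), "friends": ("supplement", False)}

def process_family(trees, category_key, column, apply_fn, deletions, primitives=UPSTREAM):
    # apply each primitive's category entries; entries of primitives with
    # offspring recurse into the entry category's own tree as a new layer
    for primitive, (action, has_offspring) in primitives.items():
        for entry in trees[category_key].get(primitive, []):
            derived = apply_fn(entry, column)  # returns the derived column label
            if action == "replace":
                deletions.add(column)          # marked for deletion per block 408
            if has_offspring:
                process_family(trees, entry, derived, apply_fn, deletions, DOWNSTREAM)
```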
Once the loop of block504has cycled through all of the upstream primitive category entries of the topmost layer of the original column root category transformation tree, the cycle is exited from block514and the process returns. In certain cases, category specific suffix appendices may be added to the column labels to report the steps of transformation for each of the column labels of the resulting transformed columns. FIG.6is a flow diagram illustrating infilling600, in accordance with aspects of the present disclosure. The infilling600may be performed for a target training data set, a target test data set, or both in parallel. The identification of rows needing infill may be based, for example, on the results of block212, or in an alternate configuration based on an evaluation comparable to block212performed preceding infilling600. At block602, derived columns of a data set, such as those discussed in conjunction withFIG.4, are looped through. As part of block602the columns may be checked against a metadata database to determine if infill has previously been performed in conjunction with another column from the same multi-column set, such as if a column was derived as part of a multi-column output transformation function. In certain cases, a user may designate infill techniques to be applied to the data sets. For example, a user may designate a particular infill technique, such as, for example, ML infill, one infill (infill with “1”), adjacent cell infill, median value infill, etc., on either all or specific columns. In other cases, the user may designate that infill should occur without specifying an infill technique and default infill techniques may be applied based on defaults for the column category, such as Boolean identifiers for categorical columns. In certain cases, the infill derivation and application may be performed in conjunction with feature engineering transformations. In certain cases, the infill application may be omitted. 
The type of infill technique to be applied is determined at block604. Where a non-ML infill technique is to be applied, the infill technique is applied at block606, and derivation of infill or insertion may be based on rows identified as needing infill. At block606, if the infill was inserted for a multi-column set, the columns of the multi-column set may be recorded in a metadata database as having received infill. If a ML infill technique is designated, at block608, the data sets for ML infill training may be prepared. As an example of preparing the infill training data set, the rows from the training data set corresponding to cells identified as needing infill may be partitioned from the training data set so as to serve as features for predicting infill once a ML model is trained, with columns from the training data set derived from the same source column currently subject to infilling removed. Removing these columns helps avoid data leakage. The rows from the training data set corresponding to cells identified as not needing infill may be partitioned from the training data set so as to serve as data for training a ML model, with columns from the training data set derived from the same source column currently subject to infilling removed. The column for which infill is to be predicted with cells not needing infill may be used as labels for the ML infill model training, with other columns from the training data set derived from the same source column as the column currently subject to infilling removed. In certain cases, such as when the target column was derived from a transformation function returning a multi-column set, the labels for the ML infill model training may be derived from a multi-column set for rows with cells not needing infill, with other columns from the training data set derived from the same source column as the column set currently subject to infilling removed. 
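The partitioning described above can be sketched as follows, assuming (as illustrative conventions) row-list data, a set of row indices needing infill, and a set of column indices derived from the same source column as the infill target:

```python
def prepare_ml_infill_sets(rows, target_col, same_source_cols, needs_infill):
    # columns derived from the same source column as the infill target are
    # removed from the features to help avoid data leakage
    feature_cols = [c for c in range(len(rows[0])) if c not in same_source_cols]
    X_train, y_train, X_predict = [], [], []
    for i, row in enumerate(rows):
        features = [row[c] for c in feature_cols]
        if i in needs_infill:
            X_predict.append(features)       # features for predicting infill
        else:
            X_train.append(features)         # rows without missing target cells
            y_train.append(row[target_col])  # target column serves as labels
    return X_train, y_train, X_predict
```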
In certain cases, the set intended as the training set for the ML infill model and corresponding target labels sets may be split into subsets for training data set and validation data set to support hyperparameter tuning and final model validation. In certain cases, data from a test data set may be similarly processed, which may include the identification of rows of a source column needing infill and the assembling of partitioned feature sets for rows needing infill to generate infill predictions from the ML infill model trained on a corresponding training data set for insertion to the target column or set of columns of the test data set. At block610, the label column category is used to determine a type of predictive model associated with the category. For example, the transformation category of the last transformation applied in the derivation of a target column or a target set of columns may be used to identify the type of predictive model. For example, for numerical data, a regression model may be applied. As another example, for single column or multi column categorically encoded data, a classifier model may be applied. In certain cases, the type of ML architecture initialized model (e.g., support vector machines, random forest regression or classifier, gradient boosting, neural networks, ensembles of architectures, layered ensembles of architectures, etc.) may be populated with one or more hyperparameters, such as may be derived based on properties of the data or by evaluation of experiments on the impact of sets of hyperparameter configurations on model accuracy. Certain ML architectures may require different parameter considerations for the type of predictive model. These parameter considerations may be based on user input indicating, for example, specific ML model parameters, and/or a designated type of ML architecture. At block612, the determined type of predictive model is initialized and trained on the infill training data set.
In certain cases, the model accuracy on the ML infill column specific training and validation data sets may be algorithmically monitored throughout the training operation to identify an appropriate stopping point for the training operation, such as to avoid overfitting the model to the training set properties. The predictive model may also be initialized based on the one or more determined parameters for the predictive model. At block614, the predictive model is applied to the set of features derived from rows of the training data set and/or test data set for which the target column or set of target columns were identified as subject to infill to obtain a set of infill value predictions corresponding to rows in the training data set and/or test data set with missing or improperly formatted items. At block616, this set of infill values may be inserted into the rows of the training data set and/or test data set. This insertion may be based on rows identified as needing infill. At block616, the predictive model is also saved as a part of the metadata database. At block616, if the infill was inserted for a multi-column set, the columns of the multi-column set may be recorded in a metadata database as having received infill. Additional consistent processing of training or test data for a ML system may be desired after the initial preparation of data for the ML system is performed. For example, additional training data may be obtained. There may be a desire to consistently process this additional data in order to maintain consistency and training efficacy with the initial training data. There may be a desire to consistently process data such as to experiment with architecture or parameters of the target ML system in isolation of any stochastic noise from the data preparation process. There may be a desire to consistently process subsequently available data to generate predictions from a model trained with the initially processed data.
There may be a desire to consistently process data that was split from the training data so as to serve as validation sets for tuning machine learning hyperparameters or for final validation of a machine learning model, as any inclusion of validation data in the training sets used to derive column specific parameters for transformation functions may lead to data leakage between training and validation sets. As an example, a metadata database may be provided that includes records of the transformations and parameters used to prepare the initial training data. These records may include both user provided information as well as information that was determined as a part of processing the initial training data, such as the column types or the predictive model used for ML infill. In certain cases, processing additional training data may be based on information in the metadata database, thus reducing an amount of user provided information needed to prepare the additional training data. FIG.7is a flow diagram illustrating a technique for consistently processing additional test or training data700, in accordance with aspects of the present disclosure. As discussed in detail above in conjunction with block228, after preparing the initial training data set, the metadata database is output along with the processed initial training data set. At block702, the metadata database output from the processing of the initial training data set, such as from block114ofFIG.1, is received. At block704, a tabular additional test data set is received. In certain cases, an additional training data set may instead be received at block704. At block706, column labels of the additional test data set are identified. In certain cases, when columns are provided without labels, block706may include the automated assignment of column labels to the columns of the additional test data sets based on a list of column labels stored in the metadata database. 
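A minimal sketch of this automated label assignment, with a basic consistency validation, might look as follows; the metadata key name is an assumption:

```python
def assign_and_validate(columns, metadata):
    # assign stored train set labels to unlabeled columns of additional data,
    # assuming column order is consistent with the train set used to
    # populate the metadata database
    train_labels = metadata["column_labels"]
    if len(columns) != len(train_labels):
        raise ValueError("column count inconsistent with the train data set")
    return dict(zip(train_labels, columns))
```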
For example, the automated assignment of column labels may be based on an assumption that the order of columns in the additional test data set is consistent with that of the train data set used to populate the metadata database. In certain cases, the additional test data set or additional training data set may be validated as against the initial training data for consistency. For example, the columns present in the additional test data set may be compared to the columns of the initial training data set for consistency in column identifiers or data set properties. At block708, ID and/or label columns may be placed into separate label sets and/or ID sets for the additional test data set in a way similar to block210ofFIG.2. In certain cases, this placement into separate sets may be omitted. At block710, the columns of the additional test data set are looped through. In an alternate configuration, the loop through the source columns of block710may be parallelized. At block712, information corresponding with the present column in the metadata database may be identified and accessed. At block714, the column of the additional test data set is checked to determine if infill is needed for cells of the column in a manner similar to block212ofFIG.2. At block716, feature engineering transformations may be applied to the column of the additional test data set, based on information retrieved from the metadata database, in a manner similar to block216ofFIG.2and similar to the processes described inFIG.4andFIG.5. For the transformation functions associated with transformation tree primitive category entries applied as part of block716, parameters of transformation may be retrieved from the metadata database, wherein retrieved parameters were derived from the corresponding columns of the training dataset used to populate the metadata database.
In certain cases, a preliminary infill may be applied as part of the feature engineering transformations of block 716, such as to prepare data for the subsequent predictive algorithms of ML infill in block 720. At block 718, if a labels column is designated for the additional test data set, labels may be processed in a manner similar to block 218 of FIG. 2 and similar to the processes described in FIG. 4 and FIG. 5. For the transformation functions associated with transformation tree primitive category entries applied as part of block 718, parameters of transformation may be retrieved from the metadata database, wherein the retrieved parameters were derived from the corresponding columns of the training data set labels used to populate the metadata database. At block 720, infill may be applied to the derived columns of the additional test data set based on information provided by the metadata database in a manner described in more detail below in conjunction with FIG. 8. In certain cases, this infill insertion may be omitted. At block 722, dimension reduction may be performed on the additional test data set in a manner similar to block 222 of FIG. 2. If feature importance dimension reduction was performed in block 222, the feature importance results derived from the initial training set in conjunction with block 222 may be accessed from the metadata database and used as a basis of the dimension reduction on the additional test data set. If PCA dimensionality reduction was performed in block 222, the PCA model trained on the training set may be accessed from the metadata database for application to transform corresponding columns of the additional test data set. In certain cases, dimension reduction may be omitted. At block 724, preparation of the additional test data set for oversampling may be applied in a manner similar to block 224 of FIG. 2, based on label sets processed in block 718. In certain cases, preparation of data for oversampling may be omitted.
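Applying a stored PCA model to test data, as in block 722, reduces to centering each row with the training-set column means and projecting onto the stored principal components. The means and component vectors below are illustrative stand-ins for a trained PCA model retrieved from the metadata database, not values derived from real data:

```python
# Sketch: apply a PCA model fit on the training set to additional test data.
# "means" and "components" stand in for the stored, training-derived model.

def pca_transform(rows, means, components):
    """Center each row with training-set means, then project it onto the
    stored principal component vectors."""
    out = []
    for row in rows:
        centered = [x - m for x, m in zip(row, means)]
        out.append([sum(c * x for c, x in zip(comp, centered))
                    for comp in components])
    return out

means = [1.0, 2.0]         # column means recorded from the training set
components = [[1.0, 0.0]]  # one stored principal component (illustrative)
reduced = pca_transform([[2.0, 5.0], [0.0, 2.0]], means, components)
```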
At block 726, the processed additional test data set may be returned for output. The outputted additional test data set may include corresponding sets of ID and/or label sets based on what may have been partitioned in block 708. In certain cases, the metadata database may not be output as a part of outputting the processed additional training data. In certain cases, a feature importance evaluation may be incorporated into the technique 700 based on an implementation of FIG. 3, such as may evaluate an additional test data set and/or an additional training data set. FIG. 8 is a flow diagram illustrating a technique for consistent infilling 800, in accordance with aspects of the present disclosure. As discussed above in conjunction with block 720 of FIG. 7, infilling may be applied based on the technique for consistent infilling 800. The identification of rows needing infill may be based, for example, on the results of block 714, or in an alternate configuration on an evaluation comparable to block 714 performed preceding consistent infilling 800. At block 802, the derived columns of a data set are looped through. In an alternate configuration, the loop through the columns of block 802 may be parallelized. As part of block 802, the columns may be checked against a metadata database to determine if infill was previously performed in conjunction with another column, for example, as a part of a multi-column output transformation function. At block 804, a type of infill to be performed may be determined. In certain cases, a user may designate infill techniques to be applied to the data sets. The type of infill technique to be applied may be determined based on information stored in the metadata database for the corresponding column or the set of columns from the training set used to populate the metadata database.
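The metadata-driven selection and application of infill might be sketched as follows. The metadata layout ("infill_type", "model" keys) and the MeanModel class are hypothetical stand-ins; a real embodiment would retrieve a predictive model trained on the training set from the metadata database:

```python
# Sketch: pick an infill technique per column from metadata and apply it,
# using the stored predictive model for ML infill. MeanModel is a toy
# stand-in for the trained infill model.

class MeanModel:
    """Toy stand-in for an ML infill model trained on the training set."""
    def __init__(self, value):
        self.value = value
    def predict(self, features):
        return [self.value for _ in features]

def infill_column(column, features, meta):
    """Fill None cells using the technique recorded in the metadata."""
    missing = [i for i, v in enumerate(column) if v is None]
    if meta["infill_type"] == "zero":
        fills = [0 for _ in missing]
    else:  # "ml" infill: apply the stored predictive model
        fills = meta["model"].predict([features[i] for i in missing])
    for i, v in zip(missing, fills):
        column[i] = v
    return column

meta = {"infill_type": "ml", "model": MeanModel(7.5)}
filled = infill_column([1.0, None, 3.0], [[0], [1], [2]], meta)
```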
In certain cases, the infill derivation and application may instead be performed in conjunction with the feature engineering transformations, such as those discussed above in conjunction with block 714. Where a non-ML infill technique is to be applied, the infill technique may be applied at block 806. This infilling may be based on rows identified as needing infill. At block 806, if the infill was inserted for a multi-column set, the columns of the multi-column set may be recorded in a metadata database as having received infill. If a ML infill technique is designated, at block 808, feature sets for ML infill predictions may be prepared from partitions of the data set. As an example of preparing the infill prediction feature sets, rows corresponding to cells identified as needing infill may be partitioned from the data set so as to serve as features for predicting infill, with columns from the data set derived from the same source column currently subject to infilling removed in a way similar to that discussed in conjunction with block 608. At block 810, the ML infill model trained from the corresponding column or set of columns from the training data set used to populate the metadata database is accessed from the metadata database. At block 812, the predictive model is applied to the set of features derived from rows for which the target column or set of target columns are to be infilled, to obtain a set of infill value predictions corresponding to rows in the training data set or test data set with missing or improperly formatted items. At block 814, this set of infill values is inserted into the corresponding rows of the data set based on rows identified as needing infill. Also at block 814, if the infill was inserted for a multi-column set, the columns of the multi-column set may be recorded in a metadata database as having received infill. FIG. 9 is a block diagram of an embodiment of a computing device 900, in accordance with aspects of the present disclosure.
As illustrated in FIG. 9, device 900 includes a processing element such as processor 905 that contains one or more hardware processors, where each hardware processor may have a single processor core or multiple processor cores. Examples of processors include, but are not limited to, a central processing unit (CPU) or a microprocessor. Although not illustrated in FIG. 9, the processing elements that make up processor 905 may also include one or more other types of hardware processing components, such as graphics processing units (GPUs), tensor processing units (TPUs), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), and/or quantum computing processors such as quantum annealing devices, noisy intermediate-scale quantum (NISQ) devices, or universal quantum computing devices. Generally, device 900 may perform any of the functionality described above (e.g., in conjunction with FIGS. 1-8). FIG. 9 illustrates that memory 910 may be operatively and communicatively coupled to processor 905. Memory 910 may be a non-transitory computer readable storage medium configured to store various types of data. For example, memory 910 may include one or more volatile devices such as random access memory (RAM). Non-volatile storage devices 920 can include one or more disk drives, optical drives, solid-state drives (SSDs), tape drives, flash memory, electrically erasable programmable read-only memory (EEPROM), and/or any other type of memory designed to maintain data for a duration of time after a power loss or shutdown operation. The non-volatile storage devices 920 may also be used to store programs that are loaded into the RAM when such programs are executed. Software programs may be developed, encoded, and compiled in a variety of computing languages for a variety of software platforms and/or operating systems and subsequently loaded and executed by processor 905.
In one embodiment, the compiling process of the software program may transform program code written in a programming language to another computer language such that the processor 905 is able to execute the programming code. For example, the compiling process of the software program may generate an executable program that provides encoded instructions (e.g., machine code instructions) for processor 905 to accomplish specific, non-generic, particular computing functions. In certain cases, the software program may be configured for parallelized operations, for example on a GPU, co-processor, ML processor, quantum computing processor, or other processor provided in addition to processor 905. After the compiling process, the encoded instructions may then be loaded as computer executable instructions or process steps to processor 905 from storage 920, from memory 910, and/or embedded within processor 905 (e.g., via a cache or on-board ROM). Processor 905 may be configured to execute the stored instructions or process steps in order to transform the computing device into a non-generic, particular, specially programmed machine or apparatus. Stored data, e.g., data stored by a storage device 920, may be accessed by processor 905 during the execution of computer executable instructions or process steps to instruct one or more components within the computing device 900. Storage 920 may be partitioned or split into multiple sections that may be accessed by different software programs. For example, storage 920 may include a section designated for specific purposes, such as storing program instructions or data for updating software of the computing device 900. In one embodiment, the software to be updated includes the ROM, or firmware, of the computing device. In certain cases, the computing device 900 may include multiple operating systems.
For example, the computing device 900 may include a general-purpose operating system which is utilized for normal operations. In certain cases, elements coupled to the processor may be included on hardware shared with the processor. For example, the communications interfaces 925, storage 920, and memory 910 may be included, along with other elements such as the digital radio, in a single chip or package, such as in a system on a chip (SOC). Computing device 900 may also include input and/or output devices, not shown, examples of which include sensors, cameras, and human input devices such as a mouse, keyboard, or touchscreen, as well as monitors, display screens, tactile or motion generators, speakers, and lights. Processed input, for example from a camera device 930, may be output from the computing device 900 via the communications interfaces 925 to one or more other devices.
11861463 | DETAILED DESCRIPTION The illustrative embodiments recognize that, while conversation-based collaboration tools provide an easy, natural way of communicating, the result is an undifferentiated flow of messages. An interaction, especially among more than two users, can include many threads, each proceeding on its own timeline and including numerous messages. When a user generates a new message in an interaction, it can be difficult to identify to which previous message (or action) the new message is addressed, or whether the new message has no previous message (i.e., is the start of a new thread). Just as in face-to-face interactions, an interaction about one topic may segue into another topic, or the two topics may become intermingled, even if a tool attempts to divide interactions by topic area. Participants may implicitly refer to subject matter discussed previously or answer questions asked several messages back in the interaction. Identifying the preceding message can help track and visualize conversational workflows, reduce user confusion, and increase team productivity. Prior-art applications allow messages to be marked or created as in reply to another message, creating user-defined threads that clarify a relationship between messages, but this is a manual process that is commonly skipped in a rapid, informal interaction. The illustrative embodiments also recognize that the messages exchanged in conversation-based collaboration tools can include information on tasks to perform, commitments to perform a task, appointments, and other project or time management items participants often track in a separate project or time management tool. Such tasks, appointments, and other project or time management items are collectively referred to herein as commitments. For example, one conversation participant might ask for a status report by Friday, and another conversation participant might agree to provide one, but only by Monday.
However, because of the number of messages in a conversation, the intermingled nature of topics in a conversation, and the rapidity and informality of the conversation, it might be difficult for the second participant to record in a to-do list that she will provide this status report by Monday. In addition, a follow-up to a commitment may come sufficiently later in an interaction and sufficiently removed from the original commitment's context that it can be difficult for a human user to associate the follow-up with the original commitment. Although prior-art applications allow users to mark messages as actions or decisions and to search, filter, or sort by those labels, this marking is also performed manually. The illustrative embodiments also recognize that requiring users to perform manual steps to indicate a specific message being replied to, define threads, or manually record commitments is cumbersome, time consuming, and undermines the benefits of the rapid, informal interaction collaboration tools provide. Thus, the illustrative embodiments recognize that there is an unmet need to automatically identify messages that are related to each other, for use in automatically extracting data within those messages to form commitments. The illustrative embodiments recognize that the presently available tools or solutions do not address these needs or provide adequate solutions for these needs. The illustrative embodiments used to describe the invention generally address and solve the above-described problems and other problems related to identifying related messages in a natural language interaction. An embodiment can be implemented as a software application. The application implementing an embodiment can be configured as a modification of an existing conversation-based collaboration system, as a separate application that operates in conjunction with an existing conversation-based collaboration system, a standalone application, or some combination thereof.
Particularly, some illustrative embodiments provide a method of determining, using a trained message predictor model, a probability of a previous message in an interaction having resulted in a current message, and extracting the previous message from the interaction. Once a previous message has been determined, an embodiment performs other tasks associated with the now-identified message thread, such as presenting the thread to a user in threaded form or assembling information within the thread into a commitment in a project or time management tool. An embodiment classifies a message, or a portion of a message into a message class. To classify the message, an embodiment uses any suitable natural language analysis classification technique. A message can be classified into more than one class. One embodiment uses a set of classification modules, each configured to identify a particular natural language feature or pattern. For example, one classification module identifies messages in which someone appears to be looking for an expert, and messages including a query to which the answer is likely to be a person's name. Another example classification module identifies an action within a message, and a commitment in another message. Another example classification module identifies meetings and meeting-related information, such as the meeting subject, time, or place. Another example classification module identifies messages that are confirmations or negations. A simpler example classification module identifies messages that include an account number, or a stock ticker symbol. In one classification module implementation, each module performs its own classification independently of the other modules. 
Thus, for example, the account number and stock ticker symbol modules could independently classify a message (for example, one including a customer's account number and an order to sell 500 shares of a particular stock and place the proceeds in the referenced account) as including both an account number and a stock ticker symbol. In another classification module implementation, a module has the ability to consult prior classifications of other messages, or other classifications of a current message, in determining a classification. Classifying messages aids an embodiment in determining conversational patterns that can be used to predict which messages will follow which other messages. For example, a message asking for an expert is often followed by others suggesting names of experts, or saying they do not know the right expert. As another example, a request for a meeting is often followed by a series of messages involving the time and place of the meeting. An embodiment models conversational patterns using a Markov inference model. Message classes are represented by nodes in the Markov model. In probability theory, a conditional probability is a measure of the probability of an event occurring given that another event has occurred. Thus, using the Markov model, if a message is in class 1, denoted by C1, there is a conditional probability P(C2|C1) (i.e., the probability of C2 given C1) that this message in C1 will be followed by a message in class 2, denoted by C2. In the model, a single class need not lead to only one other class. Instead, multiple classes may lead to one class, and one class may lead to multiple classes. An embodiment trains the model using pairs of messages. In one training set implementation, when a new message arrives at an embodiment for analysis, the embodiment asks a user to identify a parent message of the new message, where a parent message is a previous message related to the new message.
For example, if a new message is, "Let's meet at six," a parent message might be, "What time should we meet?" Conversely, a child message is a successor of a parent message. Thus, a new message can be a child of a parent message. In another training set implementation, instead of asking a user to identify a parent message when a later message is received, a message classifier can be configured to identify a parent message class as part of new message classification. In another training set implementation, instead of asking a user to identify a parent message when a later message is received, an entire interaction is formed into a thread and parent and child messages identified using any suitable technique. An embodiment classifies both the new (or child) and parent messages into one or more message classes, in a manner described herein. A class of the new message is denoted by Cm, and a class of the parent message is denoted by Cp. Then an embodiment trains the model by updating the conditional probability P(Cm|Cp) with the expression (number of previous instances of Cp preceding Cm)/(number of previous Cp instances), where an instance refers to an occurrence of Cp or Cm within interactions the model has processed. In addition, because the number of Cp instances is incremented each time a new message having Cp as the class of the parent message is processed, all conditional probabilities using Cp require corresponding updates. An embodiment uses the trained model as a message class prediction model to determine a probability of a previous message class having resulted in a current message class. In mathematical notation, a probability of a previous message class having resulted in a current message class is the conditional probability P(Cp|Cm), where a class of a message currently being analyzed is denoted by Cm, and a class of a parent message is denoted by Cp.
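The count-based training update above, P(Cm|Cp) = (instances of Cp preceding Cm)/(instances of Cp), can be sketched as follows; the class names are illustrative:

```python
# Sketch: train the Markov model by counting (parent class, child class)
# pairs, so P(Cm | Cp) = count(Cp preceding Cm) / count(Cp).

from collections import Counter

pair_counts = Counter()    # (parent_class, child_class) occurrences
parent_counts = Counter()  # parent_class occurrences

def observe(parent_class, child_class):
    """Record one observed parent/child message-class pair."""
    pair_counts[(parent_class, child_class)] += 1
    parent_counts[parent_class] += 1

def p_child_given_parent(child_class, parent_class):
    """P(Cm | Cp) from the running counts."""
    if parent_counts[parent_class] == 0:
        return 0.0
    return pair_counts[(parent_class, child_class)] / parent_counts[parent_class]

observe("meeting_request", "meeting_time")
observe("meeting_request", "meeting_time")
observe("meeting_request", "negation")
prob = p_child_given_parent("meeting_time", "meeting_request")
```

Because the denominator grows with every new Cp observation, every stored probability for Cp is implicitly updated, matching the description above.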
An embodiment reverses the conditional probability P(Cm|Cp) with which the model was trained and determines the conditional probability P(Cp|Cm) using any suitable mathematical technique. Another embodiment uses a different model as a message class prediction model. This different model can be trained, if required, using any training method suitable to the model used. Once an embodiment has determined a probability of a previous message class having resulted in a current message class, an embodiment uses that probability to extract one or more previous messages from the interaction. In one embodiment, for every message class probability above a threshold probability level, the embodiment extracts the corresponding message from the interaction and presents the messages to a user. The messages in this extracted set are each likely to be a parent of the current message. Another embodiment selects at most a predetermined number of the highest message class probabilities and, for each selected probability, extracts the corresponding message from the interaction and presents the messages to a user. Another embodiment selects at most a predetermined number of the message class probabilities corresponding to the most recent messages and, for each selected probability, extracts the corresponding message from the interaction and presents the messages to a user. When presenting the messages to a user, one embodiment sorts the messages according to the corresponding message class probabilities, and another embodiment sorts the messages according to their recency. Other selection methods of which messages to present and the order in which they are presented are also possible and contemplated within the scope of the illustrative embodiments. An embodiment allows a user to select one or more of the presented messages as actual parent messages.
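One suitable mathematical technique for the reversal is Bayes' rule, P(Cp|Cm) = P(Cm|Cp)·P(Cp)/P(Cm), followed by the threshold-based extraction described above. The priors and likelihoods below are illustrative numbers, not learned values:

```python
# Sketch: reverse the trained conditional probability with Bayes' rule and
# keep candidate parent classes whose posterior exceeds a threshold.

def reverse_conditional(p_m_given_p, p_parent, classes):
    """Return P(Cp | Cm) for each candidate parent class Cp."""
    p_m = sum(p_m_given_p[cp] * p_parent[cp] for cp in classes)
    return {cp: p_m_given_p[cp] * p_parent[cp] / p_m for cp in classes}

p_m_given_p = {"meeting_request": 0.6, "status_query": 0.1}  # trained P(Cm | Cp)
p_parent = {"meeting_request": 0.5, "status_query": 0.5}     # class priors P(Cp)
posterior = reverse_conditional(p_m_given_p, p_parent,
                                ["meeting_request", "status_query"])
candidates = [cp for cp, p in posterior.items() if p > 0.5]  # threshold extraction
```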
If there are too many candidate parent messages to present to a user for selection, or to implement an automatic selection process, an embodiment uses a trained message ranking model to reduce the size of the presented message set or select one parent message. To use the message ranking model, an embodiment encodes as a numerical representation, or vector, each candidate parent message to be ranked. Each dimension of the vector corresponds to a different feature of a message. For example, one set of features includes a known entity mentioned in a message, a proximity of the message sender, reader, and mentioned entity in a social graph, a message's recency, coincidence of terms relative to previously processed narrative text, features of the message for which candidate parents are being ranked, and a vector space comparison of sentence embeddings, each using a 0-1 scale. A sentence embedding is a vector representing a subset of an entire message. A sentence embedding can represent an entire grammatical sentence, but can also represent another subset of a message. Other sets of features, and other encoding schemes, are also possible and contemplated within the scope of the illustrative embodiments. Encoding messages as vectors allows the model to process numbers rather than the natural language text directly. During training, the message ranking model learns a pairwise mapping for the relative ranking of two candidate parent messages for a current message, where each message is encoded into a vector in a manner described herein. In other words, for a given candidate message, a model output of 0 indicates that one of the candidate parents is a better parent message and a model output of 1 indicates that the other candidate parent is a better parent message. An embodiment trains the message ranking model to learn the pairwise mapping using any suitable machine learning technique.
Some suitable techniques use linear models such as logistic regressions, or more expensive models such as a multi-layer neural network. An embodiment uses the trained message ranking model to rank a set of candidate parent messages given a current message. In particular, the embodiment encodes each message into a vector in a manner described herein, then applies pairs of candidate parent messages to the trained model for relative ranking using any suitable technique. Algorithms for ranking a set of objects using pairs of relative rankings are known. When the set of candidate parent messages has been ranked, an embodiment designates the highest ranking candidate parent message as the actual parent message. Once an embodiment has identified a parent-child relationship between messages, an embodiment assembles the related messages into a message thread according to the parent-child relationship. The message thread can also include additional messages, such as a parent message to a parent message already in the thread, or a child message to a child message already in the thread. In addition, a parent message can have multiple child messages, and a child message can have multiple parent messages within a thread. Assembling messages into a thread, without intervening messages that are irrelevant to the thread, allows a user to focus on one thread at a time. Once an embodiment has identified a parent-child relationship between messages, if information in the messages includes data relating to a commitment, an embodiment assembles the information into a commitment. The commitment can also include information from additional messages, such as a parent message to an already-identified parent message, or a child message to an already-identified child message. 
For example, if a parent message suggests a meeting and a child message specifies a time for the meeting, an embodiment assembles the information in the messages into a calendar item for the meeting at the specified time. The manner of identifying related messages in a natural language interaction described herein is unavailable in the presently available methods in the technological field of endeavor pertaining to natural language analysis. A method of an embodiment described herein, when implemented to execute on a device or data processing system, comprises substantial advancement of the functionality of that device or data processing system in determining, using a trained message predictor model, a probability of a previous message in an interaction having resulted in a current message, and extracting the previous message from the interaction. The illustrative embodiments are described with respect to certain types of messages, interactions, encodings, models, probabilities, threads, commitments, rankings, devices, data processing systems, environments, components, and applications only as examples. Any specific manifestations of these and other similar artifacts are not intended to be limiting to the invention. Any suitable manifestation of these and other similar artifacts can be selected within the scope of the illustrative embodiments. Furthermore, the illustrative embodiments may be implemented with respect to any type of data, data source, or access to a data source over a data network. Any type of data storage device may provide the data to an embodiment of the invention, either locally at a data processing system or over a data network, within the scope of the invention. Where an embodiment is described using a mobile device, any type of data storage device suitable for use with the mobile device may provide the data to such embodiment, either locally at the mobile device or over a data network, within the scope of the illustrative embodiments. 
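The pairwise message ranking described above can be sketched as follows. The comparator here is a hypothetical stand-in that prefers the candidate with the larger recency feature; a real embodiment would instead apply the trained ranking model to the two encoded message vectors:

```python
# Sketch: rank candidate parent messages with a pairwise comparator, as the
# trained message ranking model would. pairwise_model is a toy stand-in.

from functools import cmp_to_key

def pairwise_model(vec_a, vec_b):
    """Stand-in model: returns 1 if vec_b is the better parent, else 0."""
    return 1 if vec_b[0] > vec_a[0] else 0

def rank_candidates(vectors):
    """Order candidate vectors best-first using only pairwise comparisons."""
    def cmp(a, b):
        # positive result sorts a after b, i.e. b is the better parent
        return 1 if pairwise_model(a, b) == 1 else -1
    return sorted(vectors, key=cmp_to_key(cmp))

candidates = [[0.2], [0.9], [0.5]]  # one recency feature per message
ranked = rank_candidates(candidates)
best_parent = ranked[0]             # highest-ranking candidate is the parent
```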
The illustrative embodiments are described using specific code, designs, architectures, protocols, layouts, schematics, and tools only as examples and are not limiting to the illustrative embodiments. Furthermore, the illustrative embodiments are described in some instances using particular software, tools, and data processing environments only as an example for the clarity of the description. The illustrative embodiments may be used in conjunction with other comparable or similarly purposed structures, systems, applications, or architectures. For example, other comparable mobile devices, structures, systems, applications, or architectures therefor, may be used in conjunction with such embodiment of the invention within the scope of the invention. An illustrative embodiment may be implemented in hardware, software, or a combination thereof. The examples in this disclosure are used only for the clarity of the description and are not limiting to the illustrative embodiments. Additional data, operations, actions, tasks, activities, and manipulations will be conceivable from this disclosure and the same are contemplated within the scope of the illustrative embodiments. Any advantages listed herein are only examples and are not intended to be limiting to the illustrative embodiments. Additional or different advantages may be realized by specific illustrative embodiments. Furthermore, a particular illustrative embodiment may have some, all, or none of the advantages listed above. With reference to the figures and in particular with reference toFIGS.1and2, these figures are example diagrams of data processing environments in which illustrative embodiments may be implemented.FIGS.1and2are only examples and are not intended to assert or imply any limitation with regard to the environments in which different embodiments may be implemented. A particular implementation may make many modifications to the depicted environments based on the following description. 
FIG.1depicts a block diagram of a network of data processing systems in which illustrative embodiments may be implemented. Data processing environment100is a network of computers in which the illustrative embodiments may be implemented. Data processing environment100includes network102. Network102is the medium used to provide communications links between various devices and computers connected together within data processing environment100. Network102may include connections, such as wire, wireless communication links, or fiber optic cables. Clients or servers are only example roles of certain data processing systems connected to network102and are not intended to exclude other configurations or roles for these data processing systems. Server104and server106couple to network102along with storage unit108. Software applications may execute on any computer in data processing environment100. Clients110,112, and114are also coupled to network102. A data processing system, such as server104or106, or client110,112, or114may contain data and may have software applications or software tools executing thereon. Only as an example, and without implying any limitation to such architecture,FIG.1depicts certain components that are usable in an example implementation of an embodiment. For example, servers104and106, and clients110,112,114, are depicted as servers and clients only as example and not to imply a limitation to a client-server architecture. As another example, an embodiment can be distributed across several data processing systems and a data network as shown, whereas another embodiment can be implemented on a single data processing system within the scope of the illustrative embodiments. Data processing systems104,106,110,112, and114also represent example nodes in a cluster, partitions, and other configurations suitable for implementing an embodiment. Device132is an example of a device described herein. 
For example, device132can take the form of a smartphone, a tablet computer, a laptop computer, client110in a stationary or a portable form, a wearable computing device, or any other suitable device. Any software application described as executing in another data processing system inFIG.1can be configured to execute in device132in a similar manner. Any data or information stored or produced in another data processing system inFIG.1can be configured to be stored or produced in device132in a similar manner. Application105implements an embodiment described herein. Application105can execute in any of servers104and106, clients110,112, and114, and device132. Servers104and106, storage unit108, and clients110,112, and114, and device132may couple to network102using wired connections, wireless communication protocols, or other suitable data connectivity. Clients110,112, and114may be, for example, personal computers or network computers. In the depicted example, server104may provide data, such as boot files, operating system images, and applications to clients110,112, and114. Clients110,112, and114may be clients to server104in this example. Clients110,112,114, or some combination thereof, may include their own data, boot files, operating system images, and applications. Data processing environment100may include additional servers, clients, and other devices that are not shown. In the depicted example, data processing environment100may be the Internet. Network102may represent a collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) and other protocols to communicate with one another. At the heart of the Internet is a backbone of data communication links between major nodes or host computers, including thousands of commercial, governmental, educational, and other computer systems that route data and messages. 
Of course, data processing environment100also may be implemented as a number of different types of networks, such as for example, an intranet, a local area network (LAN), or a wide area network (WAN).FIG.1is intended as an example, and not as an architectural limitation for the different illustrative embodiments. Among other uses, data processing environment100may be used for implementing a client-server environment in which the illustrative embodiments may be implemented. A client-server environment enables software applications and data to be distributed across a network such that an application functions by using the interactivity between a client data processing system and a server data processing system. Data processing environment100may also employ a service oriented architecture where interoperable software components distributed across a network may be packaged together as coherent business applications. Data processing environment100may also take the form of a cloud, and employ a cloud computing model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. With reference toFIG.2, this figure depicts a block diagram of a data processing system in which illustrative embodiments may be implemented. Data processing system200is an example of a computer, such as servers104and106, or clients110,112, and114inFIG.1, or another type of device in which computer usable program code or instructions implementing the processes may be located for the illustrative embodiments. 
Data processing system200is also representative of a data processing system or a configuration therein, such as data processing system132inFIG.1in which computer usable program code or instructions implementing the processes of the illustrative embodiments may be located. Data processing system200is described as a computer only as an example, without being limited thereto. Implementations in the form of other devices, such as device132inFIG.1, may modify data processing system200, such as by adding a touch interface, and even eliminate certain depicted components from data processing system200without departing from the general description of the operations and functions of data processing system200described herein. In the depicted example, data processing system200employs a hub architecture including North Bridge and memory controller hub (NB/MCH)202and South Bridge and input/output (I/O) controller hub (SB/ICH)204. Processing unit206, main memory208, and graphics processor210are coupled to North Bridge and memory controller hub (NB/MCH)202. Processing unit206may contain one or more processors and may be implemented using one or more heterogeneous processor systems. Processing unit206may be a multi-core processor. Graphics processor210may be coupled to NB/MCH202through an accelerated graphics port (AGP) in certain implementations. In the depicted example, local area network (LAN) adapter212is coupled to South Bridge and I/O controller hub (SB/ICH)204. Audio adapter216, keyboard and mouse adapter220, modem222, read only memory (ROM)224, universal serial bus (USB) and other ports232, and PCI/PCIe devices234are coupled to South Bridge and I/O controller hub204through bus238. Hard disk drive (HDD) or solid-state drive (SSD)226and CD-ROM230are coupled to South Bridge and I/O controller hub204through bus240. PCI/PCIe devices234may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers. 
PCI uses a card bus controller, while PCIe does not. ROM224may be, for example, a flash binary input/output system (BIOS). Hard disk drive226and CD-ROM230may use, for example, an integrated drive electronics (IDE), serial advanced technology attachment (SATA) interface, or variants such as external-SATA (eSATA) and micro-SATA (mSATA). A super I/O (SIO) device236may be coupled to South Bridge and I/O controller hub (SB/ICH)204through bus238. Memories, such as main memory208, ROM224, or flash memory (not shown), are some examples of computer usable storage devices. Hard disk drive or solid state drive226, CD-ROM230, and other similarly usable devices are some examples of computer usable storage devices including a computer usable storage medium. An operating system runs on processing unit206. The operating system coordinates and provides control of various components within data processing system200inFIG.2. The operating system may be a commercially available operating system for any type of computing platform, including but not limited to server systems, personal computers, and mobile devices. An object oriented or other type of programming system may operate in conjunction with the operating system and provide calls to the operating system from programs or applications executing on data processing system200. Instructions for the operating system, the object-oriented programming system, and applications or programs, such as application105inFIG.1, are located on storage devices, such as in the form of code226A on hard disk drive226, and may be loaded into at least one of one or more memories, such as main memory208, for execution by processing unit206. The processes of the illustrative embodiments may be performed by processing unit206using computer implemented instructions, which may be located in a memory, such as, for example, main memory208, read only memory224, or in one or more peripheral devices. 
Furthermore, in one case, code226A may be downloaded over network201A from remote system201B, where similar code201C is stored on a storage device201D. In another case, code226A may be downloaded over network201A to remote system201B, where downloaded code201C is stored on a storage device201D. The hardware inFIGS.1-2may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash memory, equivalent non-volatile memory, or optical disk drives and the like, may be used in addition to or in place of the hardware depicted inFIGS.1-2. In addition, the processes of the illustrative embodiments may be applied to a multiprocessor data processing system. In some illustrative examples, data processing system200may be a personal digital assistant (PDA), which is generally configured with flash memory to provide non-volatile memory for storing operating system files and/or user-generated data. A bus system may comprise one or more buses, such as a system bus, an I/O bus, and a PCI bus. Of course, the bus system may be implemented using any type of communications fabric or architecture that provides for a transfer of data between different components or devices attached to the fabric or architecture. A communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter. A memory may be, for example, main memory208or a cache, such as the cache found in North Bridge and memory controller hub202. A processing unit may include one or more processors or CPUs. The depicted examples inFIGS.1-2and above-described examples are not meant to imply architectural limitations. For example, data processing system200also may be a tablet computer, laptop computer, or telephone device in addition to taking the form of a mobile or wearable device.
Where a computer or data processing system is described as a virtual machine, a virtual device, or a virtual component, the virtual machine, virtual device, or the virtual component operates in the manner of data processing system200using virtualized manifestation of some or all components depicted in data processing system200. For example, in a virtual machine, virtual device, or virtual component, processing unit206is manifested as a virtualized instance of all or some number of hardware processing units206available in a host data processing system, main memory208is manifested as a virtualized instance of all or some portion of main memory208that may be available in the host data processing system, and disk226is manifested as a virtualized instance of all or some portion of disk226that may be available in the host data processing system. The host data processing system in such cases is represented by data processing system200. With reference toFIG.3, this figure depicts a block diagram of an example configuration for identifying related messages in a natural language interaction in accordance with an illustrative embodiment. Application300is an example of application105inFIG.1and executes in any of servers104and106, clients110,112, and114, and device132inFIG.1. Message classifier310classifies a message, or a portion of a message into a message class. To classify the message, module310uses any suitable natural language analysis classification technique. A message can be classified into more than one class. One implementation of module310uses a set of classification modules, each configured to identify a particular natural language feature or pattern. For example, one classification module identifies messages in which someone appears to be looking for an expert, and messages including a query to which the answer is likely to be a person's name. Another example classification module identifies an action within a message, and a commitment in another message. 
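As a rough sketch of the independent classification modules described above, each module can be a self-contained test for one natural language feature, so a single message can land in several classes at once. The class names and regular expressions below are illustrative assumptions, not taken from the embodiment:

```python
import re

# Hypothetical rule-based classification modules; each module tests for a
# single natural-language feature independently of the others, so a message
# can fall into more than one class.
CLASSIFIERS = {
    "meeting_subject": lambda text: bool(re.search(r"\b(lunch|meeting|call)\b", text, re.I)),
    "meeting_time": lambda text: bool(re.search(r"\b\d{1,2}(:\d{2})?\s*(am|pm)?\b", text, re.I)),
    "confirmation": lambda text: bool(re.search(r"\b(yes|sure|sounds good|ok)\b", text, re.I)),
    "negation": lambda text: bool(re.search(r"\b(no|can't|cannot)\b", text, re.I)),
}

def classify(message: str) -> set:
    """Return every class whose module matches the message."""
    return {name for name, matches in CLASSIFIERS.items() if matches(message)}
```

In a production embodiment each lambda would be replaced by a trained natural language classifier; the point of the sketch is only that the modules run independently and their results are unioned.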
Another example classification module identifies meetings and meeting-related information, such as the meeting subject, time, or place. Another example classification module identifies messages that are confirmations or negations. A simpler example classification module identifies messages that include an account number or a stock ticker symbol. In a classification module implementation, each module performs its own classification independently of the other modules. Related message class identification module320models conversational patterns. In particular, module320trains the model using pairs of messages. In one training set implementation, when a new message arrives at an embodiment for analysis, training data collector340asks a user to identify a parent message of the new message. In another training set implementation, instead of asking a user to identify a parent message when a later message is received, an entire interaction is formed into a thread and parent and child messages are identified using any suitable technique. Module310classifies both the new (or child) and parent messages into one or more message classes, in a manner described herein. A class of the new message is denoted by Cm, and a class of the parent message is denoted by Cp. Then module320trains the model by updating the conditional probability P(Cm|Cp) with the expression (number of previous instances of Cp preceding Cm)/(number of previous Cp instances). Module320uses the trained model as a trained message class prediction model to determine a probability of a previous message class having resulted in a current message class. In particular, module320reverses the conditional probability P(Cm|Cp) with which the model was trained and determines the conditional probability P(Cp|Cm) using any suitable mathematical technique.
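A minimal sketch of this training update, assuming a simple count-based estimator: each observed (parent, child) class pair increments a transition count, and P(Cm|Cp) is recovered as the ratio described above. The class names used below are illustrative:

```python
from collections import defaultdict

# Counts accumulated from annotated (parent, child) message-class pairs.
transition_counts = defaultdict(int)  # (Cp, Cm) -> instances of Cp preceding Cm
parent_counts = defaultdict(int)      # Cp -> instances of Cp

def observe(parent_class: str, child_class: str) -> None:
    """Record one training pair: a parent-class message followed by a child-class message."""
    transition_counts[(parent_class, child_class)] += 1
    parent_counts[parent_class] += 1

def p_child_given_parent(child_class: str, parent_class: str) -> float:
    """P(Cm | Cp) = (instances of Cp preceding Cm) / (instances of Cp)."""
    if parent_counts[parent_class] == 0:
        return 0.0
    return transition_counts[(parent_class, child_class)] / parent_counts[parent_class]
```

For example, after observing a meeting-subject message twice followed by a meeting-place message and once by a meeting-time message, P(meeting_place | meeting_subject) evaluates to 2/3.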
Once module320has determined a probability of a previous message class having resulted in a current message class, related message selector330uses that probability to extract one or more previous messages from the interaction. Module330can be configured to extract, for every message class probability above a threshold probability level, the corresponding message from the interaction. Module330can also be configured to select at most a predetermined number of the highest message class probabilities and, for each selected probability, extract the corresponding message from the interaction. Module330can also be configured to select at most a predetermined number of the message class probabilities corresponding to the most recent messages and, for each selected probability, extract the corresponding message from the interaction. Once a set of candidate parent messages has been selected, module330presents the set to a user, sorted according to the corresponding message class probabilities or according to the messages' recency. If there are too many candidate parent messages to present to a user for selection, or to implement an automatic selection process, module330uses a trained message ranking model to reduce the size of the presented message set or select one parent message. To use the message ranking model, module330encodes each message as a numerical representation, or vector. Each dimension of the vector corresponds to a different feature of the message. During training, the message ranking model learns a pairwise mapping for the relative ranking of two candidate parent messages for a current message, where each message is encoded into a vector in a manner described herein. In other words, for a given candidate message, a model output of 0 indicates that one of the candidate parents is a better parent message and a model output of 1 indicates that the other candidate parent is a better parent message.
The message ranking model can be trained to learn the pairwise mapping using any suitable machine learning technique. Module330uses the trained message ranking model to rank a set of candidate parent messages given a current message. In particular, module330encodes each message into a vector in a manner described herein, then applies pairs of candidate parent messages to the trained model for relative ranking using any suitable technique. Algorithms for ranking a set of objects using pairs of relative rankings are known. When the set of candidate parent messages has been ranked, module330designates the highest-ranking candidate parent message as the actual parent message. Once module330has identified a parent-child relationship between messages, thread assembler350assembles the related messages into a message thread according to the parent-child relationship. The message thread can also include additional messages, such as a parent message to a parent message already in the thread, or a child message to a child message already in the thread. In addition, a parent message can have multiple child messages, and a child message can have multiple parent messages within a thread. Assembling messages into a thread, without intervening messages that are irrelevant to the thread, allows a user to focus on one thread at a time. Once module330has identified a parent-child relationship between messages, if information in the messages includes data relating to a commitment, commitment assembler360assembles the information into a commitment. The commitment can also include information from additional messages, such as a parent message to an already-identified parent message, or a child message to an already-identified child message. With reference toFIG.4, this figure depicts an inference model for use as part of an example configuration for identifying related messages in a natural language interaction in accordance with an illustrative embodiment.
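The pairwise ranking step can be sketched as follows. A stub comparator stands in for the trained model (here it simply prefers the candidate whose feature vector is nearer the current message's vector), and the message encoding is a toy three-feature vector; both are assumptions for illustration. Candidates are then ranked by the number of pairwise comparisons each one wins:

```python
def encode(message: str) -> list:
    # Toy encoding: [character length, word count, digit count] -- purely
    # illustrative stand-ins for real message features.
    return [float(len(message)),
            float(len(message.split())),
            float(sum(c.isdigit() for c in message))]

def pairwise_prefer(current, cand_a, cand_b) -> int:
    """Stub for the trained model: 0 if cand_a is the better parent, 1 if cand_b is."""
    def dist(v, w):
        return sum((a - b) ** 2 for a, b in zip(v, w))
    return 0 if dist(current, cand_a) <= dist(current, cand_b) else 1

def rank_candidates(current_msg: str, candidates: list) -> list:
    """Rank candidate parents for current_msg; the best candidate comes first."""
    cur = encode(current_msg)
    vecs = [encode(c) for c in candidates]
    # Score each candidate by the number of pairwise comparisons it wins.
    wins = [0] * len(candidates)
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            winner = i if pairwise_prefer(cur, vecs[i], vecs[j]) == 0 else j
            wins[winner] += 1
    order = sorted(range(len(candidates)), key=lambda k: -wins[k])
    return [candidates[k] for k in order]
```

In the embodiment the comparator would be the trained pairwise model rather than a distance heuristic; any standard algorithm for ranking from pairwise comparisons can replace the win-counting loop.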
The model is part of related message class identification module320inFIG.3. In particular,FIG.4depicts Markov inference model400, used to model conversation patterns. C1, C2, C3, C6, and C7 represent message classes. Model400has already been trained, and conditional probabilities for moving from one message class to another have been determined. Thus, if a message is in class 1, denoted by C1, there is a conditional probability P(C2|C1) (i.e., the probability of C2 given C1) that this message in C1 will be followed by another message in C2. Similarly, there is a conditional probability P(C3|C2) that a message in C2 will be followed by a message in C3. Note that C3 can also be accessed from C7, with the conditional probability P(C3|C7). Module320uses model400as a message class prediction model to determine a probability of a previous message class having resulted in a current message class. In mathematical notation, a probability of a previous message class having resulted in a current message class is the conditional probability P(Cp|Cm), where a class of a message currently being analyzed is denoted by Cm, and a class of a parent message is denoted by Cp. For example, during training model400learned a probability P(C2|C1) that a message in C1 will be followed by a message in C2. Thus, module320determines the reverse probability P(C1|C2) (the probability that, if a current message is in C2, that message's parent is in C1) using any suitable mathematical technique. With reference toFIG.5, this figure depicts an example of identifying related messages in a natural language interaction in accordance with an illustrative embodiment. The example can be executed using application300inFIG.3. Interaction500, including a plurality of participants, includes messages502,504,506,508,510,512,514, and516, sent to a group. As can be seen, most of the messages involve a discussion of lunch.
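One suitable mathematical technique for this reversal is Bayes' rule: P(Cp|Cm) = P(Cm|Cp)·P(Cp) / Σk P(Cm|Ck)·P(Ck). A sketch, assuming the trained forward probabilities and class priors are available as dictionaries (the class labels and values below are illustrative):

```python
def reverse_probability(child: str, parent: str, forward: dict, prior: dict) -> float:
    """Bayes' rule reversal: P(parent | child) from forward probabilities.

    forward[(Cp, Cm)] holds the trained P(Cm | Cp); prior[Cp] holds P(Cp).
    Classes absent from `forward` are treated as having zero forward probability.
    """
    numerator = forward.get((parent, child), 0.0) * prior[parent]
    denominator = sum(forward.get((k, child), 0.0) * prior[k] for k in prior)
    return numerator / denominator if denominator else 0.0
```

For instance, with P(C3|C2) = 0.6, P(C3|C7) = 0.2, and uniform priors over C2 and C7, the probability that a current C3 message has a C2 parent is 0.3 / (0.3 + 0.1) = 0.75.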
However, interleaved within the lunch discussion are messages508and512, which, as is typical of an interaction in a conversation-based collaboration tool, do not deal with lunch but another matter prompted by the lunch discussion. As depicted, application300has classified message502into class520, a meeting subject (“lunch”). Application300has classified message510into class530, a meeting place (“downstairs”). Application300has classified message514into class540, a meeting time (“11:30”). With reference toFIG.6, this figure depicts a continuing example of identifying related messages in a natural language interaction in accordance with an illustrative embodiment. Messages502,510, and514, and classes520,530, and540are the same as messages502,510, and514, and classes520,530, and540inFIG.5. The example can be executed using application300inFIG.3. As depicted, message514is the current message, and is in class540, a class of messages dealing with a meeting time. Application300has determined that messages in class520(meeting subject) and class530(meeting place) have above a predetermined threshold probability of having resulted in a message in class540. As a result, application300has used those probabilities to extract previous messages510and502from interaction500, forming related message set610. Both messages510and502are parent messages to message514. Once application300has identified related message set610and determined that information in message514and message set610includes data relating to a commitment, application300assembles the information into commitment620. In particular, because message514and message set610include meeting information, application300assembles the information in the messages into a calendar item and offers to store the calendar item for a user. With reference toFIG.7, this figure depicts a flowchart of an example process for identifying related messages in a natural language interaction in accordance with an illustrative embodiment.
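The assembly of commitment620from the related message set can be sketched as mapping each extracted message class to a calendar-item field; the field names below are assumptions for illustration:

```python
def assemble_commitment(classified_messages: dict) -> dict:
    """Build a calendar-item commitment from extracted class -> value pairs.

    classified_messages maps a message class (e.g., "meeting_time") to the
    value extracted from the corresponding message in the related message set.
    """
    return {
        "type": "calendar_item",
        "subject": classified_messages.get("meeting_subject", ""),
        "place": classified_messages.get("meeting_place", ""),
        "time": classified_messages.get("meeting_time", ""),
    }
```

For the lunch example, the subject comes from message502, the place from message510, and the time from message514.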
Process700can be implemented in application300inFIG.3. In block702, the application receives a current message that is part of a narrative text form interaction. In block704, the application uses a natural language analysis to classify the current message into a current message class. In block706, the application uses a trained message class prediction model to predict a probability of a previous message class having resulted in the current message class. In block708, the application uses the probability to extract a previous message in the interaction that has been classified into the previous message class. In block710, the application assembles the previous and current messages into a message thread. In block712, the application uses the information in the previous and current messages to assemble a commitment. Then the application ends. With reference toFIG.8, this figure depicts a flowchart of an example process for identifying related messages in a natural language interaction in accordance with an illustrative embodiment. Process800can be implemented in application300inFIG.3. In block802, the application receives a current message that is part of a narrative text form interaction. In block804, the application uses a natural language analysis to classify the current message into a current message class. In block806, the application receives a message identified as a parent message of the current message in the interaction. In block808, the application uses a natural language analysis to classify the parent message into a parent message class. In block810, the application trains a message class prediction model by updating the probability of the current message class given a parent in the parent message class according to (number of parent message class instances preceding the current message class)/(number of parent message class instances). Then the application ends.
With reference toFIG.9, this figure depicts a flowchart of an example process for identifying related messages in a natural language interaction in accordance with an illustrative embodiment. Process900can be implemented in application300inFIG.3. In block902, the application presents a current message that is part of a narrative text form interaction, and two possible parent messages of the current message. In block904, the application collects an annotation as to the relative ranking of the two possible parent messages. In block906, the application encodes each message as a feature vector. In block908, the application uses a machine learning technique and the annotated relative rankings to train a message ranking model to learn which of the possible parents is a better parent to the current message. Then the application ends. Thus, a computer implemented method, system or apparatus, and computer program product are provided in the illustrative embodiments for identifying related messages in a natural language interaction and other related features, functions, or operations. Where an embodiment or a portion thereof is described with respect to a type of device, the computer implemented method, system or apparatus, the computer program product, or a portion thereof, are adapted or configured for use with a suitable and comparable manifestation of that type of device. Where an embodiment is described as implemented in an application, the delivery of the application in a Software as a Service (SaaS) model is contemplated within the scope of the illustrative embodiments. In a SaaS model, the capability of the application implementing an embodiment is provided to a user by executing the application in a cloud infrastructure. The user can access the application using a variety of client devices through a thin client interface such as a web browser (e.g., web-based e-mail), or other light-weight client-applications.
The user does not manage or control the underlying cloud infrastructure including the network, servers, operating systems, or the storage of the cloud infrastructure. In some cases, the user may not even manage or control the capabilities of the SaaS application. In some other cases, the SaaS implementation of the application may permit a possible exception of limited user-specific application configuration settings. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. 
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. 
These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. | 53,437 |
11861464 | DETAILED DESCRIPTION The present disclosure involves using graph data structures that simulate the inter-feature dependencies of datasets to generate datasets for input to machine-learning models. As explained above, conventional solutions for using these machine-learning models (e.g., for predictive modeling of user behavior) are limited by failing to identify and use inter-feature dependence, which can reduce the accuracy of results obtained with these machine-learning models. Certain embodiments described herein improve the performance of automated modeling systems by, for example, generating graph data structures that use inter-feature dependencies to define input features for predictive models. For example, automated modeling systems described herein are used to generate a graph data structure, such as a directed acyclic graph, that models dependencies of features configured to be input into a machine-learning model. The graph data structure includes nodes that represent the input features and edges that link pairs of nodes indicating that an input feature of one node is dependent on the input feature of the other node. The automated modeling systems build predictive models by varying one or more input features and using the graph data structure to define the values for the remaining input features. The automated modeling systems then apply a trained machine-learning model to the input features to produce an accurate predictive output. The following non-limiting example is provided to introduce certain embodiments. In this example, an automated modeling system includes one or more computing systems that use predictive modeling for marketing simulations. The predictive model takes, as input, a set of input features that corresponds to aspects of the simulation such as webpage visits, marketing emails, number and type of social media posts, and the like. 
The predictive model is applied to the set of input features to generate a predictive output such as a probability that users will perform an action such as acquiring a particular product or service. Continuing with this example, the automated modeling system generates a graph data structure that models the set of input features. The graph data structure could be, for example, a directed acyclic graph that is generated from an analysis of the input features. The graph data structure represents the input features as nodes. A given edge can link two nodes: a source node and a destination node. The destination node represents an input feature that has a value that is dependent on the value of an input feature of the source node. Each edge is assigned a weight, which indicates the degree to which the input feature of the destination node is dependent on the input feature of the source node. For instance, decreasing the social media posts associated with a product or service may have a large impact on webpage visits, whereas search advertisements may have a marginal impact on webpage visits. Therefore, an edge connecting a source node associated with social media to the destination node associated with webpage visits will have a larger weight as compared to an edge connecting a source node associated with search advertisements to the destination node associated with webpage visits. An automated modeling system can use the graph data structure to account for interdependencies in various input features. For instance, the automated modeling system could simulate marketing scenarios using the graph data structure with a predictive model. The set of input features is passed to the predictive model to provide a predictive result. For instance, the values of the input features can correspond to the current marketing inputs such that the predictive model outputs what is already known regarding users' propensity for acquiring the particular product or service.
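As a concrete illustration of the weighted graph described above, the following Python sketch builds a small dependency graph as a plain adjacency map. The feature names ("social_media_posts", "search_ads", "webpage_visits") and the weight values are hypothetical examples, not values taken from the disclosure.

```python
# Minimal sketch of the graph data structure: a directed graph whose
# edges carry weights indicating the degree of dependence of the
# destination feature on the source feature.

class FeatureGraph:
    def __init__(self):
        # edges[source] -> list of (destination, weight) pairs
        self.edges = {}

    def add_edge(self, source, destination, weight):
        self.edges.setdefault(source, []).append((destination, weight))

    def dependents(self, source):
        """Return the features that depend on `source`, with weights."""
        return self.edges.get(source, [])

graph = FeatureGraph()
# Social media posts strongly influence webpage visits (larger weight)...
graph.add_edge("social_media_posts", "webpage_visits", 0.8)
# ...while search advertisements have only a marginal influence.
graph.add_edge("search_ads", "webpage_visits", 0.2)
```

Because the destination node lists its dependence per incoming edge, the same destination ("webpage_visits") can carry a different weight for each source, mirroring the social-media versus search-advertisement comparison above.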
In a simulation, a first marketing scenario determines whether changes in the prediction will occur if the number of emails transmitted is increased by 100% and the number of search advertisements is decreased by 50%. Simply modifying the two input features independently will cause the predictive model to output an inaccurate result. To avoid this inaccurate result, the automated modeling system can use the graph data structure to propagate a change to a first input feature to those input features that depend on the first input feature. The graph data structure defines updated values for the dependent input features based on the values of the modified input features and the edge weights linking the input features. For instance, increasing the number of emails transmitted will cause a proportional increase in the number of webpage visits. By representing this dependency via the graph data structure, the automated modeling system can, for example, respond to a manual modification of the "number of emails transmitted" input feature by automatically modifying the "number of webpage visits" input feature. The automated modeling system can apply the predictive model to these modified values of the "number of emails transmitted" input feature and the "number of webpage visits" input feature, thereby obtaining a more accurate predictive output than would be computed if these two input features were assumed to be independent of one another. Thus, after the automated modeling system generates an updated set of input features using feature dependencies modeled by the graph data structure, the automated modeling system applies the predictive model to the updated set of input features. The predictive model thereby generates a more accurate predictive output using the updated input features.
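One way to realize the propagation just described is sketched below. The linear propagation rule, the feature names, and the edge weights are illustrative assumptions rather than the disclosed implementation; the point is that a change to one feature cascades to its dependents through the (acyclic) edge structure.

```python
# weights[source][destination] = edge weight (degree of dependence);
# hypothetical values for illustration only.
weights = {
    "emails_sent": {"webpage_visits": 0.5},
    "search_ads": {"webpage_visits": 0.1},
}

def propagate(features, weights, name, delta):
    """Apply a delta to one feature and push proportional deltas
    to its dependents, recursing through the acyclic graph."""
    features = dict(features)          # leave the caller's dict intact
    features[name] += delta
    for dest, w in weights.get(name, {}).items():
        features = propagate(features, weights, dest, w * delta)
    return features

features = {"emails_sent": 1000.0, "search_ads": 200.0,
            "webpage_visits": 5000.0}
# Scenario: increase emails transmitted by 100% (i.e., +1000).
updated = propagate(features, weights, "emails_sent", 1000.0)
# webpage_visits rises proportionally: 5000 + 0.5 * 1000 = 5500
```

Because the graph is acyclic, the recursion terminates; a real system would also need to handle a node with multiple parents receiving deltas from each.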
In this example, the predictive model generates an indication as to whether the modifications to the one or more input features will increase or decrease the probability that users will acquire the particular product or service. In some embodiments, the edge weights that represent dependencies between different input features are computed using suitable probability techniques. In one example, a computing system that builds the graph data structure is used to define a probability distribution. The probability distribution indicates probabilities of an input feature of a destination node having potential destination values given the input feature of the source node having source values. For example, a second node (e.g., destination node) corresponding to webpage visits depends on a source value of a first node (e.g., source node) corresponding to social media posts as some users are likely to visit the webpage after seeing social media posts. Increasing social media posts will increase webpage visits. The probability distribution associated with webpage visits indicates a probability for each possible number of webpage visits (e.g., destination value) given a particular number of social media posts (e.g., source value). The probability can be determined from the input dataset or from historical datasets. For example, the input dataset (and/or a historical dataset) provides various values of the input features including the webpage visits and social media posts that are observed (e.g., either contemporaneously or historically). For instance, as observed from the data, the webpage has received 1000 visits in a month when the current number of social media posts was also 1000 within the same time interval. In addition, as observed from the data, the webpage has received 2000 visits when the current number of social media posts was 1500. The processing device extrapolates these values to determine a probability that the webpage visits will be 1001, 1002, . . . etc. 
given a particular value of the social media posts. For instance, if the number of social media posts is 1000, the webpage visits between 750 and 1250 (e.g., the values close to 1000) will have high probabilities in the probability distribution. Webpage visits 500-749 and 1251-1500 (e.g., further away from 1000) will have lower probability values and webpage visits below 500 or above 1500 will have extremely low probability values. In other words, the values that are closer to an observed data point (e.g., 1000) for the given source value (e.g., also 1000) will have a higher likelihood of occurring than those values further away. The computing system determines that a subset of potential destination values have a probability that exceeds a probability threshold. The probability threshold is set to separate those destination values that are likely to occur from those that are not. For instance, the probability threshold is 0.8 to ensure that only those webpage-visit values that have at least an 80% probability of occurring will be added to the subset of potential destination values while values that are less likely to occur will be omitted. The computing system selects the subset of potential destination values based on these values exceeding the probability threshold. The computing system uses the selected subset to update a weight of the edge between the source node and the destination node. For example, the computing system computes an updated weight from a correlation between the subset of the potential destination values and a subset of the source values. As used herein, the term "input feature" is used to refer to any data point that is configured to be input into a machine-learning model. Input features can correspond to aspects of a marketing scenario.
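The threshold-then-correlate weight update described above can be sketched as follows. The observation triples, the probability estimates, and the 0.8 threshold are hypothetical data, and Pearson correlation is assumed as the correlation measure (the disclosure does not name a specific one).

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (source value, destination value, estimated probability) triples,
# e.g., (social media posts, webpage visits, probability of that pair).
observations = [
    (1000, 1000, 0.90),
    (1500, 2000, 0.85),
    (2000, 2900, 0.82),
    (1000, 5000, 0.01),   # implausible pair: filtered out below
]

THRESHOLD = 0.8
kept = [(s, d) for s, d, p in observations if p > THRESHOLD]
sources = [s for s, _ in kept]
destinations = [d for _, d in kept]
weight = pearson(sources, destinations)  # updated edge weight
```

Filtering first keeps the correlation (and hence the edge weight) from being distorted by destination values that are unlikely to occur given the observed source values.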
Examples of these input features include, but are not limited to, webpage visits, emails transmitted (e.g., marketing emails or emails that are associated with a product or service), social media posts (e.g., frequency of posts or posts corresponding to a particular subject), direct advertisements, search advertisements, web based advertisements, other webpages (e.g., those associated with a particular product or service), searches associated with a particular product or service, etc. In some embodiments, a value of an input feature is determined, in part, by a graph data structure that models inter-feature dependencies of the input features. As used herein, the term "simulation" is used to refer to an application of a predictive model to a set of input features to produce a predictive outcome. A system, such as a marketing system associated with a product or service, is represented by the set of input features. The predictive model simulates the system by generating a prediction of an outcome that will result given the set of input features. As used herein, the term "scenario" is used to refer to any modification to one or more input features of a simulation to predict an outcome that will result from the modification. A scenario is an interventional process. Scenarios can be used to determine how modifications to the input features will affect the predicted outcome. Scenarios can also be used for sensitivity analysis, which determines the degree to which input features affect the predicted outcome. Sensitivity analysis can include ranking the input features based on the degree to which the input features affect the predicted outcome. Certain embodiments described herein facilitate the improved performance of machine-learning models. For instance, these embodiments can be used to define input features for machine-learning models that predict the behaviors of consumers or other end users.
Examples of predicted behaviors include a conversion of a prospective consumer, a defection of an existing consumer, positive or negative feedback about electronic content available via an online service (e.g., content describing a brand on a social media website), number of purchases in a particular time interval (e.g., weekly, monthly, etc.), etc. In some embodiments, relationship management tools are used to assess the value of certain consumers based on these predicted behaviors. The predicted behaviors, the assigned values, or both allow a user of the relationship management tool to take appropriate action in response to a certain prediction (e.g., changing a salesperson's response to a consumer's inquiry if the conversation indicates an expression of concern rather than an expression of interest). Scenarios are defined by varying one or more input features of a set of input features. The scenarios predict how the behaviors of consumers or other end users will change as a result of modifying the one or more input features. Example of an Operating Environment for Generating a Graph Data Structure Referring now to the drawings,FIG.1depicts an example of a network environment for generating graph data structures for using inter-feature dependencies in machine-learning models, according to certain embodiments of the present disclosure. In the example depicted inFIG.1, processing device104receives marketing data from remote sources via network108and generates simulations of various versions of the marketing data to identify a particular, desirable outcome. For instance, a baseline simulation of the marketing system includes applying a predictive model to the marketing data to predict the outcome associated with a product or service, such as a number of purchases or a purchase by a particular end user, etc.
Processing device104simulates alternative scenarios to the baseline simulation in which one or more data points of the marketing data are modified to predict how varying the one or more data points can increase or decrease the likelihood of a particular outcome. Processing device104generates additional scenarios that include different values for one or more other data points to maximize or minimize the likelihood of the particular outcome. The modified data points are then implemented within a marketing system to increase (or decrease) the predicted outcome. Processing device104includes memory112that stores program instructions116for generating scenarios of a system. Program instructions116include discrete functions and/or applications that are executed to define scenarios and execute simulations. Program instructions116include instructions that receive marketing data and parse the marketing data to define input features for a predictive model. For instance, marketing data can include structured and unstructured data that represent various data points of the system. Program instructions116parse the marketing data to determine input features and their values for the predictive model. Program instructions116include instructions that generate graph data structures120using the input features. Graph data structures model inter-feature dependencies of the input features to represent the relationship between the values of various input features. The graph data structure represents input features as nodes that are linked by edges. An edge links a source node to a destination node and indicates that the value of the input feature represented by the destination node is dependent on the value of the input feature of the source node. Edges include a weight that indicates the degree of dependence between the source node and the destination node. The weight is defined using current and historical marketing data in which correlations between input features can be observed.
In some instances, the graph data structure is generated using structured learning with continuous optimization. Input features generator124defines an updated set of input features for a particular scenario. Input features generator124receives input for the particular scenario that includes a modification to one or more input features of the baseline simulation. Input features generator124generates an updated set of input features by using the graph data structure120to propagate the modifications to the one or more input features to those input features dependent on the modified input features. Machine-learning model128is a predictive model that predicts an outcome given a set of input features. Machine-learning model128simulates the scenario using the updated set of input features and predicts the outcome. The outcome indicates an effect of modifying the one or more input features on the baseline simulation. For instance, the simulation of the scenario determines whether modifying the one or more input features results in a predicted increase or decrease in the likelihood of a particular outcome. Input features generator124can generate sets of input features for multiple related scenarios to identify a set of input features that maximizes or minimizes the outcome. The baseline simulation and the simulation of the scenario are stored in simulations140for later retrieval and further processing. Some embodiments of the network environment100include user devices132. Examples of a user device include, but are not limited to, a personal computer, a tablet computer, a desktop computer, a processing unit, any combination of these devices, or any other suitable device having one or more processors. One or more data points of the marketing data are received from instrumentation or analytics that execute on user devices132.
For instance, user interaction with a display advertisement can be captured by the user device and transmitted to processing device104or stored in input datasets144(e.g., a database or other storage medium). Servers136direct the operation of processing device104and other processing devices (not shown). For instance, servers136manage input datasets144received from various sources including user devices132. Servers136transmit requests to processing device104for particular simulations and/or scenarios. The requests include an identification of a set of marketing data and a definition of the simulation and/or scenario. The processing device104obtains the set of marketing data (e.g., from local storage or from input datasets144) and executes the simulation and/or scenario. Processing device104transmits the results of the simulation and/or scenario to servers136via network108. Servers136may direct one or more other processing devices (not shown) to process marketing data in parallel with processing device104or servers136may direct the one or more other processing devices to operate with processing device104to process marketing data in a distributed process. Simulations140is a database that stores historical simulations performed by processing device104. Each simulation stored in simulations140includes an identification of the marketing data used to define and run the simulation, enabling the simulation to be rerun by processing device104. The simulations stored in simulations140can be used as baselines for scenarios executed by processing device104. For instance, a new scenario predicts a particular likelihood that a user will acquire a good or service. A baseline simulation that corresponds to the same or similar marketing data is obtained from simulations140and used as a point of comparison. Processing device104or servers136compares the results of the new scenario with the baseline simulation to determine the degree to which the scenario altered the baseline simulation.
User devices132, servers136, simulations140, and input datasets144are communicatively coupled to processing device104via network108. Examples of network108include, but are not limited to, the Internet, local area networks ("LAN"), wireless area networks, personal area networks, wide area networks, and the like. As described in detail with respect to the various examples below, graph data structure120is used to improve the output of machine-learning model128according to various embodiments. The machine-learning model128is used to predict outcomes such as product purchases or consumer behavior. For illustrative purposes, the machine-learning model128described herein is described using simplified examples involving consumers, sales personnel, and sales journeys. But the operations described herein can be applied to any automated modeling system that defines alternative scenarios for machine-learning model128. FIG.2depicts an example of a graph data structure that models inter-feature dependencies for machine-learning based predictions, according to certain embodiments of the present disclosure. The input features of a simulation are modeled by a graph data structure that represents the inter-feature dependencies of the input features. The graph data structure represents each input feature as a node. Edges connect pairs of nodes together. Edges are directed, indicating that one node of the pair (e.g., the destination node) is dependent on the other node (e.g., the source node). Edges are assigned a weight that indicates the degree to which the destination node is dependent on the source node. Graph data structure204includes three input features: X1208, X2212, and X3216. Edge210connects X1208to input feature X2212indicating that the value of X2212is dependent on the value of X1208. Edge214connects X2212to input feature X3216indicating that the value of X3216is dependent on the value of X2212.
The processing device assigns a weight wi,j(e.g., where i identifies the source node and j identifies the destination node) that indicates the degree of dependence between the source node and the destination node. Simulator220executes the simulation by applying machine-learning model128to the input features to predict an outcome y=M(X) (e.g., simulation result220). In some instances, simulator220is software executing within a processing device such as processing device104ofFIG.1. In other instances, the simulator is a hardware platform (e.g., a processing device, system-on-chip, FPGA, etc.). Simulating a scenario includes varying one or more of X1208, X2212, and X3216and predicting an updated outcome y′=M(X′). Since the input features are not independent, modifying one input feature will cause machine-learning model128to predict an inaccurate or incorrect outcome. For instance, the predictive model, y=M(X), predicts the number of products purchased weekly, where X1208represents a number of promotional emails transmitted to users, X2212represents display advertisements, and X3216represents a number of webpage visits. Increasing the number of promotional emails will likely increase the number of webpage visits in addition to potentially increasing the number of products purchased. The increase in X1208will render the value of X3216invalid if X3216is not also modified. Simulator220uses graph data structure204to exploit the inter-feature dependencies and generate values for the input features that accurately reflect a given scenario. The modifications are propagated to dependent input features to update the value assigned to those input features. The modification is based on the weight of the edge connecting the input features. For instance, the input features can be represented by f(X1), f(X2|X1), and f(X3|X2). If X1208is modified to be X1=X1+ΔX1, the modification is propagated to X2212as X2=X2+w12ΔX1and to X3216as X3=X3+w23ΔX2, where ΔX2=w12ΔX1.
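The chain-propagation formulas above can be checked numerically with a short sketch. The weight values w12 and w23 and the delta applied to X1 are arbitrary illustrative numbers, not values from the disclosure.

```python
# Numeric check of the propagation rule X2 = X2 + w12*dX1 and
# X3 = X3 + w23*dX2, where dX2 = w12*dX1.

w12, w23 = 0.4, 0.25          # edge weights for X1 -> X2 and X2 -> X3
x1, x2, x3 = 100.0, 50.0, 20.0

dx1 = 10.0                    # modification applied to X1
dx2 = w12 * dx1               # propagated to X2: 4.0
dx3 = w23 * dx2               # propagated to X3: 1.0

x1, x2, x3 = x1 + dx1, x2 + dx2, x3 + dx3
# x1 = 110.0, x2 = 54.0, x3 = 21.0
```

As the text notes, modifying X2 instead would leave X1 unchanged, since propagation only follows edge directions.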
If X2212is modified instead, the modification will only propagate to X3216since X1208is not dependent on X2212. For illustrative purposes,FIG.2depicts a graph data structure204that includes three input features. Graph data structure204can include any number of input features provided that the graph data structure is acyclic. In addition, a source node may be linked to zero or more destination nodes and a destination node may be linked to zero or more source nodes. In some embodiments, simulator220provides, or is included in, a simulation software tool for simulation of certain events (e.g., probability of a user taking a certain action). Such a software tool can include the simulator220and a user interface engine. The user interface engine can include code that, when executed by one or more processing devices, configures one or more input/output devices (e.g., a touchscreen, a monitor and keyboard, etc.) to present a simulation interface. The simulation interface can include graphical interface elements for inputting and/or displaying values of input features of a machine-learning model. Examples of these graphical interface elements include a set of fields for inputting and/or displaying values of input features. Each of these graphical interface elements can include one or more simulation event listeners. The simulation event listener detects events, such as data entry in certain fields or menu selections, that are used to set one or more values of input features that are used in a particular simulation performed by the simulator220with a machine-learning model232. The user interface engine can also present, in a graphical interface, one or more interface elements (e.g., menus, pop-up dialogs, etc.) that allow an input device to manually specify, select, or otherwise input values of input features. The user interface engine detects one or more modifications to input features (i.e., a simulation parameter) using a simulation event listener.
The simulation event listener can identify which input feature is modified by the input and provide this information to the simulator220. The simulator220can use the identified input feature to reference a corresponding node of the graph data structure204and thereby determine any feature dependencies with respect to the identified input feature. The simulator220can thereby compute any corresponding changes to one or more other input features for use by a simulation. In some embodiments, the simulator220can also instruct the user interface engine to update the user interface to display these corresponding changes to other input features (e.g., by updating the relevant graphical interface elements for inputting and/or displaying values of input features). These embodiments can provide improvements to software tools for performing simulations. For example, as described above, conventional automated modeling techniques may be unable to effectively manage simulation scenarios that involve interdependent input features. Because users of conventional tools are required to manually track dependencies and modify various input feature values in order to perform a simulation, the effectiveness of conventional simulation tools is undercut by these burdensome, time-consuming manual modifications. By contrast, the embodiments described above solve this problem with simulation software tools by providing an intuitive, user-friendly interface in which a user is only required to modify an input feature of interest, with the graph data structure being used to identify and apply feature dependencies in a manner that is transparent to the user. Such an improvement can allow automated modeling systems to rapidly and accurately simulate certain scenarios while reducing users' manual efforts (and associated errors in the application of a machine-learning model).
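A minimal, framework-free sketch of the event-listener flow described above follows. The SimulationField class, the feature names, and the 0.5 edge weight are hypothetical; no particular UI toolkit is assumed.

```python
# Sketch of a simulation event listener: changing one input field
# triggers callbacks that propagate the change to dependent fields.

class SimulationField:
    def __init__(self, name, value):
        self.name, self.value = name, value
        self._listeners = []

    def on_change(self, callback):
        """Register a callback fired with (field name, delta)."""
        self._listeners.append(callback)

    def set_value(self, value):
        delta = value - self.value
        self.value = value
        for callback in self._listeners:
            callback(self.name, delta)

emails = SimulationField("emails_sent", 1000.0)
visits = SimulationField("webpage_visits", 5000.0)

# Listener encoding a hypothetical edge weight of 0.5 from
# "emails_sent" to "webpage_visits".
emails.on_change(lambda name, delta:
                 visits.set_value(visits.value + 0.5 * delta))

emails.set_value(2000.0)   # user edits only the field of interest
# visits.value updates automatically to 5500.0
```

The user edits a single field; the registered listener plays the role of the graph lookup, updating the dependent field transparently.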
The graph data structure204can be generated using one or more operations described herein.FIG.3depicts an example of a process for generating the graph data structure ofFIG.2, according to certain embodiments of the present disclosure. In some embodiments, one or more processing devices implement operations depicted inFIG.3by executing suitable program code. For illustrative purposes, the process300is described with reference to certain examples depicted in the figures. Other implementations, however, are possible. At block304, the process300involves accessing, by a processing device, an input dataset that includes input features for a trained machine-learning model. The input dataset can be accessed from local sources (e.g., local memory or locally connected storage devices) or remote sources (e.g., databases, servers, user devices, etc.). The processing device can run simulations using input features of input datasets. The simulation includes applying a machine-learning model to the input features to predict an outcome. For instance, the input dataset corresponds to marketing data from which the processing device runs simulations that predict average purchases over a time interval or user behavior. At block308, the processing device receives a request to modify a first input feature. The processing device simulates alternative scenarios by modifying one or more input features and predicting a new outcome. The new outcome is compared to the previous outcome to determine a change in outcomes (e.g., a delta) that results from modifying the one or more input features. Alternative scenarios are defined by the processing device automatically or by user input received from an input/output device or from a remote device over a network. At block312, the processing device modifies a second input feature of the input dataset based on the modification to the first input feature. 
Since the values of input features can be dependent on the values of other input features, a modification to one input feature of a dataset will cause some input features to have invalid values. If the predictive model is applied to these input features, the predicted outcome will be less accurate or incorrect. The processing device uses a directed graph to model the inter-feature dependencies of the input dataset. The directed graph indicates which input features are dependent on other input features as well as the degree to which each input feature is dependent on another input feature. In some embodiments, the directed graph is received with the input dataset or from another source. In other embodiments, the processing device generates the directed graph using, for example, structured learning, a linear structural causal model, or the like. For instance, the processing device can use the input dataset and historical dataset to observe correlations between the values of pairs of input features. The processing device initializes an empty directed graph, then adds a node for each input feature. The processing device then iteratively adds, removes, or reverses an edge between two input features. With each iteration, the processing device computes a score of the resulting directed graph. For instance, the score of the directed graph increases when an edge connecting a correlated source node and destination node is added, while the score of the directed graph decreases when an edge connecting two uncorrelated nodes is added. If the score increases, the addition, removal, or reversal of the edge during that iteration is maintained. If the score decreases, the addition, removal, or reversal of the edge during that iteration is omitted. The processing device continues to iteratively add, remove, or reverse edges until a particular score threshold is reached or until the score can no longer be increased.
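The iterative score-based search described above might be sketched as follows. For brevity this version only adds edges (omitting the removal and reversal moves), and the per-edge score gains are hypothetical stand-ins for a real data-driven score; the acyclicity check mirrors the requirement that the result be a directed acyclic graph.

```python
from itertools import permutations

features = ["emails", "social", "visits"]
# Hypothetical score contribution of each candidate edge (e.g., derived
# from observed correlations); a negative gain models uncorrelated pairs.
gain = {
    ("emails", "visits"): 0.6,
    ("social", "visits"): 0.8,
    ("emails", "social"): -0.2,
}

def is_acyclic(edges, candidate):
    """True if adding `candidate` keeps the edge set cycle-free."""
    trial = edges | {candidate}
    adj = {}
    for s, d in trial:
        adj.setdefault(s, []).append(d)
    def reaches(start, target, seen):
        for nxt in adj.get(start, []):
            if nxt == target or (nxt not in seen and
                                 reaches(nxt, target, seen | {nxt})):
                return True
        return False
    # A cycle would exist iff the destination can reach the source.
    return not reaches(candidate[1], candidate[0], set())

edges = set()
improved = True
while improved:                     # stop when the score cannot increase
    improved = False
    for cand in permutations(features, 2):
        if (cand not in edges and gain.get(cand, -1.0) > 0
                and is_acyclic(edges, cand)):
            edges.add(cand)         # keep the move: score increased
            improved = True
```

Only the two positively scored edges survive, so the search reproduces the intuition that correlated pairs gain edges while uncorrelated pairs do not.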
Alternatively, a processing device uses a loss function that accounts for the least-squares loss between the estimated data (e.g., the directed graph) and the actual data of the input dataset. A smooth directed acyclic graph constraint can be applied during building of the directed graph to smooth and continuously optimize the directed graph. The directed graph can then be generated using gradient descent-based approaches. The processing device assigns weights to each edge to indicate a degree of dependence between the source node and the destination node. The weights can be determined by the correlations observed from the input dataset and/or historical dataset. The processing device uses the directed graph to determine updated input values for other input features resulting from the modification to the first input feature. For instance, the first input feature corresponds to search advertising and the second input feature corresponds to a number of webpage visitors. If search advertising is increased, there will be an increase in the number of webpage visitors. The directed graph models these inter-feature dependencies and enables the processing device to identify input features dependent on the first input feature and propagate the modification of the first input feature to the dependent input features. The modification to the dependent input features (e.g., the second input feature) is a function of at least (a) the modification of the first input feature and (b) a weight assigned to an edge linking the first input feature to the second input feature within the directed graph. The processing device executes control code to implement blocks304-308. For example, the control code of the processing device is stored in a non-transitory computer-readable medium and is executed by one or more processing devices.
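The smooth directed acyclic graph constraint mentioned above is assumed here to be the trace-of-matrix-exponential formulation used in continuous structure learning, h(W) = tr(e^(W∘W)) − d, which equals zero exactly when the weighted adjacency matrix W describes an acyclic graph; the sketch below approximates it with a truncated power series so it can be minimized alongside the least-squares loss by gradient descent.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def acyclicity(W, terms=20):
    """Approximate tr(exp(W∘W)) - d via a truncated power series;
    the result is ~0 iff W encodes an acyclic graph."""
    n = len(W)
    A = [[W[i][j] ** 2 for j in range(n)] for i in range(n)]
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    trace = float(n)                      # k = 0 term: tr(I) = n
    for k in range(1, terms):
        M = matmul(M, A)                  # running product A^k / k!
        M = [[m / k for m in row] for row in M]
        trace += sum(M[i][i] for i in range(n))
    return trace - n

# Acyclic chain X1 -> X2 -> X3: the constraint is (approximately) zero.
W_dag = [[0.0, 0.5, 0.0],
         [0.0, 0.0, 0.3],
         [0.0, 0.0, 0.0]]
# A 2-cycle between X1 and X2: the constraint is strictly positive.
W_cyc = [[0.0, 0.5, 0.0],
         [0.5, 0.0, 0.0],
         [0.0, 0.0, 0.0]]
```

Because h(W) is differentiable in the entries of W, it can serve as a penalty term in a gradient descent-based search for the graph, which is the role the smooth constraint plays in the passage above.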
Executing the control code causes the processing device to access the input datasets from the same non-transitory computer-readable medium or a different non-transitory computer-readable medium. In some embodiments, accessing the input datasets includes communicating, via a data bus, suitable signals between a local non-transitory computer-readable medium and the processing device. In additional or alternative embodiments, accessing the input datasets involves communicating, via a data network, suitable signals between a computing system that includes the non-transitory computer-readable medium and a computing system that includes the processing device. At block316, the process300involves applying a trained machine-learning model to the modified input dataset. The trained machine-learning model predicts an outcome based on the modified input dataset. In some embodiments, the processing device generates a sequence of modified input datasets with each modified input dataset including different values for the input features. The processing device applies the trained machine-learning model to each modified input dataset in the sequence to predict a corresponding sequence of outputs. The processing device compares the sequence of outputs and identifies values for the input features that correspond to a desirable outcome (e.g., that predicts an increase in the number of weekly purchases). The processing device outputs the outcome (or sequence of outcomes) and an indication as to whether the modification to the first input feature increased or decreased the probability of the outcome. Examples of the output include transmitting the output to a remote device, storing the output in local and/or remote memory, displaying the output within a graphical user interface (e.g., alone or in a side-by-side presentation with another output), etc.
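The sequence-of-modified-datasets procedure can be sketched as below. The stand-in linear model, the feature names, the scaling factors, and the 0.5 propagation weight are all illustrative assumptions standing in for a trained machine-learning model and a learned graph.

```python
def model(features):
    """Stand-in for a trained predictive model: a fixed linear rule."""
    return 0.002 * features["emails"] + 0.001 * features["visits"]

base = {"emails": 1000.0, "visits": 5000.0}

candidates = []
for scale in (0.5, 1.0, 1.5, 2.0):       # sweep the "emails" feature
    modified = dict(base, emails=base["emails"] * scale)
    # Propagate the dependency: more emails -> proportionally more
    # visits (weight 0.5, an illustrative assumption).
    modified["visits"] = (base["visits"]
                          + 0.5 * (modified["emails"] - base["emails"]))
    candidates.append((model(modified), modified))

# Identify the input-feature values with the most desirable outcome.
best_outcome, best_features = max(candidates, key=lambda c: c[0])
```

Each candidate dataset gets its dependent features updated before the model is applied, so the comparison across the sequence of outputs is made over internally consistent feature sets.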
In an illustrative example involving an average number of weekly sales of a particular product, the observable input features include frequency/volume of advertisements (e.g., search, printed, web-based, direct, etc.), number of webpage visits, social media associated with the particular product, promotional emails, number of visits to similar types of products, etc. The processing device applies the machine-learning model to the current observable input features to predict a baseline outcome. In this example, the baseline outcome is the average number of weekly sales of the particular product. In some embodiments, the baseline outcome corresponds to the real-world outcome since the input features correspond to observable input features. The processing device receives a request to modify the value of a first input feature corresponding to the number of promotional emails to predict an outcome (e.g., an increase or decrease to the number of weekly sales of the particular product). The other input features may be dependent on the first input feature such that the increase in promotional emails would result in an observable increase of the number of webpage visits by a first amount, increase social media posts associated with the particular product by a second amount, and decrease webpage visits to related webpages by a third amount. The directed graph captures these inter-feature dependencies. The processing device propagates the modification to the first input feature to the other input features, generating a modified set of input features. The processing device applies the machine-learning model to the modified set of input features to predict an updated outcome that would result from the modification to the one or more input features. For instance, the processing device, using the machine-learning model, predicts that increasing promotional emails increases the average weekly sales by 2%.
For illustrative purposes,FIG.3and the accompanying description above refer to first and second input features. But implementations with any number of input features are possible. The directed graphs ofFIG.4andFIG.7described below illustrate directed graphs representing larger quantities of input features. FIG.4depicts an example of a graph data structure that models the inter-feature dependencies of a set of input features, according to certain embodiments of the present disclosure. Graph data structure400is a directed acyclic graph that includes seven nodes, each of which represents an input feature. The nodes of graph data structure400include various dependencies on other nodes. Graph data structure400links pairs of nodes using directed edges. Directed edges indicate that a value of a second node (e.g., a destination node) is dependent on a value of a first node (e.g., a source node) with the degree of dependency represented by an edge weight (not shown). Directed edges are depicted inFIG.4as arrows, with each arrow pointing from the source node to the destination node. Graph data structure400models the inter-feature dependencies of the seven input features. For instance, node404is linked to node424and node408in which node404is a source node and node424and node408are destination nodes. Node408is linked to node424and node416in which node408is a source node and node424and node416are destination nodes. Since the graph is acyclic (e.g., no feedback loops), the propagation of values to dependent input features terminates in a finite number of iterations. For instance, the value of node404does not depend on other nodes and can be represented by a probability distribution p(N0). The value of node408depends on the value of node404, node412, and node428and can be represented by the probability distribution p(N1|N0, N2, N6).
The processing device uses the graph and the input dataset to define a probability distribution for each node such as: node404as p(N0), node408as p(N1|N0, N2, N6), node412as p(N2), node416as p(N3|N1, N2, N4, N6), node420as p(N4|N2), node424as p(N5|N0, N1, N2), and node428as p(N6|N2, N4). The combined probability distribution p(N) is equal to p(N0)p(N1)p(N2)p(N3)p(N4)p(N5)p(N6). The probability distributions represent a probability that an input feature will have a particular value. For dependent nodes, it is the probability that an input feature will have a particular value given a particular value of the one or more source nodes from which it depends. A processing device defines an initial probability distribution for each input feature assuming independence from other input features and based on the input dataset and/or historical datasets. The processing device uses the graph data structure (or some other dependency analysis) to condition the probability distribution of dependent input features to account for the dependencies. The processing device uses the probability distribution of the input features to refine weights assigned to each edge. The processing device generates a weighting dataset by sampling the probability distribution of input features that are independent (e.g., node404and node412) to define a set of values for these nodes. For instance, the processing device samples based on the values that exceed a threshold probability (e.g., likely to occur in an observed dataset), particular predefined values, etc. Then, for each value of the independent nodes, a corresponding set of values is determined for the dependent nodes by sampling the probability distributions for those nodes given the selected value of the independent nodes, where the sampling is based on the same criterion as the sampling of the independent nodes.
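The per-node distributions imply a parent set for each node, and the factorization of the combined distribution can be recovered mechanically from the edge list. In this sketch, the edge list is reconstructed from the conditional distributions named in the text:

```python
# Sketch: derive parent sets and the joint factorization
# p(N) = product of p(Ni | parents(Ni)) from a directed edge list.
# The edge list is reconstructed from the distributions in the text.

edges = [("N0", "N1"), ("N2", "N1"), ("N6", "N1"),
         ("N1", "N3"), ("N2", "N3"), ("N4", "N3"), ("N6", "N3"),
         ("N2", "N4"),
         ("N0", "N5"), ("N1", "N5"), ("N2", "N5"),
         ("N2", "N6"), ("N4", "N6")]

parents = {}
for src, dst in edges:
    parents.setdefault(dst, []).append(src)

def factor(node):
    # Render one factor: p(node) for roots, p(node|parents) otherwise.
    ps = sorted(parents.get(node, []))
    return "p({}|{})".format(node, ",".join(ps)) if ps else "p({})".format(node)

joint = "".join(factor("N{}".format(i)) for i in range(7))
# joint renders the full factorization of the combined distribution.
```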
For instance, if the value of node404, N0, is N0=α0, and node412, N2, is N2=α2, the probability distribution for node408, N1, becomes p(N1|N0=α0, N2=α2, N6=α6) where α6is a value sampled from the probability distribution p(N6|N2=α2, N4=α4) and α4is a value sampled from the probability distribution p(N4|N2=α2). The probability distribution for node408(e.g., p(N1|N0=α0, N2=α2, N6=α6)) is sampled to define a set of values for node408. This process is repeated for each sampled value of the independent node404and node412. The processing device aggregates the sets of values sampled from the probability distributions of each node into the weighting dataset. The processing device uses the weighting dataset to define weights for each edge of graph data structure400. The processing device executes a correlation algorithm such as one based on Pearson's correlation coefficient to define the correlation between pairs of input features based on the values of each feature in the weighting dataset. The processing device uses the correlation coefficient (e.g., a number between −1, indicating a strong negative correlation, and 1, indicating a strong positive correlation, with 0 indicating no correlation) to define a weighting value for the edge. With the weights assigned to each edge, graph data structure400can be used to propagate modifications to one input feature to the dependent input features. FIG.5depicts an example of edge weight matrices that measure the accuracy of the graph data structure in modeling inter-feature dependencies, according to certain embodiments of the present disclosure. The edge weight matrices depicted are based on the same input dataset that generated graph data structure400ofFIG.4. Rows of an edge weight matrix represent a source node and the columns represent destination nodes such that each cell represents an edge between a source node and a destination node. Each cell of the matrix includes an addressable memory region that stores the weight value of the edge associated with the cell.
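The sample-then-correlate step can be illustrated as follows. The linear dependence between the two features and the noise level are assumptions standing in for real conditional distributions:

```python
# Sketch: build a weighting dataset by sampling an independent feature,
# sample a dependent feature conditioned on it, then derive an edge
# weight from Pearson's correlation coefficient. The linear-plus-noise
# conditional distribution is an illustrative assumption.
import math
import random

random.seed(0)

def pearson(xs, ys):
    # Pearson's correlation coefficient, computed directly.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Sample the independent node N0, then sample the dependent node N1
# given each sampled value of N0.
n0 = [random.gauss(0, 1) for _ in range(500)]
n1 = [0.8 * x + random.gauss(0, 0.3) for x in n0]

edge_weight = pearson(n0, n1)  # strong positive correlation -> large weight
```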
The processing device uses edge weight matrices504-512to measure the accuracy of graph data structures such as graph data structure400. For instance, if the accuracy (e.g., matrix512) diverges by more than a threshold amount, the processing device may rebuild the graph data structure using historical data (e.g., the same data used to originally construct the graph data structure) or using contemporaneously acquired data. Edge weight matrices include a cell for each possible pair of linked input features. A blank cell represents a null space in which the two referenced input features are not linked by an edge. In some instances, the diagonal cells (e.g., cell00,11,22, etc.) will also be blank as edges do not link a node to itself. The numerical value assigned to a cell represents the edge weight. Although integers are depicted inFIG.5, any value may be used such as a real number, a string, a function, a pointer, etc. Edge weight matrix504represents the true edge weights as predetermined based on known correlations in an input dataset. Data processing devices receive the input dataset along with labels that indicate the correlations between input features and the degree to which the input features are correlated. In other words, the input dataset and corresponding labels indicate the nodes, edges, and edge weights for the graph data structure. Edge weight matrix508represents a graph data structure generated using techniques described above in connection withFIG.3andFIG.4. The processing device received the input data, but nothing else, and determined the relationships between the input features and the degree of correlation between the input features from the raw data. The processing device generated a graph data structure based on the observed correlations. The resulting graph data structure is depicted inFIG.4and represented by matrix508. Delta matrix512is an edge weight matrix that represents the delta between matrix504and matrix508.
Since matrix504represents the true graph data structure and matrix508represents the graph data structure generated from observations using the techniques described above, the delta matrix512represents the accuracy of matrix508and consequently of the graph data structure generated from the observed correlations of the input dataset. Matrix512indicates that the graph data structure of matrix508included an edge linking node420to node412and an edge linking node412to node420. These edges are not included in matrix504. In addition, matrix508assigned a different weight to the edge linking node404to node428. Various accuracy algorithms can be used to define an accuracy score for an edge weight matrix. For instance, the cells of the delta matrix512can be aggregated into a singular score. In this instance, the score may not take into account the additional/missing edges. In some instances, if matrix508includes an extra edge (or omits an edge), the graph data structure may be re-generated. An extra edge may propagate a modification to an improper input feature, corrupting the updated input features. Similarly, an omitted edge will fail to propagate the modification to a dependent input feature, corrupting the updated input features and preventing the predictive model from generating accurate predictions. The processing device generates edge weight matrices from graph data structures and stores them in local (or remote) storage. The processing device can load the edge weight matrix from memory to test the accuracy of the graph data structure at any time. In some instances, the processing device distributes the delta matrix512with the predicted outcome. Delta matrix512can be used as an accuracy signature that verifies the integrity of the predicted outcome. FIG.6depicts an example of a graph data structure that models the inter-feature dependencies of a set of input features, according to certain embodiments of the present disclosure.
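The delta-matrix comparison can be sketched with small matrices. The 3×3 weights and the rebuild criterion below are assumptions for illustration, not values from FIG. 5:

```python
# Sketch: compare a true edge weight matrix with an observed one and
# aggregate the delta matrix into a singular divergence score.
# The 3x3 matrices and rebuild criterion are illustrative assumptions.

true_w   = [[0, 3, 0],
            [0, 0, 2],
            [0, 0, 0]]
observed = [[0, 3, 1],   # extra edge from node 0 to node 2
            [0, 0, 2],
            [0, 0, 0]]

n = len(true_w)
delta = [[observed[i][j] - true_w[i][j] for j in range(n)] for i in range(n)]

# Aggregate the cells of the delta matrix into a singular score.
score = sum(abs(cell) for row in delta for cell in row)

# A nonzero score flags an extra/missing edge or a mismatched weight,
# which may trigger re-generation of the graph data structure.
needs_rebuild = score > 0
```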
Graph data structure600represents an input dataset of a marketing system. Graph data structure600includes ten nodes with various inter-feature dependencies. Each node can include zero or more edges that link nodes to other nodes. For instance, node632is independent and no other nodes are dependent on node632. Node608is also independent in that the value of the input feature represented by node608is not dependent on another node. Nodes604-636represent input features of a marketing system for a particular product or service. For instance, node604represents travel data, node608represents display advertisements (e.g., media distributed or placed in public places), node612represents search advertisements, node616represents other entities, node620represents social media (e.g., types of posts, post content, frequency of posts, location of posts, etc.), node624represents promotional email, node628represents other products and services offered by the same entity, node632represents other webpages such as those associated with competing products or services or related products or services (e.g., number of other webpages, frequency of visits, etc.), and node636represents direct information distribution (e.g., advertisements distributed directly to end users, other information associated with the product or service). FIG.7depicts an example of a modified graph data structure ofFIG.6based on modification of one or more input features, according to certain embodiments of the present disclosure. Graph data structure700is a modified version of graph data structure600ofFIG.6in which two input features are modified and the modifications are propagated to dependent input features. For instance, a processing device receives a request to predict an outcome given a modification to node608and node624. The value of each input is modeled by the function f(Ni)=αiwhere i represents a particular node.
The value of node608prior to the modification is represented by f(N1)=α1and after the modification by f(N1)=α1+Δα1, where Δα1is the modification made to the input feature of node608. Similarly, the value of node624is represented by f(N5)=α5before being modified and by f(N5)=α5+Δα5after the modification, where Δα5is the modification made to the input feature of node624. The processing device uses graph data structure700to propagate the modifications of nodes608and624to node616(e.g., destination node of node608), node628(e.g., destination node of node608) and node636(e.g., destination node of node608,624and616). In some instances, a modification to a node prevents the propagation of modifications to that node. For instance, the modification of the input feature associated with node608, which is a source node to node624, will not be propagated to node624due to the modification of node624. The value of node624as a result of the modification remains f(N5)=α5+Δα5even though node624is a destination node to another node with a modified value. The value of dependent nodes is a function of at least (a) the modification of the input feature and (b) a weight assigned to an edge linking the first input feature to the second input feature within the directed graph. For instance, the processing device represents the value of node616, which is dependent on node608, by f(N3)=α3+w13Δα1where w13is the weight of edge640. The processing device represents the value of node628, which is dependent on node608, by f(N6)=α6+w16Δα1where w16is the weight of edge644. Node636is a destination node that depends on the value of node608, node624, and node616. The processing device represents the value of node636by f(N9)=α9+w19Δα1+w59Δα5+w39Δα3where w19is the weight of edge648, w59is the weight of edge652, w39is the weight of edge656, and Δα3=w13Δα1.
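The FIG. 7 computation can be reproduced numerically. The edge weights and modification values below are assumptions (the disclosure gives no numeric weights), and directly modified nodes are pinned so that propagated changes do not overwrite them:

```python
# Sketch of the FIG. 7 propagation: N1 and N5 are modified directly;
# pinned (directly modified) nodes do not receive propagated changes.
# Edge weights and deltas are illustrative assumptions.

weights = {("N1", "N3"): 0.5,   # w13 (edge 640)
           ("N1", "N6"): 0.4,   # w16 (edge 644)
           ("N1", "N9"): 0.2,   # w19 (edge 648)
           ("N5", "N9"): 0.3,   # w59 (edge 652)
           ("N3", "N9"): 0.6,   # w39 (edge 656)
           ("N1", "N5"): 0.7}   # edge into the pinned node N5

direct = {"N1": 2.0, "N5": 1.0}  # the scenario's Δα1 and Δα5

delta = dict(direct)
# Visit the remaining nodes in topological order; pinned nodes are skipped,
# so the N1 -> N5 edge contributes nothing to N5.
for node in ["N3", "N6", "N9"]:
    delta[node] = sum(w * delta.get(src, 0.0)
                      for (src, dst), w in weights.items() if dst == node)

# delta["N9"] == w19*dA1 + w59*dA5 + w39*dA3, where dA3 = w13*dA1
```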
The processing device represents the value of destination nodes (of a modified source node) as f(Ni)=αi+wjiΔαj, where i represents the destination node and j represents the source node. FIG.8depicts an example graph representing the effect of modifications to two input features as described inFIG.7on the predicted outcome, according to certain embodiments of the present disclosure. The dotted line660represents an application of the machine-learning model to an input feature set in which modifications are not propagated (e.g., a static approach) to dependent input features. The solid line664represents an application of the machine-learning model to an input feature set in which modifications are propagated (e.g., a propagation approach) to dependent input features. At x=0, the predicted outcomes using the propagation approach and the static approach are the same. For x>0, the propagation approach yields a higher value of the predicted outcome. This is due to the increase in the two modified input features causing an increase in the values of dependent input features. For x<0, the propagation approach yields lower predicted outcomes than the static approach due to the lower values further decreasing the values of dependent input features. As the absolute value of x increases, the difference between the propagation approach and the static approach increases due to the larger modification causing a larger variation in the predictive outcome. The variation between the static approach and the propagation approach at higher absolute values of x indicates that the static approach has a higher error rate with larger modifications to input features. A processing device utilizing the propagation approach generates updated input features (resulting from modifications of one or more input features) with lower error rates and higher accuracy than the static approach.
FIG.9depicts an example of a process for predicting the probability of an outcome resulting from variance of one or more input features, according to certain embodiments of the present disclosure. At block904, a processing device receives an input dataset that includes input features for a trained machine-learning model. The processing device receives the input dataset from local storage or from a remote device such as a server or network-accessible database. In some instances, the processing device receives a request to execute a simulation of a scenario with an identification of the data to use for the simulation. The scenario can include modified values for a first input feature to predict an outcome that will result from the modification. In response to the request, the processing device accesses the input dataset that corresponds to the identification (e.g., using a remote procedure call to a database, etc.). At block908, the processing device modifies the input dataset by propagating a modification to the first input feature to a second input feature dependent on the first input feature. Modifying the input dataset includes blocks912-928. At block912, the processing device generates a directed graph that includes nodes that represent the input features and edges that link pairs of nodes. The edges are directed to indicate that a second input feature represented by a destination node is dependent on a first input feature represented by a source node. The directed graph may be a directed acyclic graph. The processing device generates the directed graph using continuous optimization in which a graph of nodes representing input features is initialized. The processing device iteratively modifies the graph by adding, removing, or reversing a directed edge and then determining a score of the resulting modification. If the score increases, the modification is retained. If the score decreases, the modification is discarded.
The processing device continuously optimizes the directed graph until the score exceeds a threshold value or the score no longer increases between iterations. The score may be based on correlations observed from the input dataset and/or historical datasets. For instance, a correlation can be observed when the value of one input feature increases as the value of another input feature increases. In some instances, the processing device defines a correlation coefficient that defines relationships between two or more input features. The coefficient can be a value between −1 (e.g., strong negative correlation) and 1 (e.g., strong positive correlation), with 0 indicating no correlation. Adding an edge between two correlated input features (e.g., a correlation coefficient between 0 and 1) causes a positive score (e.g., which is equal to the correlation coefficient, proportional to the correlation coefficient, equal to some other positive value, etc.). At block916, the processing device defines a probability distribution for each input feature taking into account the dependencies identified by the directed graph. Probability distributions indicate, for independent input features, a probability that the input feature will be a particular value, and for dependent input features, a probability that the input feature will be a particular value given a value of another input feature. For instance, the first (and independent) input feature is represented by p(N1) and the second input feature is represented by p(N2|N1). The probability of values can be determined from generated data (randomized data), from the input dataset, and/or from historical datasets. At block920, the processing device selects a subset of potential destination values from the probability distribution of the second input feature based on the subset of potential destination values having a probability that exceeds a probability threshold.
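The add/remove/reverse search of block912 can be sketched as greedy hill climbing. The three-feature correlation table and the correlation-sum score are assumptions, and a full implementation would also enforce acyclicity:

```python
# Sketch of the block 912 structure search: propose add/remove/reverse
# edge moves and retain a move only if the score increases. The score
# (sum of correlations over present edges) and the correlation values
# are illustrative assumptions; acyclicity checks are omitted.

corr = {("A", "B"): 0.9, ("B", "C"): 0.7, ("A", "C"): 0.1}
nodes = ["A", "B", "C"]

def score(edges):
    return sum(corr.get(e, 0.0) for e in edges)

def moves(edges):
    # Yield every single-edge modification of the current graph.
    for s in nodes:
        for d in nodes:
            if s == d:
                continue
            cand = set(edges)
            if (s, d) in cand:
                cand.remove((s, d))        # remove move
            else:
                cand.discard((d, s))       # reverse: drop the opposite edge
                cand.add((s, d))           # add move
            yield cand

edges = set()
improved = True
while improved:               # stop when no move increases the score
    improved = False
    for cand in moves(edges):
        if score(cand) > score(edges):
            edges, improved = cand, True

# The search retains only the positively correlated edges.
```

Because each retained move strictly increases a bounded score, the loop terminates, matching the stopping condition described above.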
For instance, the probability distributions of each of the first input feature and the second input feature can be sampled to generate a weighting dataset. The processing device samples the probability distribution of the first input feature by selecting values for the first input feature that exceed a probability threshold. The processing device samples the probability distribution of the second input feature by conditioning on the selected values for the first input feature (e.g., the sampled values). For instance, since the probability distribution for the second input feature is represented by p(N2|N1), the processing device samples the probability distribution of the second input feature by p(N2|N1=x), where x is a sampled value of the first input feature. The sampled values from the probability distributions of the first and second input features are aggregated into a weighting dataset. At block924, the processing device updates a weight of the edge between the source node and the destination node. The processing device uses the weighting dataset to define correlations between the input features connected by an edge. The processing device assigns edges linking nodes with a high degree of correlation a higher weight than edges linking nodes with a low degree of correlation. In some instances, the processing device uses the degree of correlation to add new edges, remove existing edges, or revise edges. For instance, if a degree of correlation between two input features is low (e.g., less than zero), then the edge may be removed. In other instances, the processing device rebuilds the directed graph in response to a low degree of correlation between two input features. At block928, the processing device updates the destination value of the second input feature as a function of at least (a) the value of the input feature of the source node and (b) the updated weight.
The destination value (e.g., the value of destination nodes that are dependent on source nodes) is modified as the result of the modification to the first input feature of the input dataset. For instance, the processing device executes a simulation of a scenario in which the machine-learning model predicts an outcome resulting from an increase/decrease in the first input feature. The processing device propagates the modification to the first input feature to the second input feature based on the dependency of the second input feature on the first input feature. The value of the updated second input feature is f(N2)=α2+w12Δα1, where w12is the edge weight of the edge linking the first input feature to the second input feature, and Δα1is the modification to the first input feature. Once the values of modified input features are set, the values remain unchanged during graph traversal (e.g., during propagation of the modifications to the input features). For instance, the graph data structure ofFIG.7illustrates a modification to node608and node624. The modified value of node624remains static even though node624is dependent on node608. That is, the propagation of the modification to the value of the input feature represented by node608would otherwise modify the value of the input feature represented by node624. Yet, since node624was modified as per the scenario (in this example), the value of the input feature represented by node624remains equal to the modification (as originally defined by the scenario). At block932, the processing device applies a trained machine-learning model on the modified input dataset. The machine-learning model may be any type of machine-learning model trained in any particular manner (e.g., supervised learning, semi-supervised learning, or unsupervised learning). The machine-learning model can be a predictive model that predicts an outcome (or a probability of an outcome's occurrence) based on the input features.
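Applying the trained model to modified datasets at block932 supports one-at-a-time scenario testing. In this sketch a simple linear function stands in for the trained machine-learning model, and the feature names and coefficients are assumptions:

```python
# Sketch: modify one input feature at a time, apply a stand-in model,
# and rank features by their effect on the predicted outcome.
# The linear model and feature values are illustrative assumptions.

baseline = {"emails": 10.0, "social": 5.0, "ads": 2.0}

def model(features):
    # Stand-in for the trained predictive model.
    return (1.5 * features["emails"]
            + 0.2 * features["social"]
            + 3.0 * features["ads"])

base_outcome = model(baseline)

effects = {}
for name in baseline:
    scenario = dict(baseline)
    scenario[name] += 1.0            # modify a single feature per scenario
    effects[name] = model(scenario) - base_outcome

# Rank features by the size of their effect on the outcome.
ranked = sorted(effects, key=effects.get, reverse=True)
```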
The outcome can be compared to a previous outcome (e.g., such as a baseline outcome) to determine a degree to which the modification of the first input feature increased/decreased the outcome. The processing device defines multiple scenarios to iteratively test the sensitivity of the outcome to particular input features. For instance, the scenarios may modify one input feature at a time to determine which input features had a greater effect on the outcome. The processing device can then rank the input features. The ranked list of input features can be output along with the outcome. In some instances, the outcome of the machine-learning model can be displayed via the graphical user interface of a display device. The graphical user interface receives input defining simulations or scenarios and automatically displays the predicted outcome based on the scenario. Previous simulations or scenarios can be displayed with the current simulations or scenarios. The blocks ofFIGS.3and9, though presented in a particular order, may be executed in any particular order. In addition, each block ofFIG.3orFIG.9may be executed one or more times before moving on to the next block. Example of a Computing System for Implementing Certain Embodiments Any suitable computing system or group of computing systems can be used for performing the operations described herein. For example,FIG.10depicts examples of computing system1000that executes market simulations. In some embodiments, the computing system1000generates graph data structure120that is used to define input features124for particular scenarios, as depicted inFIG.10. In other embodiments, the graph data structure may be generated by a separate computing device. The depicted example of a computing system1000includes a processor1004communicatively coupled to one or more memory devices1008. The processor1004executes computer-executable program code stored in a memory device1008, accesses information stored in the memory device1008, or both.
Examples of the processor1004include a microprocessor, an application-specific integrated circuit (“ASIC”), a field-programmable gate array (“FPGA”), or any other suitable processing device. The processor1004can include any number of processing devices, including a single processing device. The memory device1008includes any suitable non-transitory computer-readable medium for storing data, program code, or both. A computer-readable medium can include any electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include a magnetic disk, a memory chip, a ROM, a RAM, an ASIC, optical storage, magnetic tape or other magnetic storage, or any other medium from which a processing device can read instructions. The instructions may include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, and ActionScript. The computing system1000may also include a number of external or internal devices, such as input or output devices. For example, the computing system1000is shown with one or more input/output (“I/O”) interfaces1016. An I/O interface1016can receive input from input devices or provide output to output devices. One or more buses1012are also included in the computing system1000. The bus1012communicatively couples one or more components of a respective one of the computing system1000. The computing system1000executes program code that configures the processor1004to perform one or more of the operations described herein.
The program code includes, for example, the machine-learning models128, code that updates input features to correspond to a particular scenario, code for generating directed graphs, or other suitable applications that perform one or more operations described herein. The program code may be resident in the memory device1008or any suitable computer-readable medium and may be executed by the processor1004or any other suitable processor. In some embodiments, the program code can execute in a cloud environment where portions of the program code are executed by multiple devices in parallel. The computing system1000can access input datasets144and the graph data structure120in any suitable manner. In some embodiments, some or all of one or more of these datasets, models, and functions are stored in the memory device1008, as in the example depicted inFIG.10. For example, a computing system1000that executes the machine-learning model128to predict an outcome can obtain the input datasets144and use the graph data structure to generate updated input features124within memory1008. In additional or alternative embodiments, one or more of these datasets, models, and functions are stored in the same memory device (e.g., the memory device1008). For example, a common computing system, such as the processing device104depicted inFIG.1, can host the machine-learning models128and the graph data structure120. In additional or alternative embodiments, one or more of the programs, datasets, models, and functions described herein are stored in one or more other memory devices accessible via a data network. The computing system1000also includes a network interface device1020. The network interface device1020includes any device or group of devices suitable for establishing a wired or wireless data connection to one or more data networks. Non-limiting examples of the network interface device1020include an Ethernet network adapter, a modem, and the like.
The computing system1000is able to communicate with one or more other computing devices (e.g., server136that directs the operations of processing device103) via a data network using the network interface device1020. General Considerations Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter. Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform. The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multi-purpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. 
Any suitable programming, scripting, or other type of language or combination of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device. Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied; for example, blocks can be re-ordered, combined, and/or broken into sub-blocks, and certain blocks or processes can be performed in parallel. The use of "adapted to" or "configured to" herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of "based on" is meant to be open and inclusive, in that a process, step, calculation, or other action "based on" one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting. While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude the inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.
DETAILED DESCRIPTION

The present disclosure relates to systems and methods for facilitating the derivation of additional features (e.g., data columns) associated with a dataset, based on the semantic context (e.g., actual or implied meaning or significance) of existing features in the dataset. Such additional features can then be used to provide an administrator or user, such as a data scientist, additional insight pertaining to the dataset. For instance, every column of a dataset may be annotated with basic pieces of information such as its semantic type, its outcome variable, and how each column relates to other fields (e.g., other columns). These pieces of information may then be used to create, generate, or derive additional features pertaining to the dataset, in some cases with different semantic contexts and/or types than those of the input feature(s). Specifically, the semantic type of a feature may describe the kind of information that the data in the feature represents. The outcome variable may describe something that an administrator of the dataset deems important to track, such as a Key Performance Indicator (KPI) in a business environment related to the revenue or cost of a product. The information about how each column relates to other fields may indicate the interdependencies or relationships between the columns in the dataset. For example, a zip code feature may be marked as containing information closely correlated with the city and state features in the record. All of these pieces of information may be identified from a dataset so that one or more particular subsets of the dataset may be identified as being associated with respective feature(s). That is, the information of the dataset may be used in a processing environment to identify certain features of the data before the data is processed by one or more semantic algorithms (e.g., feature derivation algorithms) for analysis.
In some embodiments, this stage, as well as the identification of the semantic contexts of features, is executed manually (e.g., via input through a graphical or command-line interface), while in other embodiments the pre-processing is performed programmatically using heuristics or similar techniques. Datasets (e.g., input data or data from source files) may be obtained from various computing services or data stores, and each of these datasets may contain columns of information with varying feature types. These varying feature types may then be identified and tagged to form a subset of the dataset. The subset of the dataset may be tagged and identified based on the semantic type declaration, semantic metadata, and/or semantic information of the input data. In some instances, the subset of the dataset may then be formatted, normalized, and/or cleansed before being sent to a semantics processor, for example, to apply algorithms that may derive many more features than were present in the dataset when it was first obtained. As a result, an automated processing technique such as the one described herein is an optimized method of deriving new features. A technical effect and advantage of the techniques described herein is the creation, and population, of a greater number of relevant features than would be feasible through the manual intervention of administrators (e.g., data scientists), while also lowering the defect rates attributable to user error, since the techniques described herein allow for a simple annotation of data while all other work is automated.
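The column annotations and subset tagging described above can be sketched in a few lines. This is an illustrative sketch only, not the disclosed implementation; all column names, metadata keys, and values below are hypothetical.

```python
# Hypothetical sketch: annotating columns ("features") with semantic
# metadata (semantic type, KPI/outcome flag, related fields), then
# tagging the subset of a dataset that matches a given semantic type.
dataset_metadata = {
    "zip_code": {
        "semantic_type": "PostalCode",
        "related_to": ["city", "state"],  # interdependent columns
    },
    "monthly_revenue": {
        "semantic_type": "Currency",
        "unit": "USD",
        "kpi": True,  # outcome variable the administrator tracks
    },
}

def tag_subset(rows, metadata, semantic_type):
    """Select the columns whose declared semantic type matches,
    forming a tagged subset of the dataset for feature derivation."""
    cols = [name for name, meta in metadata.items()
            if meta["semantic_type"] == semantic_type]
    return [{c: row[c] for c in cols} for row in rows]

rows = [{"zip_code": "94103", "city": "SF", "state": "CA",
         "monthly_revenue": 120.0}]
print(tag_subset(rows, dataset_metadata, "Currency"))
# [{'monthly_revenue': 120.0}]
```

In a full system the metadata would accompany the source files rather than live in code, and the tagged subset would be handed to the semantics processor for derivation.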
Additionally, by virtue of considering a practically unlimited number of input features, as well as taking account of their semantic context in advance, the derived features are considerably more pertinent to the desired outcome (the semantic context of which may also be defined ahead of time) and result in more efficient processing of the overall dataset by systems implementing machine learning algorithms (which may be the same system or other downstream systems). As mentioned, the mechanisms described herein provide a more efficient way to process large datasets with varying types of information (e.g., datasets with varying feature types) with little or no oversight and/or input by an administrator. Administrators typically have to manually identify or deduce features in a given dataset, as well as the informational relationships between them, for use as input for machine learning processing. This manual configuration requires detailed knowledge about the systems, the data itself, and/or their respective interdependencies and external dependencies. However, as the number, size, and complexity of the source files increase, the effort and knowledge required increase rapidly, and the deductive and/or inductive nature of this manual processing (i.e., working from the raw input data and imputing semantic meaning to the various groupings perceived therein) necessarily results in poor scaling and incomplete and/or incorrect feature identification. Further, this is typically done every time the data is analyzed, leading to duplication of this extensive effort. Thus, by providing a framework by which the high-level semantic contexts, and the relationships between them, are definable in connection with the input data, the techniques described herein provide many technical advantages for processing data in a technical environment.
That is, the techniques described in the present disclosure facilitate the process of deriving additional features not by automating an existing manual process, but by integrating semantic information (context/meaning, type, etc.) into a processing flow that has, to this point, required human induction and deduction to derive such semantic information from syntactic information (i.e., data types and groupings thereof, such as integers, strings, floats, and the like). In order to facilitate such processing, a system first obtains and processes input data from a file or multiple files. For example, when input data (e.g., from a source file or files) is received, the system identifies features pertaining to the input data. Identifying the features of the input data provides the system with information about how the columns of the input data relate to one another. Once identified, a subset of the input data can be tagged with semantic metadata, that is, metadata that includes or is otherwise associated with information describing the semantic context/significance of each feature (e.g., rather than just identifying a feature as containing integers, identifying the feature as "temperature" with unit "Fahrenheit degrees"), and sent to a semantics processor which implements algorithms (e.g., heuristics, machine learning, etc.) that process the tagged input data to generate/derive additional features, based at least in part on the semantic metadata and a variety of factors. These factors may be driven by system configurations and/or predetermined user-defined policies (e.g., specifying a desired outcome, the manner in which a given input feature results in a derived feature, how identified features interrelate, etc.). The semantic metadata, in some instances, may include information about the predetermined user-defined policies.
The semantics processor may be configured, in a non-limiting example, such that it may process a practically unlimited number of input features simultaneously or in a short amount of time without negative scaling constraints (e.g., O(n) or O(1) scaling, rather than multiplicative, logarithmic, or exponential computational requirements, to achieve linear time-to-completion relative to feature quantity and/or complexity) to generate new features in addition to the features already present in the input data originally obtained. In the preceding and following description, various techniques are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of possible ways of implementing the techniques. However, it will also be apparent that the techniques described below may be practiced in different configurations without the specific details. Furthermore, well-known features may be omitted or simplified to avoid obscuring the techniques being described. FIG. 1 illustrates an example environment 100 where input data 102 is processed in accordance with an embodiment. Input data 102 may include a set of values in a column or multiple columns obtained from a file. As an example, input data 102 may contain a column or multiple columns with integer values that indicate a timestamp of when a particular customer purchased a product online. The column or columns would provide values pertaining to the date and/or time that the customer purchased a product online. In some instances, the date and time may not be integer values but rather textual strings that refer to a month, day, and/or year. In the example illustrated in FIG. 1, input data 102 may provide data 106 to a processing service 108. The data 106 may be provided from a computing service (not depicted in FIG. 1) either automatically based on system policies or as directed by a user associated with one or more computing devices (not depicted in FIG. 1).
A computing device, or a user in connection with a computing device, may generate data related to the operation of a business or research project. For example, data may be generated or gathered using a computing device to track timestamps of a customer purchasing a product online, or other such data related to purchasing products online. It should be noted that, while the examples described herein refer to data pertaining to timestamps, other types of streaming data, streaming textual data, non-streaming data, and non-streaming textual data may also be processed using the techniques described herein. For example, a corpus of English text from, for example, a collection of books and their titles may be similarly collected using the techniques described herein. It should also be noted that, while the examples described herein are based on a solution to the problem of dynamically deriving new and additional features based on a collection of data, the techniques described herein may also be used to, for example, gather statistics on the data, analyze trends in the data, produce reports about the data, or perform other such operations. For example, the processes illustrated below for deriving additional features of input data may also be used to search for all occurrences of a specific data item in the data, and thereby produce a frequency count of elements for that particular data item in the dataset. As shown in FIG. 1, a processing service 108 may receive input data 102 (e.g., one or more source files) on behalf of a user or administrator in connection with a computing device. In some instances, the processing service 108 may first process the data in the input data 102 to properly clean or normalize the values contained therein. That is, the processing service 108 may, in some instances, perform some cleaning, normalizing, and/or formatting of the data before other processing is performed on the input data 102.
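The frequency-count use mentioned above can be sketched briefly; the column contents below are illustrative only.

```python
from collections import Counter

# Illustrative sketch of the frequency-count use mentioned above:
# counting every occurrence of a specific data item in one column.
product_column = ["widget", "gadget", "widget", "widget", "gizmo"]
counts = Counter(product_column)
print(counts["widget"])  # occurrences of the item "widget"
# 3
```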
The processing service 108 may be a service or a computing device provided by a computing environment 100 that receives, obtains, or sends a request for the input data 102. The processing service 108 may receive the input data 102 using a direct connection such as, for example, a network connection, either via a wired or wireless connection. The processing service 108 may also receive the input data 106 by, for example, interfacing with a separate service, monitoring one or more network addresses, monitoring a storage location, or subscribing to a data notification service. The data storage device 116 (e.g., data store) may be configured to push or provide data 122 based on system policies, or the data may be provided based on a user's input in connection with other computing devices in the computing environment 100. The processing service 108 may obtain the input data 102 and elect to send 114 the input data 102 for storage in a data store 116, a buffer, or a cache before any processing is performed on the input data 102. That is, in the example illustrated in FIG. 1, the processing service 108 may provide the input data 102 for storage in a data storage service 116, which may be one or more storage locations provided by the computing environment 100. The data storage service 116 may include the same data as, or the entirety of, the input data 102, or it may include a subset of the input data 102. For example, the data storage service 116 may include a filtered subset of the data, data for a predetermined time period, or some other subset of the input data 102 as predetermined by system configurations. The data storage service 116 may be a computing device configured to store data. In an embodiment, the processing service 108 and the data storage service 116 may be the same service and share similar configurations.
That is, the one or more computing devices may send data 106 directly to the data storage service 116, which may incorporate the functionalities of the processing service 108, such as receiving data 106. In the example illustrated in FIG. 1, the processing service 108 receives the data from the input data 102 and processes the data 110 in a semantics processor 112 to identify features 124. Once the input data is processed and features identified, one or more subsets may be tagged for processing. The one or more subsets of the input data may be tagged based on the feature types of the data. The tagged set of data may then be processed by the semantics processor 112 to generate additional features 126 for the input data 102. Once the additional features have been generated, the semantics processor 112 may generate new data 128 to include the additional features. Subsequently, the generated new data 128 may be sent from the semantics processor back to the processing service 108 or to a data storage service 116 for further processing. That is, the generated new data 128 may be used as input into the semantics processor 112 again to generate yet more features, or requested and used by administrators (e.g., data scientists) for additional insight into the input data 102 that was originally received. Each of the steps described in FIG. 1 to derive additional features and generate new data is described in greater detail below in connection with FIGS. 2-9. In an embodiment, the semantics processor 112 may also include functionalities such as cleaning and normalizing the data before processing. In another embodiment, the semantics processor 112 may be instructed by a user in connection with a computing device (not depicted in FIG. 1) to use the new data to identify additional features in the new data and derive even more features.
Although the example illustrated in FIG. 1 shows the semantics processor 112 as a separate processor from the processing service 108, in an embodiment, the semantics processor 112 may be the same as the processing service 108 and provide the functionality associated with the processing service 108 described herein. In the example illustrated in FIG. 1, the data received by the processing service 108 includes external input data 118 received from outside of the computing environment 100. That is, the external input data 118 may be from another administrator associated with computing devices of the computing environment 100; from services, applications, modules, or interfaces hosted outside of the computing environment 100; or from services, applications, modules, or interfaces configured to connect and/or communicate with the processing service 108 of the computing environment 100. In an embodiment, the external input data 118 comes from services, applications, modules, or interfaces hosted in an isolated private network (e.g., a virtual private network), logically isolated from the other services 102 of the computing environment 100. In an embodiment, the semantics processor 112 may be implemented in a cloud computing instance (e.g., virtual machine, data bucket, etc.) in a virtual environment. That is, the semantics processor 112 may, in some instances, be spun up on demand and implemented using a virtual machine supported by computing resources hosted by a computing resource service provider. The virtual machine may be spun up on demand based on a request for a virtual machine from a processing service 108, from a user in connection with a computing device, or based on a service level agreement (SLA) of the computing resource service provider.
As further shown in FIG. 1, in an embodiment, the input data 102 and the external input data 118 may first be processed by the semantics processor 112 to generate additional features and, in turn, new data 128, such that the input data 102, the external input data 118, and the new data 128 may all be stored in the data storage service 116. However, in an embodiment, the data from the input data 102 and the external input data 118 may also first be stored in the data storage service 116 before being processed by the semantics processor 112. The data storage service 116 may be a storage device configured to store data, a buffer, and/or virtual storage hosted by a computing resource service provider. FIG. 2 illustrates an example environment 200 where additional input data (e.g., external input data) is processed. As described above in connection with FIG. 1, FIG. 2 illustrates that external input data 218 may be requested by an administrator in connection with a computing device to join with newly generated data 214 after input data 210 has been processed by a semantics processor 212. That is, input data 210 may be obtained by a semantics processor 212, and features of the input data 210 may be identified. Once the features have been identified, a subset of the input data 210 may be tagged or identified to form a tagged subset. Once the subset of the input data 210 has been tagged, the semantics processor may process the tagged subset of the input data 210 to generate additional features pertaining to that tagged subset. In an embodiment, new data 214, or a new file containing new data, may be generated to include at least the original input data 210, the original features identified with the input data 210, and the newly generated additional features that were derived based on the tagged subset of the input data 210. In some instances, external input data 218 may be submitted to or obtained by a semantics processor 212 to process with the new data 214.
That is, the semantics processor 212 may perform the same or similar operations as described in connection with the input data 210 mentioned above to derive even more additional features associated with the input data 210 and the external input data 218. As an example, the input data 210 may include a column describing how many online purchases a customer makes on a daily basis. Another column may indicate the purchase price of the online purchases that the customer makes on a given day. Additional features may be generated based on these two columns. That is, the semantics processor 212 may obtain the input data 210 with these two columns and derive one or more additional features for additional insight into the information. For example, a derived additional feature may be the average price per item of the purchases the customer made on any given day. Specifically, the average price the customer spends per item per day can be derived by the semantics processor 212 from these two columns of information. Once the one or more additional features have been generated, new data may be created to include the original two columns of information along with a new column of information pertaining to the average purchase price for each day. A new file may contain all of this information and be sent to a data store 216 for storage. In some instances, a data scientist in connection with a computing device may then request the new file or new data 214 from the data store 216 and perform one or more additional operations on the new file or new data 214 accordingly. As further illustrated in FIG. 2, in some embodiments, the new data with this new additional feature 214 may elicit additional or external input data 218 to join in order to generate even more features. In an embodiment, the external input data 218 may include one or more columns that pertain to how long it takes for a customer to make an online purchase decision.
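The two-column derivation described above can be sketched as follows. This is a minimal illustration of the arithmetic only; the row values and column names are hypothetical, not taken from the disclosure.

```python
# Sketch of the derivation described above: from two input features
# (purchases per day and total purchase price per day), derive an
# "average price per item" feature and keep the original columns.
input_rows = [
    {"date": "2018-01-01", "num_purchases": 4, "total_price": 20.0},
    {"date": "2018-01-02", "num_purchases": 2, "total_price": 15.0},
]

def derive_avg_price(rows):
    new_rows = []
    for row in rows:
        derived = dict(row)  # preserve the original two columns
        derived["avg_price_per_item"] = (
            row["total_price"] / row["num_purchases"])
        new_rows.append(derived)
    return new_rows

for r in derive_avg_price(input_rows):
    print(r["date"], r["avg_price_per_item"])
# 2018-01-01 5.0
# 2018-01-02 7.5
```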
In other words, the one or more columns in the external input data 218 may include timestamps reflecting how long it takes the customer to make a purchase while surfing the web. This external input data 218 may then be joined with the new data 214 and sent to the semantics processor again to generate another new feature. In an embodiment, the external input data 218 may be elicited based on the metadata associated with the input data 210. That is, the input data 210 may indicate where to pull the additional or external input data 218 from to join with the input data 210 to generate features. For example, the metadata associated with the input data 210 may indicate that external input data associated with the weather needs to be pulled from a weather server. This external input data 218 associated with the weather may provide columns indicating what the temperature was on the specific days a customer made online purchases. The pulling or requesting of information from a weather server, for instance, may be performed either before or after the input data 210 is processed by the semantics processor 212. Moreover, the joining of the weather data from the external input data 218 with the input data 210 may, in some instances, be performed before the semantics processor 212 processes the data. Nevertheless, the additional or external input data 218 may be solicited to join with the input data 210 to generate additional features. In some instances, the semantics processor 212 may run the new data 214 again by receiving the new data 214 from the data store 216. Alternatively, and in some instances, the new data 214 may be sent directly to the semantics processor 212 for further or additional processing without first being stored in the data store 216.
That is, by running the new data 214 again, the semantics processor 212 may receive or request a portion or the entirety of the new data 214 to identify features and further generate additional features associated with the new data 214. Note that the examples of the types of data and the information included in the columns of the input data are just illustrative; multiple columns could be used, and different types of features could be identified beyond online purchases, timestamps, and/or temperatures. FIG. 3 illustrates an example process 300, as described in connection with FIG. 2, for processing input data to generate additional features and processing additional input data to generate more additional features. That is, in 302, feature types associated with input data are identified. The input data may be obtained first by a processing service associated with a semantics processor or, in some instances, directly by the semantics processor. In an embodiment, the input data is obtained from source data from varying sources. In an embodiment, the input data is a stream of data (structured or unstructured, depending on the implementation). For example, the source data may come from a weather server, an online purchase research group's server, an external storage device such as a Universal Serial Bus (USB) device, unstructured or structured data sources (e.g., sensors and/or other Internet of Things (IoT) devices or groups thereof), or any server or storage device capable of storing, generating, and/or transferring data. Once the feature types associated with the input data have been identified, a first set of new data may be generated 304 to include any additional features that were generated by the semantics processor.
That is, the semantics processor, as described in connection with FIGS. 1-2, may parse the input data to identify features associated with the input data and to generate a subset of the input data from which additional features are derived. The result is a new set of data that includes, in some embodiments, the input data, the features originally identified for the input data, and/or the new additional features, in any combination pertinent to the implementation. As further illustrated in FIG. 3, in 306, additional or external input data may be obtained to join with the newly generated data. That is, for example, weather data from a weather server may be obtained such that the weather data (e.g., temperatures for each given day) is joined together with the new data pertaining to online purchases. The source and/or format of this data may be determined, in whole or in part, by the semantic context of the input feature(s) and/or the derived new feature(s). Based on this semantic context, in some embodiments, an appropriate programmatic interface is identified, and the semantics processor generates one or more requests to the programmatic interface that include information that causes the programmatic interface to provide appropriate data in return. For example, based on an input feature and the semantic context associated with that feature (e.g., a column of integers that is semantically defined as a date), the system may derive a new feature with a different semantic context (e.g., the temperature on that date) and, based on that semantic context, identify an appropriate data source and/or API through which to retrieve the data (e.g., a weather API).
In this example, the semantics processor forms the appropriate request(s) (e.g., get the average temperature on day range 1 through n based on the values of the source feature and the desired data associated with the new feature), retrieves that data, and further processes the retrieved data into a format contextually usable in connection with that of the input data and/or the derived feature (e.g., into a tabular or other format that matches up with the rows of data in the input data). In 308, features using the new data and the additional or external input data (e.g., the other set of data) may be identified and tagged to create a subset of the joined new data and the other set of data accordingly. Once the tagged subset is created, a second, or another new, set of data is generated with even more features 310. For example, the weather data and the online purchase data are joined together, and an additional feature, such as the number of times a customer makes a purchase during the warmest time of a day, is generated. That is, the second set of new data now includes the original input data, the originally identified features pertaining to the input data, the first set of new data and additional features pertaining to the first set of new data, and a second set of new data with additional features. FIG. 4 illustrates an example environment 400 where input data is processed via a semantics processor 410. That is, in 402, the semantics processor 410 receives input data 420 with one or more columns of data and processes the input data. As described in more detail above in connection with FIGS. 1-3, the semantics processor 410 processes input data (e.g., data from source files) from a variety of sources, servers, and/or storage services. More specifically, in 404, metadata for each feature that is identified from the input data may be obtained. The metadata may include information such as the semantic type.
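The external-data retrieval and join sketched in steps 306-308 above might look as follows, with the weather values already retrieved and reduced to a per-date mapping. All data and field names here are hypothetical stand-ins for whatever a real weather API would return.

```python
# Sketch of joining externally retrieved data (e.g., temperatures
# obtained from a weather API) with the new data by the shared date
# column, as described above. Data and names are illustrative.
purchases = [
    {"date": "2018-07-01", "num_purchases": 3},
    {"date": "2018-07-02", "num_purchases": 5},
]
weather = {"2018-07-01": 88, "2018-07-02": 72}  # avg temp (F) by date

def join_external(rows, temps_by_date):
    joined = []
    for row in rows:
        merged = dict(row)
        # Dates absent from the external data yield a null feature.
        merged["avg_temp_f"] = temps_by_date.get(row["date"])
        joined.append(merged)
    return joined

print(join_external(purchases, weather))
```

The joined rows can then be tagged and fed back to the semantics processor to derive further features, such as purchases made on the warmest days.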
Thus, features can then be identified and tagged to form a subset 406 of the input data based at least in part on the metadata that includes information pertaining to semantic types. The subset of the input data may then be processed to generate or derive additional features accordingly. That is, based on the features with semantic context, one or more additional features for the input data may be generated; the one or more additional features also correspond to a semantic context that is in association with the semantic context of the subset of the input data. Ultimately, new data 430 may then be generated with these additional features, and access to the new data 430 may be provided to one or more users in connection with a computing device. As an example to illustrate the process 400 of FIG. 4, below is an insurance policy table with six columns or fields and the metadata (subsections of each of the six fields) associated with the table:

1. Policy_ID
   a. Semantic Type: ID
2. Quote_timestamp
   a. Semantic Type: Timestamp
3. Policy_purchase_timestamp
   a. Semantic Type: Timestamp
4. Monthly_premium
   a. Semantic Type: Currency
   b. Unit: USD
5. Cancellation_timestamp
   a. Semantic Type: Timestamp
   b. KPI_Derived: True
   c. Allow_nulls: True
6. Is_Active_Account?
   a. Semantic Type: Boolean
   b. KPI: True
   c. Good_Value: "True"

Based on those six columns or fields, the semantics processor 410 may generate the following fields of additional information:

1. Quote_to_policy_purchase_period
   a. Policy_purchase_timestamp - Quote_timestamp
   b. Derived_from: policy_purchase_timestamp, quote_timestamp
   c. Semantic_type: period
2. Quote_to_cancellation_period
   a. Cancellation_timestamp - Quote_timestamp
   b. Null if quote_timestamp is null
   c. Marked as being KPI_Derived since one of its constituents was KPI_Derived
   d. Derived_from: quote_timestamp, cancellation_timestamp
   e. Semantic_type: period
   f. Unit: Seconds (assuming timestamp precision is seconds)
3. Policy_purchase_to_cancellation_period
   a. Cancellation_timestamp - Quote_timestamp
   b. Null if quote_timestamp is null
   c. Marked as being KPI_Derived since one of its constituents was KPI_Derived
   d. Derived_from: cancellation_timestamp, policy_purchase_timestamp
   e. Semantic_type: period
4. Quote_minute_of_day
   a. Just the time component of the quote_timestamp
   b. Semantic_type: Minute_of_day
   c. Derived_from: quote_timestamp
5. Quote_part_of_day
   a. Morning/Afternoon/Evening/Night
   b. Semantic_type: Part of day
   c. Derived_from: quote_timestamp
6. Quote_weekday?
   a. Is the day a weekday?
   b. Semantic_type: Boolean
   c. Derived_from: quote_timestamp
7. Quote_day_of_week_int
   a. 0-6, where each number represents a day of the week
   b. Semantic_type: Ordinal Day of Week
   c. Derived_from: quote_timestamp
8. Quote_month
   a. 0-11, where each integer represents a month of the year
   b. Semantic_type: Ordinal Month
   c. Derived_from: quote_timestamp
9. Quote_year
   a. Ex. 2018
   b. Semantic_type: Year
   c. Derived_from: quote_timestamp
10. Quote_days_from_epoch
   a. Integer count of days until or since an arbitrary date
   b. Semantic_type: Epoch_Date
   c. Derived_from: quote_timestamp
11. Quote_next_holiday
   a. Christmas/Easter/Labor Day, etc.
   b. Semantic_type: US Holiday
   c. Derived_from: quote_timestamp
12. Quote_days_to_next_holiday
   a. Integer count of days until the next holiday
   b. Semantic_type: Period
   c. Derived_from: quote_timestamp
   d. Unit: Day
13. Quote_during_workday?
   a. Is it between 8 am and 5 pm, Mon-Fri?
   b. Semantic_type: Boolean
   c. Derived_from: quote_timestamp
14. <Repeat 11-21 for policy_purchase and cancellation>
15. Cancellation_timestamp_is_null?
   a. Boolean for whether that timestamp is null, since we chose to allow nulls for it
   b. Semantic_type: Boolean
   c. Derived_from: cancellation_timestamp

FIG. 5 illustrates an example process 500 for processing input data to generate additional features for new data. In 502, a system, such as a computing environment described in connection with FIG. 1, may use a semantics processor to process input data to identify a first feature and a second feature in the input data.
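Several of the timestamp-derived fields listed above can be sketched directly with Python's standard datetime library. This is an illustrative sketch, not the disclosed implementation; the part-of-day boundaries are assumptions.

```python
from datetime import datetime

def derive_timestamp_features(quote, purchase):
    """Sketch of a few derived fields for columns whose semantic
    type is Timestamp (see the list above). Boundaries are
    illustrative assumptions."""
    features = {
        # Quote_to_policy_purchase_period (Semantic_type: period, seconds)
        "quote_to_policy_purchase_period": (purchase - quote).total_seconds(),
        "quote_weekday?": quote.weekday() < 5,     # Mon-Fri
        "quote_day_of_week_int": quote.weekday(),  # 0-6
        "quote_month": quote.month - 1,            # 0-11
        "quote_year": quote.year,
        "quote_minute_of_day": quote.hour * 60 + quote.minute,
    }
    # Quote_part_of_day: morning/afternoon/evening/night
    bounds = [(6, "night"), (12, "morning"), (18, "afternoon"), (24, "evening")]
    features["quote_part_of_day"] = next(
        label for bound, label in bounds if quote.hour < bound)
    return features

f = derive_timestamp_features(datetime(2018, 3, 5, 9, 30),
                              datetime(2018, 3, 7, 14, 0))
print(f["quote_weekday?"], f["quote_part_of_day"])
# True morning
```

Each derived field would also be tagged with its own semantic metadata (Semantic_type, Derived_from, Unit) as shown in the list above.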
The semantics processor may be part of the computing system environment, running on a computing device connected to other devices, or the semantics processor may be separate from the computing system environment, running on a virtual machine hosted by a computing resource service provider. In an embodiment, the semantics processor may be the system itself. In an embodiment, the semantics processor may identify, based on a semantic context, a programmatic interface to retrieve additional data associated with the features. The semantics processor or a separate device associated with the computing environment may then retrieve the additional data via the programmatic interface and further cause the system to generate the new data based at least in part on the retrieved additional data. In an embodiment, the first feature and the second feature respectively correspond to a first subset of the input data and a second subset of the input data. Moreover, in an embodiment, the first subset of input data may have a first semantic type and the second subset of input data may have a second semantic type. For example, a semantic type may indicate that a column of information is a "Product Name." Specifically, a column of data or information may contain a list of all the products that a customer purchased and the semantic type for that column may be indicated as the "Product Name." In some instances, the first semantic type is identical to the second semantic type. In other instances, the first semantic type is different from the second semantic type. Moreover, as further illustrated inFIG.5, in504the system may cause a semantics processor to obtain a first semantic metadata for the first feature and a second semantic metadata for the second feature. In an embodiment, the first and the second semantic metadata indicate information about a first semantic context and a second semantic context for the first feature and the second feature, respectively.
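The per-feature semantic metadata obtained in504might be represented as a small record. The sketch below is illustrative and non-limiting; the class and field names are assumptions chosen to mirror the declarations in the insurance policy example, not a schema required by the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SemanticMetadata:
    """Semantic metadata attached to one feature (illustrative only)."""
    semantic_type: str                      # e.g. "Timestamp", "Currency"
    unit: Optional[str] = None              # e.g. "USD", "Seconds"
    allow_nulls: bool = False
    kpi: bool = False
    kpi_derived: bool = False
    derived_from: List[str] = field(default_factory=list)

# Metadata for two of the policy-table fields shown earlier:
monthly_premium = SemanticMetadata(semantic_type="Currency", unit="USD")
cancellation = SemanticMetadata(
    semantic_type="Timestamp", allow_nulls=True, kpi_derived=True)
```

A record of this shape is what the semantics processor would pair with each subset of the input data when producing the tagged set of data described below.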
In some instances, the system uses the semantics processor so that the first feature is derived from the input data as a result of the input data having been processed with other semantic metadata associated with a different feature. In506, the system in connection with a semantics processor may then process the input data with the obtained first semantic metadata and the obtained second semantic metadata to generate a tagged set of data. The tagged set of data may comprise the first subset of the input data, the second subset of the input data, the first semantic metadata, and the second semantic metadata. The tagged set of data may also include an identifier to identify this tagged set of data. A user (e.g., data scientist) in connection with a computing device may direct instructions to perform additional operations on a tagged set of data by identifying which subset of the input data should be run through the semantics processor to derive additional features. In508, the system in connection with a semantics processor may process the tagged set of data to determine, based at least in part on the first semantic context and the second semantic context, a third feature. The third feature may correspond to a third semantic context associated with both the first semantic context and the second semantic context. In510, the system in connection with a semantics processor may generate, from the tagged set of data, new data to correspond to the third feature. In some instances, the new data may be generated by processing the tagged set of data with the third semantic context. In an embodiment, the system may tag the new data with third semantic metadata associated with the third semantic context. The third semantic metadata may comprise a third semantic type corresponding to the third semantic context.
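Steps506-510can be pictured as combining two tagged timestamp features into a third "period" feature whose metadata records its derivation. This is an illustrative, non-limiting sketch; the dictionary layout and function name are assumptions:

```python
from datetime import datetime, timezone

# A tagged set of data: subsets of the input data paired with their
# semantic metadata (layout is an assumption for illustration).
tagged = {
    "quote_timestamp": {
        "semantic_type": "Timestamp",
        "values": [datetime(2018, 3, 1, tzinfo=timezone.utc)],
    },
    "policy_purchase_timestamp": {
        "semantic_type": "Timestamp",
        "values": [datetime(2018, 3, 4, tzinfo=timezone.utc)],
    },
}

def derive_period(tagged, start, end):
    """If both features are Timestamps, derive a third 'Period' feature."""
    a, b = tagged[start], tagged[end]
    assert a["semantic_type"] == b["semantic_type"] == "Timestamp"
    return {
        "semantic_type": "Period",
        "unit": "Seconds",
        "derived_from": [start, end],   # provenance, as in the example fields
        "values": [int((e - s).total_seconds())
                   for s, e in zip(a["values"], b["values"])],
    }

new_feature = derive_period(tagged, "quote_timestamp",
                            "policy_purchase_timestamp")
```

The returned record carries both the new values and the third semantic metadata (type, unit, and constituent features), matching the Quote_to_policy_purchase_period example above.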
In an alternate embodiment, the third semantic metadata may also comprise an indication of mutual information between the third feature and at least one of the first feature and the second feature. Mutual information may be a measure of the mutual dependence between the two features. Additionally, in512, the access to the new data may then be provided. Note that in the example process500ofFIG.5, additional features and metadata associated with any additional features may be identified beyond just the three features indicated in the description pertaining toFIG.5. FIG.6illustrates an alternate example process600for a system utilizing a semantics processor, for instance, to process input data to generate additional features. In some instances, the system may be a third party system running the example process600. In602, the system may process input data to identify a subset of the input data where the subset of the input data corresponds to a feature in the input data. The feature may include a first semantic type or semantic identifier to describe or define the input data. In604, the system may cause the semantics processor to obtain metadata for the feature. The metadata may be associated with a first semantic context for the feature. In an embodiment, the metadata identifies the first semantic context. The metadata may be heuristically determined based at least in part on the identified subset of the input data. In606, the system may cause the semantics processor to process the input data to determine, based at least in part on the first semantic context, a second feature that corresponds to a second semantic context. The second feature, in some instances, may be determined by the system based on information other than the first semantic context. In608, the system may generate, from the input data, new data to correspond to the second feature. The new data may be generated to include an identifier for the second feature based at least in part on other metadata. 
Additionally, in610, the system may provide access to the new data as associated with a corresponding subset of the input data. In an embodiment, the system may provide access to the new data by applying a machine learning algorithm to the new data. In an embodiment, the system may have a different computer system process the new data after providing access to the new data. In an embodiment, after the new data is generated, the system may determine the second feature using an algorithm identified in a policy as applicable to the feature. FIG.7illustrates an example process700for a system in connection with a semantics processor, when performing a computer-implemented method, to generate elements for input data. As shown inFIG.7, in702, the system may process input data to identify a feature in the input data. In an embodiment, the feature may correspond to a subset of the input data and the subset of the input data may include a semantic type. In704, the system may cause the semantics processor to obtain or extract semantic metadata for the feature. The semantic metadata may indicate a first semantic context for the feature. In706, the system may process the input data with the obtained semantic metadata using a semantics processor by applying one or more semantic algorithms to derive features. Semantic algorithms are generally algorithms that know how to derive new data feature(s) from a plurality of features based on the semantic types of the data. In another instance, semantic algorithms may be algorithms that know how to derive new data feature(s) from a given syntax of data, applied automatically to a given feature because of the declaration of the semantic type of that feature. In some instances, semantic algorithms may also be generated by explicit instructions from human input.
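One way to picture such semantic algorithms is as a registry of derivation functions keyed by the semantic type signature they accept, applied automatically whenever a feature (or pair of features) declares those types. The sketch below is illustrative and non-limiting; the registry, decorator, and the fixed reference year are assumptions, not part of the disclosure:

```python
# Registry mapping a tuple of semantic types to derivation functions.
SEMANTIC_ALGORITHMS = {}

def semantic_algorithm(*types):
    """Register a derivation function for the given semantic type signature."""
    def register(fn):
        SEMANTIC_ALGORITHMS.setdefault(types, []).append(fn)
        return fn
    return register

@semantic_algorithm("Timestamp", "Timestamp")
def period_between(earlier, later):
    # Two Timestamp features yield a Period feature.
    return {"semantic_type": "Period",
            "values": [b - a for a, b in zip(earlier, later)]}

@semantic_algorithm("Year")
def years_elapsed(years, now=2018):
    # A single Year feature yields a "years elapsed" Count feature.
    # The reference year 2018 is an assumption for illustration.
    return {"semantic_type": "Count", "values": [now - y for y in years]}

def apply_algorithms(signature, *columns):
    """Run every registered algorithm matching the type signature."""
    return [fn(*columns) for fn in SEMANTIC_ALGORITHMS.get(signature, [])]

derived = apply_algorithms(("Year",), [2010, 2015])
```

Declaring a feature's semantic type is thus enough to trigger the matching derivations, while explicit human-provided algorithms can be registered the same way.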
In708, the system may cause the semantics processor to process input data based on the parameter(s) associated with the input data. That is, the input data may contain parameter(s) specifying an argument that may be passed in with the request to determine how to aggregate the data. For example, a request to aggregate data may be received and the parameter associated with the request may identify that all semantic types pertaining to "policy ID" are to be aggregated. In an embodiment, the parameter also identifies the manner in which to aggregate the subset of the input data pertaining to the data with semantic type "policy ID". Based on this parameter and the subset of the input data, the system may identify a first plurality of elements. In710, the system may cause the semantics processor to aggregate the first plurality of elements by generating, in a manner determined based at least in part on the first semantic context, a second element. In an embodiment, the manner is determined based at least in part on the parameter and/or the first semantic context and a second semantic context corresponding to a different feature in the input data. The second element may be derived from a subset of the first plurality of elements that has a different second semantic context. Additionally, in712, the system may provide the second element as associated with the parameter. Note that the example processes500,600, and700as described in connection withFIGS.5-7respectively may be implemented by a semantics processor in any order, and the steps and examples provided in the description of those steps are not the only order in which the semantics processor may operate to achieve the same result. That is, for example, the semantics processor may, in some instances, receive from another computing device a tagged subset of data ready to derive additional features.
Thus, the steps of502-506, for example, in connection withFIG.5may be skipped or not performed by the semantics processor before additional features in new data are generated as described in steps508-510. FIG.8illustrates an example process800for aggregating or pivoting new data with additional features. In an embodiment, a system may process input data802as described in connection withFIGS.1-7. The input data may include features that are identified804. Moreover, the input data is then processed to derive additional features and generate new data806. After the new data is generated, the system may decide whether to join multiple datasets of input data together and automatically aggregate columns in one or more of those datasets based on rules defined for the semantic types of that data. The datasets may be extracted from one or more files. For example, the new data generated may include information about online purchase orders for a customer and, based on predefined user-defined policies or system configurations, the system may aggregate and join the new data together with weather data from a weather server to generate even more features. The user-defined policies may be policies associated with the system, defined as part of a request for processing input data, administered by a policy management system or service of a computing resource service provider, and/or as part of the semantic processor. The request to join the datasets together may include a parameter that specifies which features (e.g., semantic types) to aggregate and the manner in which to generate additional features. As an example, a feature of a dataset is "car model year." The system identifies that for each "policy id" in the dataset there is an indeterminate number of cars, and thus of car model years associated with them.
Thus, to aggregate, the aggregation may indicate "year" and, in some instances, it would not make sense to add together the integers in the columns pertaining to "years." The system may parse the parameter, user-defined policies, and/or some combination thereof to identify certain rules; if the rules indicate that the semantic type is "years," then the aggregation is to average the car model years. The result of this is an additional feature in new data, with the additional feature pertaining to the "average year of the car model per policy." In an embodiment, a simple case of aggregating the datasets may be that of a car insurance policy included in the new data being joined with data related to the cars on those policies; when the two datasets are aggregated, the sum of all car values and the average of all car values may be generated as new features. Hence, new data containing these new features810may be provided to another computing device for a data scientist to analyze accordingly. As an example of the aggregate process808, a second file containing datasets that pertain to vehicles, with the following columns and declarations, may be added to the new data:

1. vehicle_id
   a. Semantic Type: ID
2. Policy_id
   a. Semantic Type: Foreign Key
   b. Target: Policies
3. Make
   a. Semantic Type: Brand
4. Model
   a. Semantic Type: Product Name
5. Year
   a. Semantic Type: Year
6. Doors
   a. Semantic Type: Count
7. Style
   a. Semantic_Type: Categorical_Small
8. Original Price
   a. Semantic Type: Currency
   b. Unit: USD
9. Current_value
   a. Semantic Type: Currency
   b. Unit: USD

By adding this vehicle dataset, the system may run the semantics processor again and the processor may automatically join the vehicle data to the policy data. In another embodiment, the system may cause the semantics processor to extract metadata from the input data and identify the rules on how to handle the joining of datasets and/or pivoting and aggregation within one or both.
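The rule-driven aggregation described above, where the semantic type (not the base type) decides which aggregations are meaningful, e.g., averaging rather than summing a "Year" column, can be sketched as follows. The rule table and names are illustrative assumptions:

```python
# Which aggregations are meaningful for each semantic type (assumed rules).
AGGREGATION_RULES = {
    "Year": ["avg", "min", "max"],       # never "sum": adding years is meaningless
    "Currency": ["sum", "avg", "min", "max"],
    "Count": ["sum", "avg"],
}

AGGREGATORS = {
    "sum": sum,
    "avg": lambda xs: sum(xs) / len(xs),
    "min": min,
    "max": max,
}

def aggregate_column(name, semantic_type, values):
    """Apply every aggregation allowed for this semantic type."""
    return {f"{name}_{op}": AGGREGATORS[op](values)
            for op in AGGREGATION_RULES.get(semantic_type, [])}

# Two vehicles on one policy:
result = aggregate_column("vehicle_year", "Year", [2012, 2016])
result.update(aggregate_column("vehicle_current_value", "Currency",
                               [8000, 12000]))
```

Because the rules are keyed by semantic type, a "Year" column automatically yields earliest/newest/average features while a "Currency" column additionally yields a sum, matching the aggregated vehicle columns listed below.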
Further, this might be limited to aggregation or pivoting within a single file rather than a pair. The system may make a decision as to joining the datasets by either aggregating the data and/or pivoting as well to make the cardinality of the files match (i.e., aggregate cars onto policies so that the cardinality is based on policies, just as the policies data is). In some instances, if the system pivots, there may be some limitations on the number of pivot columns that may be processed. As an example, the system may choose to pivot and limit the pivot to three vehicles. In that case the resulting data would include everything shown in the above example for the basic policy table plus the following columns:

1. <Repeats 3 times>
   a. <Vehicle>_<#>_make
      i. Semantic Type: Brand
   b. <Vehicle>_<#>_model
      i. Semantic Type: Product Name
   c. <Vehicle>_<#>_year
      i. Semantic Type: Year
   d. <Vehicle>_<#>_doors
      i. Semantic Type: Count
   e. <Vehicle>_<#>_style
      i. Semantic_Type: Categorical_Small
   f. <Vehicle>_<#>_original_price
      i. Semantic Type: Currency
      ii. Unit: USD
   g. <Vehicle>_<#>_current_value
      i. Semantic Type: Currency
      ii. Unit: USD
   h. <Vehicle>_<#>_original_price_less_current_value
      i. Original price minus the current value
      ii. Semantic Type: Currency
      iii. Unit: USD
      iv. Derived_From: Vehicle_<#>_original_price, Vehicle_<#>_current_value
      v. Scale_column: <Vehicle>_<#>_original_price
   i. <Vehicle>_<#>_current_value_to_original_price_ratio
      i. Ratio of the current value over the original price
      ii. Semantic Type: Percentage
      iii. Derived_From: Vehicle_<#>_current_value, Vehicle_<#>_original_price
   j. <Vehicle>_<#>_original_price_per_door
      i. Original price divided by door count
      ii. Semantic Type: Currency
      iii. Unit: Dollars
      iv. Derived_from: Vehicle_<#>_original_price, Vehicle_<#>_doors
   k. <Vehicle>_<#>_current_value_per_door
      i. Current value divided by door count
      ii. Semantic Type: Currency
      iii. Unit: Dollars
      iv. Derived_from: Vehicle_<#>_current_value, Vehicle_<#>_doors
2. vehicle_count
   a. Integer counting how many vehicles each policy had
   b. Semantic Type: Count
3. <for every make represented in the vehicles list>
   a. vehicle_<make>_count
      i. Semantic Type: Count
      ii. Integer count of how many cars of that make the policy has
4. <for every model represented in the vehicles list>
   a. vehicle_<model>_count
      i. Semantic Type: Count
      ii. Integer count of how many cars of that model the policy has
5. vehicle_earliest_year
   a. Year of the oldest vehicle
   b. Semantic Type: Year
6. vehicle_newest_year
   a. Year of the newest vehicle
   b. Semantic Type: Year
7. vehicle_average_year
   a. Average year of the vehicles
   b. Semantic Type: Year
8. Vehicle_door_average
   a. Average number of doors per vehicle
   b. Semantic Type: Count
9. Vehicle_door_sum
   a. Total number of doors amongst all vehicles
   b. Semantic Type: Count
10. <for each style represented in the vehicles list>
   a. vehicle_<style>_count
      i. How many vehicles of this style did the policy have
      ii. Semantic Type: Count
11. Vehicle_original_price_sum
   a. Sum of all the original prices
   b. Semantic Type: Currency
   c. Unit: USD
12. Vehicle_original_price_min
   a. Cheapest original price
   b. Semantic Type: Currency
   c. Unit: USD
13. Vehicle_original_price_max
   a. Most expensive original price
   b. Semantic Type: Currency
   c. Unit: USD
14. Vehicle_original_price_avg
   a. Average original car price
   b. Semantic Type: Currency
   c. Unit: USD
15. Vehicle_current_value_sum
   a. Sum of all the current values
   b. Semantic Type: Currency
   c. Unit: USD
16. Vehicle_current_value_min
   a. Cheapest current value
   b. Semantic Type: Currency
   c. Unit: USD
17. Vehicle_current_value_max
   a. Most expensive current value
   b. Semantic Type: Currency
   c. Unit: USD
18. Vehicle_current_value_avg
   a. Average current value
   b. Semantic Type: Currency
   c. Unit: USD
19. Vehicle_original_price_less_current_value_sum
   a. Semantic Type: Currency
   b. Unit: USD
20. Vehicle_original_price_less_current_value_min
   a. Semantic Type: Currency
   b. Unit: USD
21. Vehicle_original_price_less_current_value_max
   a. Semantic Type: Currency
   b. Unit: USD
22. Vehicle_original_price_less_current_value_avg
   a. Semantic Type: Currency
   b. Unit: USD

The result of a joining operation with pivots greatly expands the dataset with additional information. In the example provided above, the system identifies features and derived features on the vehicles table itself. The system then aggregates and joins the datasets pertaining to the vehicles based on the semantic type declarations themselves. The following are some examples of semantic type declarations indicated above:

1. Type: ID
   a. Base Type: Integer
   b. Not_analytically_useful: true
   c. Allow_nulls: false
   d. Aggregations: [ ]
   e. Processors: None
   f. Compare_type: None
2. Type: Timestamp
   a. Base Type: String
   b. Normalization: ISO8601
   c. Allow_nulls: false
   d. Compare_type: Difference
   e. Processors: [DateFromTimestamp, MinuteOfDayFromTimestamp, SecondsFromEpochFromString]
   f. Aggregations: [Average]
3. Type: SecondsFromEpoch
   a. Base Type: Integer
   b. Compare_type: Difference
   c. Unit: Seconds
4. Type: Date
   a. Base Type: Date
   b. Processors: [EpochDateFromDate]
5. Type: Epoch_Date
   a. Base Type: Integer
   b. Note: Epoch_Date is a custom format representing days since Jan. 1, 1970 (the Unix epoch), which is far more useful than human-readable date strings.
   c. Compare_type: Difference
   d. Unit: Days
   e. Processors: [EpochDateToWeekday, EpochDateToDayOfWeek, EpochDateToMonth, EpochDateToYear, EpochDateToNextHoliday, EpochDateToDaysToNextHoliday]
6. Type: MinuteOfDay
   a. Base Type: Integer
   b. Compare_type: Difference
   c. Unit: Minutes
   d. Processors: [MinuteOfDayToPartOfDay]
7. Type: Boolean
8. Type: Currency
   a. Base Type: Decimal
   b. Requires_unit: True
   c. Aggregations: [Sum, Average, Percentage, Min, Max]

FIG.9illustrates an example dataset900showing features and additional features associated with the dataset (e.g., input data) generated by a system in connection with a semantics processor.
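The pivot described above, flattening up to three vehicles into numbered <Vehicle>_<#>_* columns on the matching policy row, can be sketched as follows. This is an illustrative, non-limiting example; the column and key names are assumptions:

```python
PIVOT_LIMIT = 3  # the assumed cap of three vehicles per policy

def pivot_vehicles(policy_row, vehicle_rows):
    """Flatten up to PIVOT_LIMIT vehicles into numbered columns, plus a count."""
    out = dict(policy_row)
    out["vehicle_count"] = len(vehicle_rows)
    for i, vehicle in enumerate(vehicle_rows[:PIVOT_LIMIT], start=1):
        for col, value in vehicle.items():
            if col == "policy_id":
                continue  # the join key is not repeated per vehicle
            out[f"vehicle_{i}_{col}"] = value
        # Derived column, per the Currency semantic rules above:
        out[f"vehicle_{i}_original_price_less_current_value"] = (
            vehicle["original_price"] - vehicle["current_value"])
    return out

policy = {"policy_id": 7, "monthly_premium": 120.0}
vehicles = [
    {"policy_id": 7, "make": "Acme", "year": 2012,
     "original_price": 20000, "current_value": 8000},
    {"policy_id": 7, "make": "Zenith", "year": 2016,
     "original_price": 30000, "current_value": 12000},
]
wide = pivot_vehicles(policy, vehicles)
```

The pivot makes the cardinality of the joined data match the policies table (one row per policy), at the cost of a bounded number of repeated column groups.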
For example, input data may include columns pertaining to timestamps and zip codes of customers who purchase an automobile insurance policy. By selecting or identifying features of the columns to tag, a subset of the input data may be generated. The subset of the input data may then be sent to a semantics processor, for example, to generate or derive additional features. For example, "Column X: Timestamp" may be identified along with "Column Y: Zip Code" and the two columns may be tagged to create the subset of data. The subset of data may then be processed by the semantics processor to derive additional features, such as the temperature of the weather at a specific time (e.g., timestamp) and place (e.g., zip code). In some instances, the system may identify that two columns with the exact same features cannot be tagged to generate a subset, and the system may return a NULL value or fail to process the two columns for additional features. In other instances, two features of the same type or semantic context do not necessarily result in no derived features; some other feature may still be derived. As another example, the system in connection with the semantics processor may derive new features using only one feature instead of two or more. That is, a semantic type pertaining to "year" can be the sole feature and the derived feature or features may be "number of years elapsed." In another example, the sole feature of a column may be "dates" and the derived feature may be the "month of the year." The feature or features derived from the sole feature may, in some instances, be of different semantic types as well. FIG.10illustrates aspects of an example system1000for implementing aspects in accordance with an embodiment. As will be appreciated, although a web-based system is used for purposes of explanation, different systems may be used, as appropriate, to implement various embodiments.
In an embodiment, the system includes an electronic client device1002, which includes any appropriate device operable to send and/or receive requests, messages, or information over an appropriate network1004and convey information back to a user of the device. Examples of such client devices include personal computers, cellular or other mobile phones, handheld messaging devices, laptop computers, tablet computers, set-top boxes, personal data assistants, embedded computer systems, electronic book readers, and the like. In an embodiment, the network includes any appropriate network, including an intranet, the Internet, a cellular network, a local area network, a satellite network or any other such network and/or combination thereof and components used for such a system depend at least in part upon the type of network and/or system selected. Many protocols and components for communicating via such a network are well known and will not be discussed herein in detail. In an embodiment, communication over the network is enabled by wired and/or wireless connections and combinations thereof. In an embodiment, the network includes the Internet and/or other publicly-addressable communications network, as the system includes a web server1006for receiving requests and serving content in response thereto, although for other networks an alternative device serving a similar purpose could be used as would be apparent to one of ordinary skill in the art. In an embodiment, the illustrative system includes at least one application server1008and a data store1010and it should be understood that there can be several application servers, layers or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. 
Servers, in an embodiment, are implemented as hardware devices, virtual computer systems, programming modules being executed on a computer system, and/or other devices configured with hardware and/or software to receive and respond to communications (e.g., web service application programming interface (API) requests) over a network. As used herein, unless otherwise stated or clear from context, the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, data storage devices and data storage media, in any standard, distributed, virtual or clustered system. Data stores, in an embodiment, communicate with block-level and/or object level interfaces. The application server can include any appropriate hardware, software and firmware for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling some or all of the data access and business logic for an application. In an embodiment, the application server provides access control services in cooperation with the data store and generates content including, but not limited to, text, graphics, audio, video and/or other content that is provided to a user associated with the client device by the web server in the form of HyperText Markup Language (“HTML”), Extensible Markup Language (“XML”), JavaScript, Cascading Style Sheets (“CSS”), JavaScript Object Notation (JSON), and/or another appropriate client-side or other structured language. Content transferred to a client device, in an embodiment, is processed by the client device to provide the content in one or more forms including, but not limited to, forms that are perceptible to the user audibly, visually and/or through other senses. 
The handling of all requests and responses, as well as the delivery of content between the client device1002and the application server1008, in an embodiment, is handled by the web server using PHP: Hypertext Preprocessor (“PHP”), Python, Ruby, Perl, Java, HTML, XML, JSON, and/or another appropriate server-side structured language in this example. In an embodiment, operations described herein as being performed by a single device are performed collectively by multiple devices that form a distributed and/or virtual system. The data store1010, in an embodiment, includes several separate data tables, data documents, dynamic data storage schemes and/or other data storage mechanisms and media for storing data relating to a particular aspect of the present disclosure. In an embodiment, the data store illustrated includes mechanisms for storing production data1012and user information1016, which are used to serve content for the production side. The data store also is shown to include a mechanism for storing source files1014, which is used, in an embodiment, for analysis or other such purposes. In an embodiment, other aspects such as page image information and access rights information (e.g., access control policies or other encodings of permissions) are stored in the data store in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store1010. The data store1010, in an embodiment, is operable, through logic associated therewith, to receive instructions from the application server1008and obtain, update or otherwise process data in response thereto and the application server1008provides static, dynamic, or a combination of static and dynamic data in response to the received instructions. 
In an embodiment, dynamic data, such as data used in web logs (blogs), shopping applications, news services, and other such applications are generated by server-side structured languages as described herein or are provided by a content management system ("CMS") operating on, or under the control of, the application server. In an embodiment, a user, through a device operated by the user, submits a search request for a certain type of item. In this example, the data store accesses the user information to verify the identity of the user, accesses the catalog detail information to obtain information about items of that type, and returns the information to the user, such as in a results listing on a web page that the user views via a browser on the user device1002. Continuing with this example, information for a particular item of interest is viewed in a dedicated page or window of the browser. It should be noted, however, that embodiments of the present disclosure are not necessarily limited to the context of web pages, but are more generally applicable to processing requests in general, where the requests are not necessarily requests for content. Example requests include requests to manage a plurality of source files. In an embodiment, each server typically includes an operating system that provides executable program instructions for the general administration and operation of that server and includes a computer-readable storage medium (e.g., a hard disk, random access memory, read only memory, etc.) storing instructions that, if executed (i.e., as a result of being executed) by a processor of the server, cause or otherwise allow the server to perform its intended functions.
The system1000, in an embodiment, is a distributed and/or virtual computing system utilizing several computer systems and components that are interconnected via communication links (e.g., transmission control protocol (TCP) connections and/or transport layer security (TLS) or other cryptographically protected communication sessions), using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate in a system having fewer or a greater number of components than are illustrated inFIG.10. Thus, the depiction of the system1000inFIG.10should be taken as being illustrative in nature and not limiting to the scope of the disclosure. The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications. In an embodiment, user or client devices include any of a number of computers, such as desktop, laptop or tablet computers running a standard operating system, as well as cellular (mobile), wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols and such a system also includes a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development. In an embodiment, these devices also include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network, and virtual devices such as virtual machines, hypervisors, software containers utilizing operating-system level virtualization and other virtual devices or non-virtual devices supporting virtualization capable of communicating via a network. 
In an embodiment, a system utilizes at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol ("TCP/IP"), User Datagram Protocol ("UDP"), protocols operating in various layers of the Open System Interconnection ("OSI") model, File Transfer Protocol ("FTP"), Universal Plug and Play ("UPnP"), Network File System ("NFS"), Common Internet File System ("CIFS") and other protocols. The network, in an embodiment, is a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, a satellite network, and any combination thereof. In an embodiment, a connection-oriented protocol is used to communicate between network endpoints such that the connection-oriented protocol (sometimes called a connection-based protocol) is capable of transmitting data in an ordered stream. In an embodiment, a connection-oriented protocol can be reliable or unreliable. For example, the TCP protocol is a reliable connection-oriented protocol. Asynchronous Transfer Mode ("ATM") and Frame Relay are unreliable connection-oriented protocols. Connection-oriented protocols are in contrast to packet-oriented protocols such as UDP that transmit packets without a guaranteed ordering. In an embodiment, the system utilizes a web server that runs one or more of a variety of server or mid-tier applications, including Hypertext Transfer Protocol ("HTTP") servers, FTP servers, Common Gateway Interface ("CGI") servers, data servers, Java servers, Apache servers, and business application servers.
In an embodiment, the one or more servers are also capable of executing programs or scripts in response to requests from user devices, such as by executing one or more web applications that are implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Ruby, PHP, Perl, Python or TCL, as well as combinations thereof. In an embodiment, the one or more servers may include, without limitation, those commercially available from Oracle®, Microsoft®, Sybase®, and IBM® as well as open-source servers such as MySQL, Postgres, SQLite, MongoDB, and any other server capable of storing, retrieving, and accessing structured or unstructured data. In an embodiment, the system includes a variety of data stores and other memory and storage media as discussed above which can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In an embodiment, the information resides in a storage-area network (“SAN”) familiar to those skilled in the art and, similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices are stored locally and/or remotely, as appropriate. 
In an embodiment where a system includes computerized devices, each such device can include hardware elements that are electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU” or “processor”), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), at least one output device (e.g., a display device, printer, or speaker), at least one storage device such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc., and various combinations thereof. In an embodiment, such a device also includes a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above where the computer-readable storage media reader is connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. In an embodiment, the system and various devices also typically include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. In an embodiment, customized hardware is used and/or particular elements are implemented in hardware, software (including portable software, such as applets), or both. In an embodiment, connections to other computing devices such as network input/output devices are employed. 
In an embodiment, storage media and computer readable media for containing code, or portions of code, include any appropriate media known or used in the art, including storage media and communication media, such as, but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by the system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention, as defined in the appended claims. 
The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Similarly, use of the term "or" is to be construed to mean "and/or" unless contradicted explicitly or by context. The terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to,") unless otherwise noted. The term "connected," when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. The use of the term "set" (e.g., "a set of items") or "subset," unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, the term "subset" of a corresponding set does not necessarily denote a proper subset of the corresponding set, but the subset and the corresponding set may be equal.
The use of the phrase "based on," unless otherwise explicitly stated or clear from context, means "based at least in part on" and is not limited to "based solely on." Conjunctive language, such as phrases of the form "at least one of A, B, and C," or "at least one of A, B and C," (i.e., the same phrase with or without the Oxford comma) unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood within the context as used in general to present that an item, term, etc., may be either A or B or C, any nonempty subset of the set of A and B and C, or any set not contradicted by context or otherwise excluded that contains at least one A, at least one B, or at least one C. For instance, in the illustrative example of a set having three members, the conjunctive phrases "at least one of A, B, and C" and "at least one of A, B and C" refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}, and, if not contradicted explicitly or by context, any set having {A}, {B}, and/or {C} as a subset (e.g., sets with multiple "A"). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B, and at least one of C each to be present. Similarly, phrases such as "at least one of A, B, or C" and "at least one of A, B or C" have the same meaning as "at least one of A, B, and C" and "at least one of A, B and C," i.e., they refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}, unless differing meaning is explicitly stated or clear from context. In addition, unless otherwise noted or contradicted by context, the term "plurality" indicates a state of being plural (e.g., "a plurality of items" indicates multiple items). The number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context.
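The enumeration above can be reproduced mechanically. The following sketch (illustrative only, not part of the claims; the function name is made up) lists every nonempty subset that the phrase "at least one of A, B, and C" may refer to:

```python
from itertools import combinations

def nonempty_subsets(items):
    """All nonempty subsets of `items` -- the sets {A}, {B}, {C}, {A, B},
    {A, C}, {B, C}, {A, B, C} when items = (A, B, C)."""
    return [set(combo)
            for r in range(1, len(items) + 1)
            for combo in combinations(items, r)]
```

For a three-member set this yields exactly the seven sets enumerated in the text.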
Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In an embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. In an embodiment, the code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. In an embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In an embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause the computer system to perform operations described herein. The set of non-transitory computer-readable storage media, in an embodiment, comprises multiple non-transitory computer-readable storage media and one or more of individual non-transitory storage media of the multiple non-transitory computer-readable storage media lack all of the code while the multiple non-transitory computer-readable storage media collectively store all of the code. 
In an embodiment, the executable instructions are executed such that different instructions are executed by different processors; for example, a non-transitory computer-readable storage medium stores instructions and a main CPU executes some of the instructions while a graphics processing unit executes other instructions. In an embodiment, different components of a computer system have separate processors, and different processors execute different subsets of the instructions. Accordingly, in an embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein, and such computer systems are configured with applicable hardware and/or software that enable the performance of the operations. Further, a computer system that implements an embodiment of the present disclosure is, in one embodiment, a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that the distributed computer system performs the operations described herein and such that a single device does not perform all operations. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention. Embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for embodiments of the present disclosure to be practiced otherwise than as specifically described herein.
Accordingly, the scope of the present disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the scope of the present disclosure unless otherwise indicated herein or otherwise clearly contradicted by context. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
DETAILED DESCRIPTION

Example System Architecture

FIG. 1 shows an example of a multi-machine distributed learning system 100 for solving a machine learning problem. The system 100 includes a taskmaster 106 operating on a master computer 102, distributed workers 104 including workers 104a, 104b, . . . , 104p operating on respective distributed slave computers 108 including slave computers 108a, 108b, . . . , 108p, and a submitter 104s operating on a slave computer 108s. Each of the taskmaster 106, the workers 104, and the submitter 104s can be in the form of computer programs. The master computer 102 and the slave computers 108 can be connected in a network, and each can be a classical computer or can include a quantum processor that carries out instructions of the computer programs. In some implementations, the taskmaster 106 responds to queries from the workers 104 and the submitter 104s and receives information, stores information received, and updates stored information based on information received from the workers and the submitter, on the master computer 102. The information includes information about work tasks 110 to be carried out by the workers, result tasks 112 that contain results of the work tasks 110 carried out by the workers, and a summary task 114. The taskmaster 106 does not carry out actual computations for machine learning. In some implementations, the workers 104 each work independently of each other on a sub-problem of the machine learning problem. The sub-problem can be defined by a task in the work tasks 110 stored on the master computer 102. The workers 104 communicate with the taskmaster 106, which coordinates the work on the different tasks by the different workers to collectively solve the machine learning problem. The submitter 104s does not work on any of the tasks for the machine learning problem. Instead, the submitter works on updating and maintaining the summary task 114.
The workers 104 and the submitter 104s have different authorizations with regard to access and use of the information stored on the master computer 102. The workers 104 and the submitter 104s can access authorized information stored on the master computer 102 using one or more of the following procedures:

Query: obtain information, e.g., of a task;

QueryandOwn: obtain and acquire ownership of information, e.g., a task, and prevent other workers/submitter from acquiring the information for a predetermined amount of time;

Update: update information, e.g., of a task.

An example of Query is a read action in which a worker 104 or the submitter 104s reads information stored by the taskmaster 106 on the master computer 102. A Query of the same piece of information can be performed simultaneously by one or more workers and the submitter without causing any conflicts at the piece of information. The piece of information being queried does not have to be locked for the query. An example of QueryandOwn is a use action in which a worker 104 or the submitter 104s requests to use certain information, and the use may result in updating the information. For example, the worker may be carrying out an iteration of computation using a current set of parameters and producing an updated set of parameters. The use of the information precludes other workers from using the same piece of information, to allow the information to be properly updated. Typically, upon the worker's request, the taskmaster 106 sends a copy of the information to the worker and at the same time locks the information at the master computer 102 for the predetermined amount of time. The worker obtaining the information has to complete use of the information and update the information at the master computer 102, if necessary, within the predetermined amount of time, so that when the other workers are allowed to access the information, the information has been updated.
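The three access procedures can be sketched in a few lines of code. The following is a hypothetical, single-process illustration only (class and method names are made up; the patent's taskmaster is a networked service): Query reads without locking, QueryandOwn hands out a value under a time-limited lease, and Update writes back and releases the lease.

```python
import time

class Taskmaster:
    """Minimal sketch of the Query / QueryandOwn / Update procedures.
    Illustrative only; not the patent's actual implementation."""

    def __init__(self):
        self.tasks = {}   # task_id -> stored value
        self.leases = {}  # task_id -> lease expiry time while owned

    def query(self, task_id):
        # Read-only access; never locks the task, so many readers may
        # query the same piece of information simultaneously.
        return self.tasks[task_id]

    def query_and_own(self, task_id, lease_seconds):
        # Hand out the task and lock it for a predetermined amount of
        # time; other workers cannot acquire it until the lease expires.
        now = time.monotonic()
        if self.leases.get(task_id, 0.0) > now:
            return None  # currently owned by another worker
        self.leases[task_id] = now + lease_seconds
        return self.tasks[task_id]

    def update(self, task_id, value):
        # Write back updated information and release any lease.
        self.tasks[task_id] = value
        self.leases.pop(task_id, None)
```

For example, a second QueryandOwn on an owned task returns nothing until the owner performs an Update (or the lease expires), which is what prevents two workers from updating the same task at once.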
In another example, the submitter 104s may need to update the information of a result task in the summary task 114, so the workers 104 are precluded from accessing the summary task 114 until the update is completed. An example of Update is a write action in which a worker or the submitter 104s writes, or requests the taskmaster 106 to write, updated information to replace the corresponding stored information in the master computer 102. Although the three access procedures, Query, QueryandOwn, and Update, are available to all workers 104 and the submitter 104s, the workers and the submitter 104s can only perform authorized procedures on authorized information stored or to be stored on the master computer 102. The work tasks 110 are n work tasks 110a, 110b, . . . , 110n. In some implementations, the number n of work tasks is determined based on the machine learning problem to be solved by the system 100 and the number of workers 104. The machine learning problem can be divided into the n work tasks to be carried out by the workers 104. Each work task contains a subset of variables of the machine learning problem or the statistics of a subset of random variables. In some implementations, the number n is the same as the number of workers 104. However, n does not have to be equal to the number of workers 104. Each worker can use any of the three access procedures to access any of the available work tasks that are not currently owned by another worker. The submitter 104s does not access the work tasks 110. The result tasks 112 contain p result tasks 112a, 112b, . . . , 112p, each owned by a corresponding worker 104a, 104b, . . . , 104p. Each result task can only be updated by its owner or by the taskmaster 106 upon the request of its owner. Other workers and the submitter 104s who are not the owners of a result task cannot update the result task, but can only query, e.g., read, the result task at the master computer 102. The summary task 114 contains a summary of the tasks carried out by the workers 104.
The summary task 114 is exclusively owned by the submitter 104s, who is allowed to update, or request the taskmaster 106 to update, the information of the summary task 114. For example, the submitter 104s may query the result tasks 112 to obtain information for updating the summary task 114. The workers 104 cannot update the summary task 114, but can only query, e.g., read, the summary task 114. In solving a machine learning problem, the workers 104 and the submitter 104s can work together without using mutex locks. The configuration of the system 100 ensures that at any given time, the same piece of information stored or to be stored in the master computer 102 is not updated or written simultaneously by more than one of the workers 104 and the submitter 104s. Furthermore, because the information about the machine learning problem is stored and constantly updated by the taskmaster in the master computer, any failure of workers or the submitter does not have any major impact on the process of solving the problem. As a result, the system 100 can have high error tolerance.

Example Implementations

Many algorithms in machine learning can be implemented using the system 100. A few examples are described below.

1. Matrix Completion

In a matrix completion problem, an incomplete data matrix X having N×D dimensions is decomposed into the product of two smaller matrices, A having N×K dimensions and B having K×D dimensions, where K is called the base number and is much smaller than both N and D:

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1D} \\ x_{21} & x_{22} & \cdots & x_{2D} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{ND} \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix}, \qquad A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1K} \\ a_{21} & a_{22} & \cdots & a_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NK} \end{pmatrix} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{pmatrix},$$

$$B = \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1D} \\ b_{21} & b_{22} & \cdots & b_{2D} \\ \vdots & \vdots & \ddots & \vdots \\ b_{K1} & b_{K2} & \cdots & b_{KD} \end{pmatrix} = \begin{pmatrix} b_1 & b_2 & \cdots & b_D \end{pmatrix},$$

where $x_l = (x_{l1}\; x_{l2}\; \cdots\; x_{lD})$, $a_j = (a_{j1}\; a_{j2}\; \cdots\; a_{jK})$, $b_i = (b_{1i}\; b_{2i}\; \cdots\; b_{Ki})^{\mathsf T}$, and $l = 1, \ldots, N$; $j = 1, \ldots, N$; and $i = 1, \ldots, D$.

The incomplete data matrix X has at least some data elements $x_{ij}$ unknown. Matrices A and B are to be determined so that the residual of $\lVert X - AB \rVert$ is smaller than a predetermined value.
Solutions to a matrix completion problem, i.e., finding the matrices A and B with all matrix elements $a_{ij}$ and $b_{ij}$ known, can have many uses, including in movie/music recommendation, player matching, advertisement matching, and so on. For example, in movie recommendation, each row of the matrix X can represent a user and each column of the matrix X can represent a movie. Each matrix element $x_{ij}$ can represent the ith user's rating of the jth movie. At least some of the N users may have rated fewer than all of the D movies. However, the ratings of those unrated movies by these users can be predicted using a machine learning process based on the known ratings of these users and the other users. The matrix X can be completed using the system 100 of FIG. 1 by computing a minimum of an objective function:

$$\min_{A,B} F(A,B) = \sum_{i,j \in I} (x_{ij} - A_i B_j)^2 + \sum_i \lambda \lVert A_i \rVert^2 + \sum_j \lambda \lVert B_j \rVert^2,$$

where $\lambda > 0$ is a scalar, and $A_i$, $B_j$ are the sub-matrices $a_i$, $b_j$.

FIG. 2 shows how a system 200 that has the same hardware and software architectures as the system 100 of FIG. 1 is used in solving the matrix completion problem described above. Typically, the matrix X is very large. In the example of movie rating, the matrix X can have millions of rows. The matrix X is partitioned row-wise into p sub-matrices Xm, m=1, . . . , p. Each sub-matrix Xm can contain one or more row sub-matrices xi. Different sub-matrices Xm can have different numbers of rows. The division of the matrix X can be done by a computer different from all computers in the system 200 or by the master computer 102. Sometimes a user can make the division. The division can be made based on various factors, e.g., load balancing of the different slave computers, or the number of unknown matrix elements in each sub-matrix. Each sub-matrix Xm is stored by a worker 104m on its corresponding slave computer 108m. Corresponding to the division of the matrix X, the matrix A is divided row-wise into sub-matrices Am, where m=1, . . . , p.
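As a concrete check of the objective function, here is a minimal pure-Python sketch (the function name, data layout, and example values are illustrative, not the patent's implementation; a real system would evaluate this across the distributed workers):

```python
def objective(X, A, B, lam, observed):
    """F(A, B): squared error over the observed index set I plus
    lam * sum_i ||A_i||^2 + lam * sum_j ||B_j||^2.
    A is N x K (rows A_i); B is K x D (columns B_j)."""
    K = len(B)
    total = 0.0
    for i, j in observed:
        pred = sum(A[i][k] * B[k][j] for k in range(K))  # (A B)_{ij}
        total += (X[i][j] - pred) ** 2
    total += lam * sum(a * a for row in A for a in row)   # sum_i ||A_i||^2
    total += lam * sum(b * b for row in B for b in row)   # sum_j ||B_j||^2
    return total
```

For example, with X = [[2.0]], A = [[1.0]], B = [[2.0]], λ = 0.5, and I = {(0, 0)}, the squared error vanishes and the value is 0 + 0.5·1 + 0.5·4 = 2.5.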
Each sub-matrix Am has the same number of rows as its corresponding sub-matrix Xm and can be initialized to have random values for its matrix elements. The initialized values for each sub-matrix Am are stored by a worker 104m on its slave computer 108m with the corresponding sub-matrix Xm. In computing a minimum of the objective function, the values of the matrix elements for the sub-matrix Am are updated in iterations based on the computations performed by the workers 104; and the worker 104m stores the updated values on the slave computer 108m. The matrix B is stored in work tasks column-wise such that each column sub-matrix bi is stored as one work task 110i. Like the sub-matrix Am, each sub-matrix bi can be initialized to have random values for its matrix elements. In computing a minimum of the objective function, the values of the matrix elements for the sub-matrix bi are updated in iterations based on the computations performed by the workers 104; and the taskmaster 106 stores the updated values on the master computer 102. By dividing the matrices A, B, and X, computing a minimum of the objective function F(A, B) is decomposed into sub-problems Fm(A, B), each only depending on sub-matrices Am and bi, where i=1, . . . , D. Each sub-problem completes a sub-matrix Xm. Each worker 104m uses its slave computer 108m to work on a sub-problem Fm(A, B) and determine an optimized sub-matrix Xm. Different workers 104 work on different sub-problems. However, the optimization of a sub-matrix Xm by the worker 104m depends on the optimization of the sub-matrices bi, and therefore on the other sub-problems being solved by the other workers. To optimize a sub-matrix Xm, a worker 104m has to use the entire matrix B based on:

$$X_m = \sum_{i=1}^{D} A_m b_i.$$

However, in carrying out the matrix completion task, instead of using the entire matrix B, each worker can perform a QueryandOwn to use a mini-batch of the tasks {bi}, where i is a sub-group of 1, . . . , D.
The size of the mini-batch can be predetermined or can be dynamically determined, e.g., based on load balancing and/or the progress of the different completion processes at the different slave computers. As a result, different workers can work on a part of their corresponding sub-matrix Xm simultaneously. Over multiple iterations and multiple QueryandOwn procedures, a worker can own the entire matrix B and work on the entire sub-matrix Xm.

FIG. 3 shows an example process 300 of solving a sub-problem Fm(A, B) by a worker 104m. The worker 104m performs a QueryandOwn 302 to use a mini-batch of tasks $\{b_i^{t-1}\}$, where i is a sub-group of 1, . . . , D and t is the current number of iterations of computation the worker 104m is to perform. Upon receiving the requested mini-batch from the taskmaster 106, the worker 104m computes 304 $A_m^t$ and $\{b_i^t\}$ and performs an Update 304 on the tasks at the master computer 102. The worker 104m also computes 306 the residual:

$$\sum_i (X_{mi} - A_m^t b_i^t)^2$$

and sends 306 the computed residual to the taskmaster to be stored at its corresponding result task 112m. Effectively, the worker 104m performs an Update to store the residual at the result task 112m. The worker 104m then performs a Query 308 to read the summary task 114 and determines 310 whether the value in the summary task 114 is smaller than a predetermined value S0. The summary task contains a summary of all residuals from the result tasks 112. The submitter 104s regularly performs a Query to read each of the result tasks 112 and performs an Update on the summary task 114. If the value in the summary task 114 is smaller than the predetermined value S0, the optimization of the sub-matrix Xm ends 312. The matrix X can be completed based on the optimized matrices A and B. If the value in the summary task 114 is greater than the predetermined value S0, then the worker 104m enters the next iteration and increments t by 1.
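The per-iteration control flow of process 300 can be sketched as follows. This is a single-process illustration only: the taskmaster's tasks are modeled as a plain dict, `step` stands in for the actual optimization update, the submitter's summary update is inlined into the loop, and all names are made up.

```python
def matrix_completion_worker(tm, x_m, a_m, s0, max_iters, step):
    """Control-flow sketch of process 300 for one worker.
    tm: dict standing in for the taskmaster's tasks;
    step(x_m, a_m, b): one optimization iteration, returning
    (updated a_m, updated b, residual)."""
    for t in range(1, max_iters + 1):
        b = tm["work"]                        # QueryandOwn a mini-batch {b_i^{t-1}}
        a_m, b, residual = step(x_m, a_m, b)  # compute A_m^t and {b_i^t}
        tm["work"] = b                        # Update the work tasks
        tm["result"] = residual               # Update this worker's result task
        tm["summary"] = tm["result"]          # submitter's summary, inlined here
        if tm["summary"] < s0:                # Query the summary task
            break                             # optimization of X_m ends
    return a_m
```

With a toy rank-1 `step` that least-squares fits a_m given b, the loop converges in one iteration and the summary drops below the threshold, ending the process.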
The computation of $A_m^t$ and $\{b_i^t\}$ in each iteration can be based on stochastic gradient descent (SGD):

$$A_i^t = A_i^{t-1} + \gamma_t (x_{ij} - A_i^{t-1} B_j^{t-1}) B_j^{t-1},$$
$$B_j^t = B_j^{t-1} + \gamma_t (x_{ij} - A_i^{t-1} B_j^{t-1}) A_i^{t-1},$$

where $B_j^t = \{b_i^t\}$ and $\gamma_t$ is a sequence of step sizes. Alternatively, each worker can solve a harder optimization problem than SGD based on the following equation:

$$(A_i^t, B_j^t) = \operatorname*{arg\,min}_{A_i, B_j} \left\{ (x_{ij} - A_i B_j)^2 + \lambda_t \lVert A_i - A_i^{t-1} \rVert^2 + \lambda_t \lVert B_j - B_j^{t-1} \rVert^2 \right\},$$

where $\lambda_t$ is a sequence of step sizes. This alternative optimization problem is non-convex because it contains 4th-order polynomials. To solve the problem, coordinate descent or global optimization methods including quantum annealing can be used. For example, $A_i^{t-1}$, $B_j^{t-1}$, and $\lambda_t$ can be input into a quantum processor, which outputs $A_i^t$ and $B_j^t$.

2. Latent Dirichlet Allocation

Latent Dirichlet Allocation (LDA) is a Bayesian learning method, and an example of the use of LDA is in text clustering. Text clustering can include extracting topics of different documents, automatically organizing documents, e.g., based on topics, and fast retrieving or filtering of information contained in the documents. To perform text clustering on a group of documents, each document is represented by words of a pre-determined vocabulary of words while the order of the words in the document is ignored. For example, a document containing the sentence: "The apple company has an apple logo." is represented by "the: 1, apple: 2, company: 1, has: 1, an: 1, logo: 1". Each number after a word represents the total number of times the word appears in the document. Sometimes the same word appearing multiple times can have different meanings. For example, the word "apple" in the example document above appears two times and has two different meanings. For a total of N documents and V words for representing all the documents, the documents can be represented by the following matrix:

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1V} \\ x_{21} & x_{22} & \cdots & x_{2V} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{NV} \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix},$$

where $x_l = (x_{l1}\; x_{l2}\; \cdots\; x_{lV})$ and $l = 1, \ldots, N$.
Each matrix element $x_{ij}$ represents the number of times a word j appears in a document i. In LDA, it is assumed that each word j in the document i has a topic $z_{ij} \in \{1, \ldots, K\}$. A topic matrix Z for all N documents can be written as:

$$Z = \begin{pmatrix} z_{11} & z_{12} & \cdots & z_{1K} \\ z_{21} & z_{22} & \cdots & z_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ z_{N1} & z_{N2} & \cdots & z_{NK} \end{pmatrix} = \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_N \end{pmatrix}.$$

Here, the topic is analogous to a base in the matrix completion described above. The same word appearing multiple times and having multiple meanings has multiple topics. Using Gibbs sampling, which is a Markov chain Monte Carlo (MCMC) method, the probability of the word j in the document i having the topic k is sampled based on the current assignment of the topics of all other words:

$$P(z_{ij} = k) \propto (n_{ik} + \alpha) \cdot \frac{n_{kj} + \beta}{n_k + V\beta},$$

where $n_{ik}$ is the number of words in document i that have the topic k; $n_{kj}$ is the number of words j that have the topic k; and $n_k$ is the total number of words that have the topic k. Parameters α and β are constants. For k=1, . . . , K total topics, the following vectors can be used:

$n_{kj} = (n_{1j}\; n_{2j}\; \cdots\; n_{Kj})$, for each word j;
$n_{ik} = (n_{i1}\; n_{i2}\; \cdots\; n_{iK})$, for each document i;
$n_k = (n_1\; n_2\; \cdots\; n_K)$, for all words and all documents.

FIG. 4 shows how a system 400 that has the same hardware and software architectures as the system 100 of FIG. 1 is used in solving the text clustering problem described above. The total number of documents for use in the text clustering is divided into p sub-groups, each to be assigned to a worker 104 to work on a slave computer. Corresponding to the division of the matrix X, the topic matrix is also divided into p sub-matrices $Z_1, \ldots, Z_p$. Each worker 104m stores on its slave computer 108m a sub-matrix Xm, which corresponds to the assigned document group {xl}, and the corresponding topic sub-matrix Zm, which corresponds to the topic group {zl}. The worker 104m also stores and updates all $n_{ik}$ for the assigned document group {xl}. Furthermore, each word j and its topic assignment statistics $n_{kj}$ are stored as a work task 110j by the taskmaster 106.
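The Gibbs-sampling formula above reduces to computing one unnormalized weight per topic. A pure-Python sketch (function name and example counts are illustrative only):

```python
def topic_weights(n_ik, n_kj, n_k, alpha, beta, V):
    """Unnormalized Gibbs-sampling weights, one per topic k:
    P(z_ij = k) proportional to (n_ik + alpha) * (n_kj + beta) / (n_k + V*beta).
    The count vectors are lists indexed by topic k = 0..K-1."""
    return [(n_ik[k] + alpha) * (n_kj[k] + beta) / (n_k[k] + V * beta)
            for k in range(len(n_k))]
```

Normalizing these weights gives the sampling distribution over topics for word j in document i; a topic with many occurrences of word j (large n_kj) receives proportionally more weight.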
To solve the text clustering problem, the system 400 determines $n_{kj}$ and $n_{ik}$ using iterative computations performed by each worker 104. Similar to the matrix completion problem, in each iteration, each worker obtains a mini-batch of tasks from the master computer 102.

FIG. 5 shows an example process 500 of text clustering a sub-group of documents represented by the matrix Xm by a worker 104m. The worker 104m performs a QueryandOwn 502 to use a mini-batch of tasks $\{n_{kj}^{t-1}\}$, where j is a sub-group of 1, . . . , V and t is the current number of iterations of computation the worker 104m is to perform. The worker 104m also performs a Query 504 to read the summary task 114 to obtain $n_k$. Upon receiving the requested mini-batch from the taskmaster 106, the worker 104m updates 506 $z_{ij}$ based on the calculation of $P(z_{ij} = k)$. The worker 104m then calculates 508 $n_{kj}^t$ and $n_{ik}^t$, and sends 510 $n_{kj}^t$ to the taskmaster 106 to update the work task 110j. Furthermore, the worker 104m sends 514 $n_{ik}^t$ to its corresponding result task 112m. The submitter 104s regularly performs a Query to read each of the result tasks 112 and performs an Update on the summary task 114. If the worker 104m determines 516 that the update has been completed for all V words, then the iteration ends 512. Otherwise, the worker 104m enters the next iteration t+1.

3. Classification

The distributed learning system 100 can also be applied in classification. For simplicity of the description, binary classification is described. Other classification problems, e.g., multi-class classification problems, can be similarly solved. As an example, a binary classification problem has a loss function L, data

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1D} \\ x_{21} & x_{22} & \cdots & x_{2D} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{ND} \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix},$$

labels $y = \{y_1, \ldots, y_n\} \in \{+1, -1\}^n$, and a parameter $w = (w_1\; w_2\; \cdots\; w_D)^{\mathsf T}$. The objective function to minimize is:

$$R(w) = \sum_{i=1}^{n} L(\langle y_i x_i, w \rangle) = \sum_{i=1}^{n} L\left(\sum_{j=1}^{D} y_i x_{ij} w_j\right).$$

FIG. 6 shows how a system 600 that has the same hardware and software architectures as the system 100 of FIG. 1 is used in solving the binary classification problem described above.
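The objective R(w) is a plain sum of per-example losses over signed margins, which can be sketched directly. The loss L is left generic here, as in the text; the logistic loss in the example is only one illustrative choice, and the function names are made up:

```python
import math

def empirical_risk(X, y, w, loss):
    """R(w) = sum_i L(<y_i x_i, w>) = sum_i L(sum_j y_i x_ij w_j).
    X: rows x_i; y: labels in {+1, -1}; w: parameter vector;
    loss: the (unspecified) loss function L applied to each margin."""
    total = 0.0
    for x_i, y_i in zip(X, y):
        margin = y_i * sum(x_ij * w_j for x_ij, w_j in zip(x_i, w))
        total += loss(margin)
    return total

# Illustrative choice of L: the logistic loss.
logistic = lambda m: math.log(1.0 + math.exp(-m))
```

With w = 0 every margin is zero, so each example contributes L(0) (log 2 for the logistic loss); a w that increases the margins of correctly labeled examples decreases R(w).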
The parameter element $w_i$ of the vector $w$ is stored as a work task 110i in the master computer 102. The data matrix $X$ is partitioned into sub-matrices $X_m$, each corresponding to a group of row sub-matrices $\{x_i\}$ and stored by a corresponding worker 104m on its slave computer 108m. A label $y_m$ corresponding to the sub-matrix $X_m$ is also stored on the same slave computer 108m.

FIG. 7 shows an example process 700 of solving a sub-problem of the binary classification by a worker 104m. The worker 104m performs a QueryandOwn 702 to use a mini-batch of tasks $\{w_{S_p}^{t-1}\}$, where $S_p$ is a sub-group of $1, \ldots, D$ and $t$ is the current number of iterations of computation the worker 104m is to perform. Upon receiving the requested mini-batch from the master computer 102, the worker 104m computes, using $X_m$, the updated parameters $\{w_{S_p}^t\}$ and performs an Update 704 on the tasks at the master computer 102. The worker 104m also computes 706 the error $E(X_m, y_m)$ and sends the computed error to the taskmaster 106 to be stored at its corresponding result task 112m. Effectively, the worker 104m performs an Update on the stored residual at the result task 112m. The worker 104m then performs a Query 708 to read the summary task 114 and determines 710 whether the value in the summary task 114 is smaller than a predetermined value $E_0$. The summary task contains a summary of all errors from the result tasks 112. The submitter 104s regularly performs a Query to read each of the result tasks 112 and performs an Update on the summary task 114. If the value in the summary task 114 is smaller than the predetermined value $E_0$, the optimization of the sub-matrix $X_m$ ends 712. Otherwise, the worker 104m enters the next iteration t+1. In each iteration, the update of the sub-matrix $X_m$ and the parameters $\{w_{S_p}^t\}$ can be performed using SGD similarly to the update of the described process for matrix completion.
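One iteration of the SGD-style update on an owned coordinate block might look as follows. This is a sketch only: the logistic loss and the learning rate are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def sgd_coordinate_step(X_m, y_m, w, S_p, lr=0.1):
    """One worker iteration: update only the owned coordinates S_p of w,
    using the local data block (X_m, y_m) and logistic loss.

    For L(m) = log(1 + exp(-m)) with margin m_i = y_i <x_i, w>, the
    gradient w.r.t. w_j is sum_i (-y_i / (1 + exp(m_i))) * x_ij.
    """
    margins = y_m * (X_m @ w)
    coef = -y_m / (1.0 + np.exp(margins))   # per-example dL/dm * y_i
    grad = X_m[:, S_p].T @ coef             # gradient over owned coords only
    w_new = w.copy()
    w_new[S_p] = w_new[S_p] - lr * grad     # coordinates outside S_p untouched
    return w_new
```

Because only the owned block $S_p$ is modified, the worker can write just those coordinates back to the master's work tasks.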
Alternatively, instead of computing the gradient with respect to $w_{S_p}$, the following problem can be solved:

$$w_{S_p}^t = \operatorname*{argmin}_{w_{S_p}} \left\{ \sum_{i \in I_p} L\left( \sum_{j \in S_p} y_i x_{ij} w_j + \sum_{j \notin S_p} y_i x_{ij} w_j^{t-1} \right) + \lambda_t \left\| w_{S_p} - w_{S_p}^{t-1} \right\|^2 \right\}.$$

In some implementations, the loss function $L$ is a non-convex loss function, and the above problem is a non-convex sub-problem. Compared to the original size (N×D) of the problem, this sub-problem is much smaller ($|I_p| \times |S_p|$). In some implementations, global optimization methods including quantum annealing can be used to solve the sub-problem. For example, $w_{S_p}^{t-1}$, $x_{ij}$, $y_i$, and $\lambda_t$ can be input into a quantum processor, which outputs $w_{S_p}^t$.

4. Deep Learning

The learning system 100 can also be used in deep learning. Datasets can be partitioned for the p different slave computers. In each iteration, each computer can execute a QueryandOwn to use some parameters based on the data it has and the past parameters it had, similarly to the classification problem described above.

Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable digital processor, a digital computer, or multiple digital processors or computers. The apparatus can also be or further include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A computer program, which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. 
A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network. The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). For a system of one or more computers to be "configured to" perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions. Computers suitable for the execution of a computer program can be based, by way of example, on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices.
Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Control of the various systems described in this specification, or portions of them, can be implemented in a computer program product that includes instructions that are stored on one or more non-transitory machine-readable storage media, and that are executable on one or more processing devices. The systems described in this specification, or portions of them, can be implemented as an apparatus, method, or electronic system that may include one or more processing devices and memory to store executable instructions to perform the operations described in this specification. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. 
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
11861467

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the drawings. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation.

DETAILED DESCRIPTION

Aspects of the present disclosure provide apparatuses, methods, processing systems, and computer readable mediums for optimizing the performance of machine learning models, such as neural networks, in hardware. With reference now to the figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

Example System-on-a-Chip

FIG. 1 illustrates an example implementation of a system-on-a-chip (SOC) 100, which may include a central processing unit (CPU) 102 or a multi-core CPU configured to adaptively quantize weights and parameters for machine learning models. The SOC may further be configured to activate and deactivate performance of inferences on input data using a high efficiency quantized model, according to embodiments described herein. Quantized weights and activation parameters associated with each of a plurality of high efficiency quantized models may be stored in a memory block associated with a neural processing unit (NPU) 108, in a memory block associated with a CPU 102, in a memory block associated with a graphics processing unit (GPU) 104, in a memory block associated with a digital signal processor (DSP) 106, in a memory block 118, or may be distributed across multiple blocks. Instructions executed at the CPU 102 may be loaded from a program memory associated with the CPU 102 or may be loaded from a memory block 118.
The SOC100may also include additional processing blocks tailored to specific functions, such as a GPU104, a DSP106, a connectivity block110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor112that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU102, DSP106, and/or GPU104. The SOC100may also include a sensor processor114, image signal processors (ISPs)116, and/or navigation module120, which may include a global positioning system. The SOC100may be based on an ARM instruction set. In an aspect of the present disclosure, the instructions loaded into the CPU102may comprise code to perform inferences using a machine learning model and concurrently optimize operational parameters (e.g., weights, biases, activation parameters, etc.) for the machine learning model. SOC100and/or components thereof may be configured to perform the methods described herein. Deep Neural Networks and Deep Learning Deep learning architectures may perform complex tasks, such as object recognition, by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data. In this way, deep learning addresses a major bottleneck of traditional machine learning. Prior to the advent of deep learning, a machine learning approach for a task may have relied heavily on human engineered features, perhaps in combination with a shallow classifier. A shallow classifier may be a two-class linear classifier, for example, in which a weighted sum of input values (e.g., input vector components) may be compared with a threshold to predict to which class the input belongs. Human engineered features may be templates or kernels tailored to a specific problem domain by engineers with domain expertise. 
Deep learning architectures, in contrast, may learn to represent features that are similar to what a human engineer might design, but through training. Furthermore, a deep network may learn to represent and recognize new types of features that a human might not have considered. In some implementations, a deep learning architecture may learn a hierarchy of features. If presented with visual data, for example, the first layer may learn to recognize relatively simple features, such as edges, in the input stream. In another example, if presented with auditory data, the first layer may learn to recognize spectral power in specific frequencies. The second layer, taking the output of the first layer as input, may learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data. Further layers may learn to represent complex shapes in visual data or words in auditory data. Still further layers may learn to recognize common visual objects or spoken phrases. Deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure. For example, the classification of motorized vehicles may benefit from first learning to recognize wheels, windshields, and other features. These features may be combined at higher layers in different ways to recognize cars, trucks, and airplanes. Neural networks may be designed with a variety of connectivity patterns. For example, in feed-forward networks, information is passed from lower to higher layers, with each neuron in a given layer communicating to neurons in higher layers. Neural networks may also have recurrent or feedback (also called top-down) connections. In a recurrent connection, the output from a neuron in a given layer may be communicated to another neuron in the same layer. 
A recurrent architecture may be helpful in recognizing patterns that span more than one of the input data chunks that are delivered to the neural network in a sequence. A connection from a neuron in a given layer to a neuron in a lower layer is called a feedback (or top-down) connection. A network with many feedback connections may be helpful when the recognition of a high-level concept may aid in discriminating the particular low-level features of an input. The connections between layers of a neural network may be fully connected or locally connected. In a fully connected neural network, a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer will receive input from every neuron in the first layer. In a locally connected neural network, a neuron in a first layer may be connected to a limited number of neurons in the second layer. More generally, a locally connected layer of the locally connected neural network may be configured so that each neuron in a layer will have the same or a similar connectivity pattern, but with connection strengths that may have different values. The locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer, because the higher layer neurons in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network. One example of a locally connected neural network is a convolutional neural network. The convolutional neural network may be configured such that the connection strengths associated with the inputs for each neuron in the second layer are shared. Convolutional neural networks may be well suited to problems in which the spatial location of inputs is meaningful. The processing of each layer of a convolutional network may be considered a spatially invariant template or basis projection.
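The difference between fully connected, locally connected, and convolutional (shared-weight) layers can be made concrete by counting parameters. The helper names below are hypothetical, used only to illustrate the comparison:

```python
def fully_connected_params(n_in, n_out):
    # every neuron in the first layer feeds every neuron in the second,
    # so there are n_in * n_out weights, plus one bias per output neuron
    return n_in * n_out + n_out

def locally_connected_params(n_out, receptive_field, shared):
    """Each output neuron sees only `receptive_field` inputs. If the
    connection strengths are shared across neurons (the convolutional
    case), a single filter of that size serves every output position."""
    if shared:
        return receptive_field + 1           # one kernel + one bias
    return n_out * (receptive_field + 1)     # distinct weights per neuron
```

Weight sharing is what lets a convolutional layer keep its parameter count independent of the spatial size of its output.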
If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information. The outputs of the convolutional connections may be considered to form a feature map in the subsequent layer, with each element of the feature map receiving input from a range of neurons in the previous layer and from each of the multiple channels. The values in the feature map may be further processed with a non-linearity, such as a rectification, max(0,x). Values from adjacent neurons may be further pooled, which corresponds to down sampling, and may provide additional local invariance and dimensionality reduction. One type of convolutional neural network is a deep convolutional network (DCN). Deep convolutional networks (DCNs) are networks of convolutional layers, configured with additional pooling and normalization layers. DCNs have achieved state-of-the-art performance on many tasks. DCNs can be trained using supervised learning in which both the input and output targets are known for many exemplars and are used to modify the weights of the network by use of gradient descent methods. DCNs may be feed-forward networks. In addition, as described above, the connections from a neuron in a first layer of a DCN to a group of neurons in the next higher layer are shared across the neurons in the first layer. The feed-forward and shared connections of DCNs may be exploited for fast processing. The computational burden of a DCN may be much less, for example, than that of a similarly sized neural network that comprises recurrent or feedback connections. In some embodiments, a DCN may be designed to recognize visual features from an image input from an image capturing device130, such as a car-mounted camera. 
The DCN of the current example may be trained to identify traffic signs and a number provided on the traffic sign. Of course, the DCN may be trained for other tasks, such as identifying lane markings or identifying traffic lights. These are just some example tasks, and many others are possible. The DCN may be trained with supervised learning. During training, the DCN may be presented with an image, such as a speed limit sign, and a forward pass may then be computed to produce an output. The DCN may include a feature extraction section and a classification section. Upon receiving the image, a convolutional layer may apply convolutional kernels (not shown) to the image to generate a first set of feature maps. As an example, the convolutional kernel for the convolutional layer may be a 5×5 kernel that generates 28×28 feature maps. The number of convolutional kernels applied to an image may be correlated to a number of feature maps generated in the first set of feature maps. For example, where four different feature maps are generated in the first set of feature maps, four different convolutional kernels may be applied to the image at the convolutional layer. The convolutional kernels may also be referred to as filters or convolutional filters. A first set of feature maps may be subsampled by a max pooling layer to generate a second set of feature maps. The max pooling layer reduces the size of the first set of feature maps. That is, a size of the second set of feature maps, such as 14×14, is less than the size of the first set of feature maps, such as 28×28. The reduced size provides similar information to a subsequent layer while reducing memory consumption. The second set of feature maps may be further convolved via one or more subsequent convolutional layers to generate one or more subsequent sets of feature maps. Feature maps in a DCN may be convolved to generate one or more feature vectors.
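The feature-map sizes quoted above (a 5×5 kernel producing 28×28 maps, max-pooled down to 14×14) follow from standard size arithmetic. The sketch below assumes a 32×32 input and "valid" convolution, which are not stated in the text:

```python
import numpy as np

def conv_output_size(n, k, stride=1, pad=0):
    # output side length: floor((n + 2*pad - k) / stride) + 1
    return (n + 2 * pad - k) // stride + 1

def max_pool2d(x, size=2):
    """Non-overlapping max pooling over a 2-D array; both side lengths
    must be divisible by `size`."""
    h, w = x.shape
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

With a 32×32 input, `conv_output_size(32, 5)` gives 28, and 2×2 pooling halves that to 14, matching the sizes in the example.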
Each feature of the feature vector may correspond to a possible feature of an image, and a softmax function generates a probability for each feature. The output of the DCN may thus be a probability that the input image includes one or more features. Before training, the output produced by the DCN is likely to be incorrect. Thus, an error may be calculated between the output produced by the DCN and a target output. The target output is the ground truth of the image. The weights of the DCN may then be adjusted so the output of the DCN is more closely aligned with the target output. To adjust the weights, a learning algorithm may compute a gradient vector for the weights. The gradient may indicate an amount that an error would increase or decrease if the weight were adjusted. At the top layer, the gradient may correspond directly to the value of a weight connecting an activated neuron in the penultimate layer and a neuron in the output layer. In lower layers, the gradient may depend on the value of the weights and on the computed error gradients of the higher layers. The weights may then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as "back propagation" as it involves a "backward pass" through the neural network. In practice, the error gradient of weights may be calculated over a small number of examples, so that the calculated gradient approximates the true error gradient. This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level. After learning, the DCN 100 may be presented with new images, and a forward pass through the network may yield an output 122 that may be considered an inference or a prediction of the DCN. Finally, deep belief networks (DBNs) are probabilistic models comprising multiple layers of hidden nodes.
DBNs may be used to extract a hierarchical representation of training data sets. A DBN may be obtained by stacking up layers of Restricted Boltzmann Machines (RBMs). An RBM is a type of artificial neural network that can learn a probability distribution over a set of inputs. Because RBMs can learn a probability distribution in the absence of information about the class to which each input should be categorized, RBMs are often used in unsupervised learning. Using a hybrid unsupervised and supervised paradigm, the bottom RBMs of a DBN may be trained in an unsupervised manner and may serve as feature extractors, and the top RBM may be trained in a supervised manner (on a joint distribution of inputs from the previous layer and target classes) and may serve as a classifier. The deep learning architectures discussed above generally may be trained using a training data set. The resulting model generated during training may be defined as a neural network with high precision floating point weights, such as 32-bit single precision floating point numbers or 64-bit double precision floating point numbers.

Adaptive Quantization for Efficient Execution of Machine Learning Models

Computer systems may perform operations on various numerical types of data. These data types may include integers and floating point numbers. Integers are generally whole numbers that may be represented by any sequence of bits, with a maximum and minimum value defined by the number of bits in the representation and whether the integer is signed or unsigned. Generally, the maximum value of an unsigned integer may be calculated as 2^n − 1 for any bit size n. The minimum value of a signed integer may be calculated as −2^(n−1), and the maximum value of a signed integer may be calculated as 2^(n−1) − 1 for any bit size n. For example, an 8-bit integer may range in value from 0 to 255 in an unsigned representation and from −128 to 127 in a signed representation.
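The integer ranges above can be computed directly:

```python
def int_range(n_bits, signed):
    """(min, max) representable by an n-bit integer.

    Unsigned: 0 .. 2**n - 1.
    Signed (two's complement): -2**(n-1) .. 2**(n-1) - 1.
    """
    if signed:
        return -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    return 0, 2 ** n_bits - 1
```

`int_range(8, signed=False)` and `int_range(8, signed=True)` reproduce the 0 to 255 and −128 to 127 ranges of the 8-bit example.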
As the number of bits increases, the number of possible values increases. Floating point numbers, however, are represented in a more complex manner. Typically, floating point numbers are defined in terms of a bit reserved for a sign (positive or negative), a number of exponent bits, and a number of precision bits. Because integer and floating point numbers are represented differently, mathematical operations may involve different levels of computational expense based on whether a mathematical operation is operating on integers or floating point numbers. For example, addition of two integers may be a trivial bitwise operation in which each bit is combined and overflow is carried to the next bit. However, floating point operations may be more complex, as multiple operations may be performed to combine the exponent and precision bits, and a multiplication operation may be performed based on the exponent and precision bits to generate a result. Thus, integer-based logic may be implemented on simpler, more power efficient hardware than floating-point based logic. Many types of computational hardware blocks may be used to run an inference, including, for example: a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a neural processing unit (NPU), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), and other custom hardware. To improve the efficiency of inference performance using these computational hardware blocks, model parameters, such as model weights, may be quantized and reduced in size from a number n bits to a smaller number of bits and/or from floating point to integer representations. 
For example, weights may be quantized for each layer in a machine learning model by finding a distribution of the weights and the maximum and minimum value of the weights in a floating point representation of the machine learning model, and weights may be mapped to an integer space (e.g., 64-bit long integer, 32-bit integer, 16-bit short integer) using the maximum and minimum value of the weights to account for the reduced dynamic range of the integer space. Quantization of weights to data types represented by fewer bits (e.g., quantizing a double precision floating point number to a single precision floating point number) may provide for acceptable inference performance; however, if the dynamic range of the floating point weights is high and weights are not clustered, the network may need to be retrained. Inside integer-based inference engines used to generate an inference representing some predicted data from a given input value, activation statistics tracking when neurons in a neural network are activated may be accumulated in high dynamic range registers (e.g., registers that can accommodate 32 bit integers or higher bit-size integers). When processing of layers in the neural network is completed, the activation statistics stored in these registers may be quantized to a smaller representation (e.g., 8-bit integers or 16-bit integers) and written to memory, such as on-chip static random access memory (SRAM) or dynamic random access memory (DRAM). To quantize accumulated activation statistics, a sample of representative inputs may be provided a priori, and the machine learning model may be executed to identify the maximum and minimum values for parameters in each layer of the machine learning model. 
The activation statistics may be quantized to an n-bit value for an activation parameter x according to the equation:

$$x_{\text{quantized}} = \left[\frac{x_{\text{register}} - x_{\min}}{x_{\max} - x_{\min}} \cdot 2^n\right]$$

While quantizing activation statistics may also allow for execution of inferences using smaller data types, situations may exist where the value of an activation register overflows or underflows the quantization parameter. For example, an unsigned 8-bit register may accumulate beyond 2^8 − 1 = 255, thus rolling the register's value over to 0, or decrement below 0, thus rolling the register value back to 255, which results in poor inference accuracy. Developers generally do not quantize weights or biases in machine learning models prior to deployment, thus deploying a machine learning model with large floating point weights. These models may be executed using high performance, high power consumption hardware (e.g., "big" compute cores in a heterogeneous multicore processor, graphics processing units, etc.), even when inferences could be performed with sufficient accuracy using smaller data representations and less power-hungry hardware (e.g., "small" compute cores in a heterogeneous multicore processor). Where models are quantized, the quantization may be tested with some inputs to verify that the quantized model works for those outputs; however, real-life inputs may result in inaccurate or failed inferences. Further, because processing cores that support large floating point numbers (sometimes referred to as "high performance cores") may have a larger size than processing cores that are provisioned for performing tasks efficiently (e.g., NPUs, DSPs, accelerators) or processing cores that support only integer (fixed point) operations (sometimes referred to as "efficient cores"), processors may be designed with a smaller number of high performance cores than efficient cores.
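A sketch of the quantization equation above, with an added clamp so out-of-calibration values saturate rather than wrap; the clamp is an illustrative mitigation for the rollover hazard just described, not part of the equation itself:

```python
import numpy as np

def quantize_activation(x_register, x_min, x_max, n_bits=8):
    """x_q = round((x - x_min) / (x_max - x_min) * 2**n), clamped to
    [0, 2**n - 1]. The uint8 cast assumes n_bits <= 8."""
    q = np.round((x_register - x_min) / (x_max - x_min) * 2 ** n_bits)
    return np.clip(q, 0, 2 ** n_bits - 1).astype(np.uint8)

# Without such a clamp, a uint8 accumulator silently wraps around:
acc = np.array([255], dtype=np.uint8)
wrapped = acc + 1   # 255 + 1 rolls over to 0
```

A value of 20.0 against a calibration range of [0, 10] saturates to 255 here, instead of wrapping to a small value as the raw register would.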
To allow for inferences to be performed on efficient cores while maintaining a sufficient degree of accuracy in the results generated by executing inferences using a quantized machine learning model, embodiments described herein provide various techniques for adaptively quantizing parameters used by a machine learning model and switching between performing inferences using a high accuracy model on high performance cores and performing inferences using quantized models on efficient cores. Generally, execution of inferences may begin using a high accuracy model on high performance cores (e.g., using a machine learning model with floating point weights), and inferences may be performed in parallel on one or more high efficiency cores using one or more sets of quantized weights (e.g., weights quantized to reduced-size representations, such as 16-bit integer, to 8-bit integer, 4-bit integer, etc., relative to the floating point weights included in a high accuracy model) until a system is confident that one of the models executing with quantized weights (i.e., one of the high efficiency models) is able to generate sufficiently accurate inferences relative to the inferences generated by the high accuracy model. Likewise, while generating inferences using a high efficiency model, inferences can be periodically generated using the high accuracy model to refine parameter quantization and to determine whether to re-quantize the model. By quantizing machine learning parameters and continually optimizing the quantized weights, embodiments described herein allow for the generation of accurate inferences using efficient cores for models that are generated using higher accuracy data types (e.g., 32-bit single precision floating point or 64-bit double precision floating point). 
Using efficient cores and quantized weights to perform inferences may allow for power savings relative to using high-performance cores and machine learning models with floating point weights, which may provide for improved battery life on battery-powered devices on which inferences are executed, reduced power usage and heat generation by devices on which inferences are executed, and the like. Further, embodiments described herein allow for re-quantization of parameters for high efficiency models as operating conditions change (e.g., as inferences decrease in accuracy or as the data on which an inference is to be performed changes). FIG. 2 illustrates example operations 200 for performing inferences using adaptively quantized machine learning models, according to embodiments described herein. Generally, adaptively quantizing machine learning models may allow for adaptively executing inferences on a computing device. Operations 200 may be performed by a computing device with one or more processors (e.g., CPU, DSP, GPU, etc.) implementing a machine learning model, such as described with respect to FIG. 8, below. As illustrated, operations 200 begin at block 202, where the computing device receives weight information for a machine learning model to be executed on the computing device. The received weight information for the machine learning model may be high-precision information, such as 32-bit floating point or 64-bit floating point numbers, that may be generated during training of the machine learning model. This model may be designated as a "high accuracy" model which can be used, as discussed in further detail below, to determine whether inferences generated using a quantized model have an accuracy within a threshold amount relative to inferences generated using the high accuracy model. At block 204, the computing device quantizes the received weight information into a representation having a reduced bit size relative to the received weight information.
The representation having a reduced bit size relative to the received weight information may be referred to as quantized weight information. The quantized weight information may be in a format for which computation is less intensive than computation using the received weight information. For example, where the received weight information is in a high-precision data type (e.g., 32-bit single precision floating point or 64-bit double precision floating point), the quantized weight information may be in smaller floating point numbers (e.g., 16-bit half precision floating point) or in integers (e.g., 32-bit long integer, 16-bit short integer, etc.). In some embodiments, the computing device can reduce the weight information into a plurality of sets of quantized weight information associated with different quantization levels to be tested during execution of the machine learning model to identify an optimal level of quantization, or a level of quantization that results in sufficient inference accuracy relative to inference accuracy for inferences performed using the high accuracy model. In some embodiments, the computing device can reduce the weight information by quantizing the weight information to a first bit size and determine, as discussed below, whether inferences performed using the quantized weight information at the first bit size are sufficiently accurate relative to inference accuracy for inferences performed using the high accuracy model.
If inferences performed using the quantized weight information at the first bit size are sufficiently accurate, the computing device can reduce the weight information to a second quantization level that is lower than the first quantization level (e.g., quantizing floating point data into an 8-bit fixed point representation, if the first quantization level quantized floating point data into a 16-bit fixed point representation) and continually reduce the weight information to lower quantization levels until inferences are no longer sufficiently accurate relative to inference accuracy for inferences performed using the high accuracy model. In another embodiment, the computing device can quantize weight information from floating point to a minimal bit size representation (e.g., 1-bit or 2-bit integer) and determine whether inferences performed using the minimal bit size representation have sufficient accuracy relative to inference accuracy for inferences performed using the high accuracy model. If inferences performed using the minimal bit size representation are not sufficiently accurate, the computing device can quantize weight information from floating point to successively larger integer quantization levels until the accuracy of inferences performed using quantized weight information is sufficient relative to inference accuracy for inferences performed using the high accuracy model. At block 206, the computing device performs first inferences using the model and the received weight information. In some embodiments, when the computing device performs first inferences using the model and the received weight information (e.g., performs first inferences using the high accuracy model), the computing device can determine statistics on the dynamic range of various activation statistics for each activation layer in a machine learning model.
The dynamic range may be maximum and minimum values at each activation layer in the machine learning model that can be used, as discussed above, to reduce the weight information into quantized weight information and quantize other parameters for the machine learning model. At block 208, the computing device performs second inferences using the model and the quantized weight information. At block 210, the computing device compares the results of the first inferences and the second inferences. Generally, in comparing the results of the first and the second inferences, the computing device can treat the results of the first inferences as "ground truth" data and determine whether the second inferences are within a threshold accuracy level of the first inferences. To compare the results of the first inferences and the second inferences, the computing device may examine overflow/underflow statistics for each inference performed using the high efficiency (quantized) machine learning model to determine whether the overflow/underflow statistics for quantized data are within an acceptable range. This range may, in some embodiments, be a pre-defined range set by a developer of a machine learning model or by the computing device for generating quantized weights and activation statistics for a machine learning model. The computing device can also examine the accuracy of each of the second inferences relative to the corresponding first inferences to determine whether the high efficiency model used to generate the second inferences can generate sufficiently accurate inferences relative to the high accuracy model used to generate the first inferences. At block 212, the computing device determines that the results of the second inferences are within a threshold performance level of results of the first inferences. At block 214, based on the determination, the computing device performs one or more subsequent inferences using the model and the quantized weight information.
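The comparison at blocks 210 and 212, treating the first inferences as ground truth and checking both an accuracy delta and overflow/underflow statistics, might be sketched as follows (an illustrative Python sketch; the threshold values are assumptions, not drawn from the embodiments above):

```python
def within_threshold(ha_result, he_result, overflow_events,
                     max_error=0.05, max_overflows=0):
    """Decide whether a high efficiency (quantized) inference is close
    enough to the high accuracy inference to keep using quantized weights.
    Treats the high accuracy result as ground truth, as described above."""
    error = abs(ha_result - he_result)   # accuracy delta vs. ground truth
    return error <= max_error and overflow_events <= max_overflows
```

A result that is accurate but accompanied by register overflow events would fail the check, as would an accurate-looking result outside the error bound.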
In some embodiments, the received weight information may comprise a floating point representation of weights in the model. The quantized weight information may comprise an integer approximation of the floating point representation of the weights in the model. In some embodiments, the quantized weight information may comprise a plurality of weight sets, each weight set having a different bit size. Performing the second inferences may entail performing an inference using the model and each of the plurality of weight sets. The computing device can determine that results of the second inference are within the threshold performance level of results of the first inference by identifying a weight set of the plurality of weight sets with a result having a performance closest to the threshold performance level and returning the result associated with the identified weight set as the result of the second inference. While executing inferences using a high accuracy model and multiple high efficiency models may use more power than executing inferences using a high accuracy model, the computing device may execute inferences using the high accuracy model and multiple high efficiency models for a limited amount of time prior to selecting one of the multiple high efficiency models for future use. After the one of the multiple high efficiency models is selected, power usage for inferences performed on the computing device may decrease to a level below the power usage for inferences performed using the high accuracy model and may remain below the power usage for inferences performed using the high accuracy model until the computing device resumes performing inferences using the high accuracy model. In some embodiments, performing the one or more subsequent inferences using the model and the quantized weight information comprises performing the one or more subsequent inferences using the identified weight set of the plurality of weight sets. 
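Identifying a weight set among the plurality of candidate weight sets, as described above, might look like the following illustrative sketch (Python; the candidate mapping and error bound are assumptions):

```python
def pick_weight_set(candidates, ha_result, max_error=0.05):
    """Among candidate quantized weight sets, given as a mapping of
    bit size -> inference result, pick the smallest bit size whose result
    stays within max_error of the high accuracy result. Returns a
    (bits, result) pair, or None if no candidate qualifies."""
    viable = [(bits, r) for bits, r in candidates.items()
              if abs(r - ha_result) <= max_error]
    if not viable:
        return None
    # The smallest viable bit size is the cheapest to execute.
    return min(viable, key=lambda br: br[0])
```

When no candidate qualifies, the caller would continue with the high accuracy model, consistent with the behavior described above.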
In some embodiments, the quantized weight information comprises first quantized weights having a predefined bit size. The computing device can generate second quantized weights having a smaller bit size than the quantized weight information and perform a subset of the one or more subsequent inferences using the second quantized weights. The computing device can determine that results of the subset of the one or more subsequent inferences using the second quantized weights are within the threshold performance level of results of the subset of the one or more subsequent inferences using the model and the quantized weight information. Based on determining that the results of the subset of the one or more subsequent inferences using the second quantized weights are within the threshold performance level, the computing system can perform additional inferences beyond the one or more subsequent inferences using the model and the second quantized weight information. In some embodiments, while performing the second inference using the model and the first quantized weights, the computing device can generate second quantized weights from quantizing the received weight information, the second quantized weights having a larger bit size than the first quantized weights. The computing device can determine that results of the second inference using the model and the quantized weight information are not within the threshold performance level of the results of the first inference for a threshold number of inferences and, based on the determination, perform additional inferences using the second quantized weights. In some embodiments, the performance level comprises an accuracy difference relative to the first inference and a size of overflow or underflow relative to a supported range of values for each layer in the model, given a bit size of the quantized weight information. 
In some embodiments, the computing device may adjust the threshold performance level based on an amount of difference between a current input and a previous input for which inferences are to be performed using the model. An accuracy difference threshold may be increased as differences between a current input and a previous input increase and may be decreased as the current input and previous input converge. For example, in a case where successive video frames include the same actors and a consistent motion vector, an accuracy difference threshold may be decreased, as the model may be expected to converge on similar inference results for similar inputs over time. Correspondingly, in a case where successive video frames introduce new actors or change a motion vector of one of the actors, an accuracy difference threshold may be increased to account for uncertainty in the inferences generated from the new actors or changed motion vector. In some embodiments, the computing device determines that a difference between a current input and a previous input for which inferences are to be performed using the model exceeds a threshold amount of change. Based on the determination, the computing device can perform inferences on the current input and one or more additional inferences using the model and the received weight information. In some embodiments, the computing device performs inferences for a subset of the one or more subsequent inferences using the model and the received weight information and determines that results of the subset of the one or more subsequent inferences using the model and the quantized weight information are outside the threshold performance level (e.g., have lower accuracy and/or higher overflow/underflow statistics) relative to results of the subset of the one or more subsequent inferences using the model and the received weight information.
Based on the determination that the results of the subset of the one or more subsequent inferences are outside the threshold performance level, the computing device performs additional inferences using the model and the received weight information. In some embodiments, the computing device can refine the quantized weight information based on results of the one or more subsequent inferences executed using the model and the quantized weight information, the refined quantized weight information comprising ranges of values to use in performing inferences using the model and the refined quantized weight information. In some embodiments, each inference of the second inferences is performed according to a periodicity defining a number of first inferences to be performed prior to performing one of the second inferences. In some embodiments, a subset of the one or more subsequent inferences are also performed using the received weight information. The subset of the one or more inferences may be a periodic sampling of the one or more subsequent inferences, and the periodicity of the periodic sampling may be determined from a performance difference between results of the subset of the one or more subsequent inferences performed using the received weight information and results of the subset of the one or more subsequent inferences performed using the quantized weight information. In some embodiments, the computing device saves the quantized weight information prior to halting inference performance using the model and the received weight information. The quantized weight information may be saved, for example, when the computing device transitions from a high accuracy mode to a high efficiency mode. When the computing system re-enters a high efficiency mode, the computing system can resume performance of inferences using the model and the saved quantized weight information without regenerating the quantized weight information. 
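The input-change-driven threshold adjustment and the fallback to the received weights described above might be sketched as follows (illustrative Python; the linear scaling, the change limit, and the mode labels are assumptions, not drawn from the embodiments):

```python
def threshold_and_mode(base_threshold, input_change,
                       change_limit=0.5, change_scale=1.0):
    """Loosen the accuracy-difference threshold as the input changes more,
    and fall back to the high accuracy model when the change between the
    current and previous inputs exceeds a limit, per the discussion above."""
    if input_change > change_limit:
        # Input differs too much from the previous one: perform this
        # inference (and some additional ones) with the received weights.
        return None, "high_accuracy"
    # Otherwise scale the threshold linearly with the amount of change.
    return base_threshold * (1.0 + change_scale * input_change), "high_efficiency"
```

A stable input stream keeps the threshold at its base value, while a large scene change forces a temporary return to the high accuracy model.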
In some embodiments, the quantized weight information may comprise individual quantized weights for each activation layer in the model. Quantized weights for a respective layer in the model may have a bit size independent of a bit size of quantized weights for other layers in the model. In some embodiments, the computing device performs inferences using the received weight information on a first set of processing cores in a multicore processor and performs inferences using the quantized weight information on a second set of cores in the multicore processor. In some embodiments, the first inferences and the second inferences are performed in parallel across different processing cores in a multicore processor. In some embodiments, the first inferences are performed on a first type of processor; and the second inferences are performed on a second type of processor. In some embodiments, the first type of processor may be high-performance processors or processing cores in a multicore processor, and the second type of processor may be high-efficiency processors or processing cores in a multicore processor. Generally, by performing inferences on the second type of processor using quantized weight information, power savings can be realized relative to performance of inferences on the first type of processor using the received weight information. These power savings may, for example, provide for improved battery life for mobile devices on which inferences are performed, reduced power usage and heat generation by computing devices on which inferences are performed, and the like. In some embodiments, the first inferences are performed on a first set of processing cores in a heterogeneous multicore processor, and the second inferences are performed on a second set of processing cores in the heterogeneous multicore processor. 
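Per-layer dynamic range tracking with an independent bit size for each layer, as discussed above, might be sketched as follows (illustrative Python; the range-to-bit-width heuristic and its cutoff are assumptions):

```python
class LayerRangeTracker:
    """Track per-layer min/max activation values across high accuracy
    inferences and derive an independent bit width for each layer."""

    def __init__(self):
        self.ranges = {}  # layer name -> (min, max) seen so far

    def observe(self, layer, values):
        """Widen the stored range for a layer with newly observed values."""
        lo, hi = min(values), max(values)
        if layer in self.ranges:
            old_lo, old_hi = self.ranges[layer]
            lo, hi = min(lo, old_lo), max(hi, old_hi)
        self.ranges[layer] = (lo, hi)

    def bit_width(self, layer, narrow=8, wide=16, cutoff=10.0):
        """Layers with a small dynamic range get the narrower
        representation; layers with a large range get the wider one."""
        lo, hi = self.ranges[layer]
        return narrow if (hi - lo) <= cutoff else wide
```

Each call to `observe` only widens a layer's recorded range, matching the refinement over multiple inferences described above.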
In some embodiments, a delay factor, or hysteresis, may be used to control when a computing device performs the one or more subsequent inferences using the model and the quantized weight information. The delay factor may be set such that a number of inferences are to be performed using both the high accuracy and the high efficiency models to verify that the high efficiency model is consistently generating inferences that are sufficiently accurate and with overflow/underflow statistics that are below an acceptable level. FIG. 3 illustrates an example sequence of operations 300 that may be performed to activate and deactivate a high efficiency quantized mode for performing inferences on data. As illustrated, the operations 300 may begin at block 302, where a system receives floating point weights for a machine learning model to be executed on the system. The floating point weights may be weights previously determined by a model training system and may be used for performing inferences using the machine learning model by executing the machine learning model on high performance processors, such as processors capable of performing operations on large bit-size floating point numbers (e.g., 16-bit half precision floating point, 32-bit single precision floating point, 64-bit double precision floating point, etc.). At block 304, the system generates quantized weights from the floating point weights. The quantized weights may be generated by reducing the floating point weights into one or more integer approximations of the weights (e.g., 16-bit, 8-bit, 4-bit, etc. integers). As discussed, by reducing floating point weights into integer approximations of the weights, embodiments of the present disclosure may allow for machine learning models to be executed on more power efficient processors that may not be capable of performing floating point operations or may not be capable of performing such operations with acceptable performance.
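Reducing floating point weights to an integer approximation, as at block 304, might be sketched with a symmetric scale factor (illustrative Python; the specific quantization scheme is an assumption, as the embodiments above do not mandate one):

```python
def quantize_weights(weights, n_bits=8):
    """Reduce floating point weights to an integer approximation plus a
    scale factor, so that each weight w is approximated by q * scale."""
    q_max = (1 << (n_bits - 1)) - 1            # e.g. 127 for 8-bit signed
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / q_max
    # Round each weight onto the signed integer grid and clamp to range.
    quantized = [max(-q_max, min(q_max, round(w * q_max / max_abs)))
                 for w in weights]
    return quantized, scale
```

The integer list can then be consumed by integer-only efficient cores, with the single floating point scale applied once per layer.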
In some embodiments, the system can generate quantized weights successively from larger to smaller bit-size quantized weights until the accuracy of inferences performed using the quantized weights falls below a threshold accuracy level. In some embodiments, the system can generate quantized weights successively from smaller to larger bit-size quantized weights until the accuracy of inferences performed using the quantized weights reaches or exceeds a threshold accuracy level. In some embodiments, the system can generate a plurality of quantized weight sets and select the quantized weights having optimal performance parameters, such as a smallest number of bits in the integer representation of the quantized weights that results in inference accuracy being at or above a threshold accuracy level and overflow/underflow statistics being under a threshold level. At block 306, the system performs inferences using floating point weights and quantized weights and quantized parameters generated from the performance of inferences using the floating point weights. The quantized parameters may be, for example, quantized activation statistics generated from ranges of values identified during execution of inferences using the floating point weights. In some embodiments, the quantized parameters for each layer of the machine learning model may be quantized to a common bit size (e.g., the quantized parameters may be quantized to an n-bit representation for every layer in the machine learning model).
In some embodiments, the quantized parameters may be quantized to different bit sizes for each layer of the machine learning model such that layers of the machine learning model with smaller dynamic range (e.g., a smaller difference between maximum and minimum values) are quantized to an integer representation using a smaller number of bits, while layers of the machine learning model with larger dynamic range (e.g., a larger difference between maximum and minimum values) are quantized to an integer representation using a larger number of bits. Generally, the performance of inferences using floating point and quantized weights and quantized parameters may be performed in parallel for a number of inferences such that the inference returned in response to a request to perform an inference is the inference generated using the floating point weights until, at block 308, it is determined that inferences performed using one of the sets of quantized weights and parameters provide sufficient accuracy relative to inferences performed using the floating point weights. At block 308, the system determines that inferences using quantized weights and parameters are within acceptable bounds. The bounds may be determined a priori by a developer of the machine learning model or may be implemented by the system as a threshold accuracy delta between inferences generated using the floating point weights and inferences generated using quantized weights. At block 310, the system refines the quantized parameters based on differences between inferences performed using the floating point weights and inferences performed using the quantized weights. As discussed, the quantized parameters may be refined based on minimum and maximum values identified in each layer of the machine learning model when the machine learning model is executed using the floating point weights for multiple inferences.
By refining quantized parameters over multiple inferences, a more accurate quantization may be performed by accounting for outliers in maximum and minimum values identified during execution of these multiple inferences or by expanding and contracting the range of values based on different inputs that the computing device may perform inferences on. The refined quantized parameters may be refined using a common bit size across a plurality of layers of the machine learning model or may be refined using different bit sizes for each layer of the machine learning model based on the dynamic range of values seen during inference performance using floating point weights in the machine learning model. At block 312, the system performs subsequent inferences using floating point weights and quantized weights and the refined quantized parameters. The system may repeatedly perform inferences using floating point and quantized weights and refine the quantized parameters until a threshold number of inferences are performed using the quantized weights and quantized parameters. Generally, the threshold number of inferences may be a number of inferences having inference accuracy within a threshold accuracy level relative to inferences performed using the floating point weights. At block 314, the system determines that inference performance using the refined quantized parameters meets an accuracy threshold and that overflow/underflow statistics are within an acceptable range. The overflow/underflow statistics may, for example, be a running counter over a time window of a number of times that a variable in the neural network overflows from a maximum value to a minimum value for a given bit size representation or underflows from a minimum value to a maximum value for the given bit size representation. The overflow/underflow statistics may be calculated globally, for the entirety of the machine learning model, or on a per-layer basis.
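The running-window overflow/underflow counter described above might be sketched as follows (illustrative Python; the window length and event limit are assumptions):

```python
from collections import deque

class OverflowMonitor:
    """Running count of register overflow/underflow events over a sliding
    window of recent inferences, compared against an acceptable limit."""

    def __init__(self, window=100, max_events=5):
        self.window = deque(maxlen=window)   # per-inference event counts
        self.max_events = max_events

    def record(self, events_this_inference):
        """Append the event count for one inference; the deque silently
        drops the oldest count once the window is full."""
        self.window.append(events_this_inference)

    def acceptable(self):
        """True while the windowed total stays within the limit."""
        return sum(self.window) <= self.max_events
```

A per-layer variant would simply keep one monitor per layer, matching the per-layer statistics option described above.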
At block 316, the system performs inferences using the quantized weights and quantized parameters. At block 318, the system periodically performs an inference using the floating point weights. The system may be configured to perform inferences using the floating point weights as a check against inferences performed using the quantized weights, and the inferences may be performed for every mth inference request. At block 320, the system determines that inference performance using the quantized weights and quantized parameters is outside of acceptable bounds. Inference performance being outside of acceptable bounds may include, for example, inference accuracy for an inference performed using the quantized weights and parameters being below a threshold accuracy level relative to inference accuracy for inferences performed using the floating point weights. In some embodiments, inference performance being outside of acceptable bounds may include layer overflow/underflow statistics reaching a threshold number of overflow/underflow instances or affecting a threshold percentage of inferences performed using the quantized weights and quantized parameters. Based on the determination that inference performance using the quantized weights and parameters is outside of acceptable bounds, the system can determine that inferences performed using the quantized weights and parameters are not sufficiently accurate for continued use of the quantized weights and parameters. At block 322, based on the determination that inference performance using the quantized weights and quantized parameters is outside of acceptable bounds, the system performs inferences using the floating point weights to generate new quantized weights.
As discussed above, the system can resume a process of quantizing weights and parameters based on inferences performed using the floating point weights until inference performance using quantized weights and parameters reaches a threshold level of accuracy and an acceptable level of data overflow/underflow within the machine learning model. In some embodiments, the new quantized weights may be generated based on an initial inference performed using the floating point weights, and these new quantized weights may be refined based on subsequent inferences performed using the floating point weights until the new quantized weights are determined to be sufficiently accurate for use in executing future inferences.

Example Operations for Switching from a High Accuracy Mode to a High Efficiency Mode for Performing Inferences

FIG. 4 illustrates a flow chart for switching from a high accuracy mode in which inferences are performed using high-precision floating point weights to a high efficiency mode in which inferences are performed using lower-precision parameters, according to embodiments described herein. As illustrated, switching from a high accuracy mode to a high efficiency mode starts at block 402, where a system receives a high accuracy floating point representation of a machine learning model. The high accuracy floating point representation of the machine learning model may be the model generated by a machine learning model trainer and deployed to the system for execution (e.g., included in an application binary, downloaded from a remote computing system, etc.). At block 404, the system generates quantized weights for one or more high efficiency integer representations of the machine learning model.
As discussed, the high efficiency integer representations may be reduced bit-size representations of the high accuracy floating point representation of the machine learning model that trade off some accuracy for efficiency in operation, as inferences performed using the high efficiency integer representations may be executed on more power-efficient processing units than inferences performed using the high accuracy floating point representation. At block 406, the system executes an inference using the high accuracy representation of the machine learning model on high accuracy hardware. High accuracy hardware may be processors or processing cores that can perform floating point operations, such as cores designated as high performance cores in a heterogeneous multicore processor (e.g., "big" cores in a big.LITTLE architecture), graphics processing units, tensor processing units, neural processing units, and/or other high performance processing units. At block 408, the system saves the results of performing the inference using the high accuracy representation of the machine learning model for future use. Execution of inferences using the high accuracy representation of the machine learning model and saving the results of the inference performance may repeat until the system switches from a high accuracy mode to a high efficiency mode. In parallel, the system, at block 410, accumulates statistics to define quantized activation parameters for the machine learning model. These statistics may include, for example, maximum and minimum values identified during execution of the machine learning model, and other information that may be used to quantize activation parameters for the machine learning model. At block 412, the system defines quantized activation parameters for each layer in the machine learning model.
The quantized activation parameters may be defined based on the accumulated overflow/underflow statistics and maximum and minimum values for activation parameters identified during execution of the machine learning model. In some embodiments, the quantized activation parameters may be quantized to a common bit size for the layers of the machine learning model or may be quantized to a bit size on a per-layer basis based on the dynamic range of values seen during execution of the machine learning model in each layer of the machine learning model. At block 414, the system executes inferences using high efficiency representations of the machine learning model for one or more inferences executed using the high accuracy representation of the machine learning model. At block 416, the system saves the results of the inferences executed using the high efficiency representations of the machine learning model and the quantized activation parameters, as well as overflow and underflow statistics accumulated during execution of the inferences at block 414. The results of the inferences using the high efficiency representations of the machine learning model and the overflow and underflow statistics may be used, as discussed in further detail below, to determine whether the system can switch from inference performance in a high accuracy mode to inference performance in a high efficiency mode. At block 418, the system compares the accuracy of inference results generated by the high efficiency representations of the machine learning models to the inference results generated by the high accuracy representation of the machine learning model. The system can generate an accuracy measurement or other metric by comparing the result of an inference generated by the high accuracy model, which may be treated as "ground truth" or a most accurate inference, and inferences generated by each of the high efficiency models.
In some embodiments, the accuracy metric may be a difference between a value generated by the high accuracy representation of the machine learning model and each of a plurality of high efficiency representations of the machine learning model. At block 420, the system determines whether any high efficiency model is sufficiently accurate and has acceptable overflow and underflow statistics. A high efficiency model may be deemed to be sufficiently accurate, for example, if inferences generated by the high efficiency model are within a threshold amount of difference away from the corresponding inferences generated by the high accuracy model. If a model is deemed sufficiently accurate and has acceptable overflow and underflow statistics relative to an a priori defined acceptable number or percentage of overflow and/or underflow events, then at block 422, the system designates the most efficient high efficiency model (e.g., the high efficiency model quantized to the smallest bit size) that is sufficiently accurate and has acceptable overflow and underflow statistics as a selected high efficiency model and enters the high efficiency mode using the selected high efficiency model for the execution of subsequent inferences. The selected high efficiency model may be executed on high efficiency processors that use less power than the high accuracy hardware discussed above. These processors may include, for example, processors designated as high efficiency cores in a heterogeneous multicore processor (e.g., "little" cores in a big.LITTLE architecture), integer processing modules on a processor, or the like. If, however, at block 420, the system determines that no high efficiency model is sufficiently accurate and has acceptable overflow and underflow statistics, the system remains in the high accuracy mode, and operations return to block 406, where the system performs a subsequent inference using the high accuracy mode.
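The selection at blocks 420 and 422 of the most efficient sufficiently accurate model might be sketched as follows (illustrative Python; the candidate fields and limits are assumptions):

```python
def select_high_efficiency_model(candidates, max_error=0.05, max_events=5):
    """Keep only candidate high efficiency models that are accurate enough
    and have acceptable overflow/underflow event counts, then pick the
    smallest bit size (the most efficient). Each candidate is a dict with
    "bits", "error" (accuracy delta vs. the high accuracy model), and
    "events" (overflow/underflow count). Returns None if no candidate
    qualifies, in which case the system stays in high accuracy mode."""
    viable = [c for c in candidates
              if c["error"] <= max_error and c["events"] <= max_events]
    if not viable:
        return None
    return min(viable, key=lambda c: c["bits"])
```

Note that an accurate candidate can still be rejected solely on its overflow/underflow count, mirroring the two-part test at block 420.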
Example Operations for Performing Inferences in a High Efficiency Mode FIG.5illustrates an example flow chart for executing inferences in a high efficiency (HE) mode in which inferences are performed using lower-precision parameters and determining whether to switch to a high accuracy (HA) mode. As illustrated, executing inferences in a high efficiency mode begins at block502, where a system receives an inference request. The inference request generally includes the input on which the inference is to be performed. At block504, the system executes an inference in response to the request. Generally, the executed inference may be performed using a selected high efficiency model (e.g., a machine learning model and weights quantized to a data type that involves less complex computation than a set of weights associated with a high accuracy model). The quantized weights may be weights quantized prior to execution of inferences in the high efficiency mode or refined at block512, discussed below. At block506, the system saves the results of the inference and overflow/underflow statistics for the high efficiency representation of the model. In parallel, at block508, the system executes an inference using the high accuracy model for every mth inference request. At block510, the system saves the results of each inference executed using the high accuracy model, and at block512, the system refines quantization parameters for each layer of the high efficiency model based on statistics gathered from execution of inferences using the high accuracy model. As discussed above, the refined quantization parameters may include, for example, refined activation parameters (e.g., per-layer minimum and maximum values) generated based on minimum and maximum values identified in each layer of the machine learning model. At block514, the system determines whether the high efficiency model accuracy and overflow/underflow statistics are within an acceptable range.
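The high efficiency mode loop of blocks 502 through 510, in which every request runs through the high efficiency model and every mth request additionally runs through the high accuracy model, can be sketched as follows. The model callables, the default periodicity, and the return structure are illustrative assumptions.

```python
def run_he_mode(requests, he_model, ha_model, m=10):
    """Execute inferences in high efficiency mode.

    Every request is served by the high efficiency model (block 504);
    every mth request is also run through the high accuracy model
    (block 508) so the paired results can be saved for accuracy
    checking and quantization refinement (blocks 510/512).
    """
    paired_results = []
    for i, x in enumerate(requests):
        y_he = he_model(x)              # HE inference answers the request
        if i % m == 0:                  # periodic HA shadow inference
            y_ha = ha_model(x)
            paired_results.append((i, y_he, y_ha))
    return paired_results
```

In a real system the HA inference might run in parallel on separate hardware, as the source notes; the sequential loop here is only for clarity.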
To determine whether high efficiency model accuracy is within an acceptable range, the system compares the inference generated by the high efficiency model for the mth request to the inference generated by the high accuracy model for the mth request and determines an amount of difference between the inference generated by the high efficiency model and the inference generated by the high accuracy model. If the difference between the inference generated by the high efficiency model and the inference generated by the high accuracy model exceeds a threshold value, the system can determine that the high efficiency model accuracy is outside of an acceptable range. Overflow/underflow statistics may be accumulated in a counter that identifies a number of times an overflow or underflow situation is experienced during execution of inferences using the high efficiency model. The counter may count overflow and underflow events over a running window of time or may otherwise be periodically reset. If the counter exceeds a threshold value, the system can determine that high efficiency model overflow/underflow statistics are outside of the acceptable range. If the system determines, at block514, that high efficiency model accuracy is acceptable and overflow/underflow statistics are within the threshold value, the system may return to block502and execute a subsequent inference using the high efficiency model. Otherwise, at block516, the system exits the high efficiency mode and executes subsequent inferences using the high accuracy mode (e.g., as discussed above with respect toFIG.4).
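The block 514 check, combining the accuracy comparison against the high accuracy reference with a windowed overflow/underflow counter, can be sketched as follows. The window mechanics are an assumption; the source only requires that the counter cover a running window or be periodically reset.

```python
from collections import deque

class OverflowWindow:
    """Counts overflow/underflow events over a running window of steps."""

    def __init__(self, window, threshold):
        self.events = deque()       # (step, event_count) pairs
        self.window = window        # window length, measured in inferences
        self.threshold = threshold  # maximum acceptable event count

    def record(self, step, count):
        self.events.append((step, count))
        # Evict events that have fallen out of the running window.
        while self.events and self.events[0][0] <= step - self.window:
            self.events.popleft()

    def acceptable(self):
        return sum(c for _, c in self.events) <= self.threshold

def he_mode_ok(y_he, y_ha, max_error, window):
    """Block 514: stay in HE mode only if both criteria hold."""
    return abs(y_he - y_ha) <= max_error and window.acceptable()
```

When `he_mode_ok` returns False, control would pass to block 516 and subsequent inferences would use the high accuracy mode.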
Example Operations for Performing Inferences in a High Efficiency Mode and Switching to a High Accuracy Mode Based on Differences in Inputs FIG.6illustrates an example flow chart for executing inferences in a high efficiency mode in which inferences are performed using lower-precision parameters and determining whether to switch to a high accuracy mode based on a difference between a current input and a most recent input for which an inference was generated in a high accuracy mode. As illustrated,FIG.6adds block602to the flow chart illustrated inFIG.5such that after an inference is performed using the high efficiency model, a difference between the current input (e.g., the data specified in an inference request received at block502) and the input used in the most recent execution of an inference in the high accuracy mode is computed. This scenario may exist, for example, when an input data set changes. An input data set may change when inputs into an image recognition system change from one type of image to another type of image. In some embodiments, an input data set may change when a motion vector in an image changes such that a new actor with a new motion vector is added to an image or an existing actor changes a direction of motion. Of course, various other scenarios may exist in which there is a sufficient difference between different inputs such that previously quantized weights and activation parameters are no longer valid. If the difference between the current input and the input used in the most recent execution of an inference in the high accuracy mode exceeds a threshold value, the system can determine that the quantized weights and parameters may no longer be applicable to the data on which inferences are to be performed.
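The block 602 input-drift check can be sketched as follows. The Euclidean distance is an illustrative choice of difference measure; the source does not fix a metric, and the threshold is a hypothetical parameter.

```python
import math

def input_drift(current, last_ha_input):
    """Difference between the current input and the input used for the
    most recent high accuracy inference (illustrative L2 distance)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(current, last_ha_input)))

def should_exit_he_mode(current, last_ha_input, drift_threshold):
    """Block 602: a large drift suggests the quantized weights and
    activation parameters may no longer fit the input distribution."""
    return input_drift(current, last_ha_input) > drift_threshold
```

When this check fires, the system would exit the high efficiency mode and execute subsequent inferences in the high accuracy mode, as described below.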
Thus, if the difference between the current input and the input used in the most recent execution of an inference in the high accuracy mode exceeds a threshold value, the system can proceed to block616, where the system exits the high efficiency mode and executes subsequent inferences in the high accuracy mode. Example Operations for Performing Inferences in a High Efficiency Mode Using Multiple High Efficiency Models FIG.7illustrates an example flow chart for executing inferences in a high efficiency mode in which inferences are performed using lower-precision parameters and determining whether to switch to a high accuracy mode or a different high efficiency model. As illustrated,FIG.7replaces blocks508and510with blocks702and704, respectively, and adds block706to the flow chart illustrated inFIG.5. As illustrated, at block702, for every mthinference request, the system executes an inference using the high accuracy model and each of the available high efficiency models (e.g., the high efficiency models other than the selected high efficiency model that has been designated as the high efficiency model to be used in executing inferences while the system is in the high efficiency mode). At block704, the system saves the results of the inferences for the high accuracy model and each of the available high efficiency models. At block706, which is executed after the system determines that high efficiency model accuracy and overflow/underflow statistics are within an acceptable range, the system designates the high efficiency model having the highest efficiency and sufficient accuracy and overflow/underflow statistics as the selected high efficiency model. 
Generally, the system may select the high efficiency representation of the machine learning model with the smallest bit-size quantization that has sufficient accuracy and overflow/underflow statistics as the selected high efficiency model such that the selected high efficiency model used while the system is in the high efficiency mode is continually the most efficient model. To do so, the system can examine the accuracy and overflow/underflow statistics of the high efficiency models against accuracy and overflow/underflow statistic thresholds. Accuracy may be compared relative to inferences performed using the high accuracy model, and the overflow/underflow statistics may be examined relative to a maximum overflow/underflow count used to determine whether a high efficiency model has a bit size that is sufficient to avoid inaccuracies caused by repeated integer overflow/underflow scenarios. Example Software Architecture for Optimizing Machine Learning Model Performance Using High Accuracy and High Efficiency Models FIG.8is a block diagram illustrating an exemplary software architecture800that may modularize artificial intelligence (AI) functions. Using architecture800, applications may be designed that may cause various processing blocks of an SOC820(for example a CPU822, a DSP824, a GPU826, and/or an NPU828) to execute inference operations using high accuracy and high efficiency models, according to aspects of the present disclosure. The AI application802may be configured to call functions defined in a user space804that may, for example, perform inferences on a given input, as discussed above. The AI application802may, for example, configure a microphone and a camera differently depending on whether the recognized scene is an office, a lecture hall, a restaurant, or an outdoor setting such as a lake. The AI application802may make a request to compile program code associated with a library defined in an AI function application programming interface (API)806.
This request may ultimately rely on the output of a deep neural network configured to provide an inference response based on video and positioning data, for example. A run-time engine808, which may be compiled code of a runtime framework, may be further accessible to the AI application802. The AI application802may cause the run-time engine, for example, to request an inference at a particular time interval or in response to an event detected by the user interface of the application. As illustrated, run-time engine808may include a model quantizer808A and a model switcher808B. Model quantizer808A generally uses the floating point weights defined a priori for a given machine learning model deployed within architecture800to generate one or more quantized sets of weights having a reduced bit size and complexity relative to the floating point weights. Model switcher808B is generally configured to perform inferences using the floating point weights and one or more of the quantized sets of weights to generate quantized activation parameters for the machine learning model and determine whether to perform subsequent inferences using the floating point weights or the quantized weights. When caused to provide an inference response, the run-time engine may in turn send a signal to an operating system in an operating system (OS) space810, such as a Linux Kernel812, running on the SOC820. The operating system, in turn, may cause inferences to be performed on the CPU822, the DSP824, the GPU826, the NPU828, or some combination thereof. The CPU822may be accessed directly by the operating system, and other processing blocks may be accessed through a driver, such as a driver814,816, or818for, respectively, the DSP824, the GPU826, or the NPU828. In this example, the deep neural network may be configured to run on a combination of processing blocks, such as the CPU822, the DSP824, and the GPU826, or may be run on the NPU828.
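The model switcher role described above, routing each inference either to the floating point (high accuracy) path or to a selected quantized (high efficiency) path, can be sketched as follows. The class and method names are assumptions for illustration; they do not correspond to an actual API in the source.

```python
class ModelSwitcher:
    """Routes inferences between HA and HE model representations."""

    def __init__(self, ha_model, he_models):
        self.ha_model = ha_model     # floating point representation
        self.he_models = he_models   # e.g., {8: int8_model, 16: int16_model}
        self.mode = "HA"             # systems start in high accuracy mode
        self.selected_bits = None

    def enter_he_mode(self, bits):
        """Designate a quantized representation for subsequent inferences."""
        self.mode, self.selected_bits = "HE", bits

    def exit_he_mode(self):
        """Fall back to the floating point representation."""
        self.mode, self.selected_bits = "HA", None

    def infer(self, x):
        if self.mode == "HE":
            return self.he_models[self.selected_bits](x)
        return self.ha_model(x)
```

In the architecture of FIG. 8, the underlying execution could land on the CPU, DSP, GPU, or NPU via the operating system and drivers; this sketch models only the routing decision.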
Additional Considerations The preceding description is provided to enable any person skilled in the art to practice the various embodiments described herein. The examples discussed herein are not limiting of the scope, applicability, or embodiments set forth in the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. For example, changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. As used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. 
As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. Further, the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. The following claims are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims. 
Within a claim, reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. FURTHER EXAMPLES Further examples of the present invention are explained in the following paragraphs: Example 1 A method for adaptively executing machine learning models on a computing device, comprising: receiving weight information for a machine learning model to be executed on a computing device; reducing the received weight information into quantized weight information having a reduced bit size relative to the received weight information; performing first inferences using the machine learning model and the received weight information; performing second inferences using the machine learning model and the quantized weight information; comparing results of the first and second inferences; determining that results of the second inferences are within a threshold performance level of results of the first inferences; and based on determining that results of the second inferences are within a threshold performance level of results of the first inferences, performing one or more subsequent inferences using the machine learning model and the quantized weight information. 
Example 2 The method of Example 1, wherein: the received weight information comprises a floating point representation of weights in the machine learning model; and the quantized weight information comprises an integer approximation of the floating point representation of the weights in the machine learning model. Example 3 The method of any of Examples 1 or 2, wherein the quantized weight information comprises a plurality of weight sets, each weight set having a different bit size. Example 4 The method of Example 3, wherein: performing the second inference comprises performing an inference using the machine learning model and each of the plurality of sets, and determining that results of the second inference are within the threshold performance level of results of the first inference comprises: identifying a weight set of the plurality of weight sets with a result having a performance closest to the threshold performance level, and returning the result associated with the identified weight set as the result of the second inference. Example 5 The method of Example 4, wherein performing the one or more subsequent inferences using the machine learning model and the quantized weight information comprises performing the one or more subsequent inferences using the identified weight set of the plurality of weight sets. Example 6 The method of any of Examples 1 to 5, wherein the quantized weight information comprises first quantized weights having a predefined bit size. 
Example 7 The method of Example 6, further comprising: generating second quantized weights having a smaller bit size than the quantized weight information; performing a subset of the one or more subsequent inferences using the second quantized weights; determining that results of the subset of the one or more subsequent inferences using the second quantized weights are within the threshold performance level of results of the subset of the one or more subsequent inferences using the machine learning model and the quantized weight information; and based on the determining, performing additional inferences beyond the one or more subsequent inferences using the machine learning model and the second quantized weights. Example 8 The method of Example 6, further comprising: while performing the second inference using the machine learning model and the first quantized weights, generating second quantized weights from quantizing the received weight information, the second quantized weights having a larger bit size than the first quantized weights; determining that results of the second inference using the machine learning model and the quantized weight information are not within the threshold performance level of the results of the first inference for a threshold number of inferences; and performing additional inferences using the second quantized weights. Example 9 The method of any of Examples 1 to 8, wherein the performance level comprises an accuracy difference relative to the first inference and a size of overflow or underflow relative to supported range of values for each layer in the machine learning model, given a bit size of the quantized weight information. Example 10 The method of any of Examples 1 to 9, further comprising: adjusting the threshold performance level based on an amount of difference between a current input and a previous input for which inferences are to be performed using the machine learning model. 
Example 11 The method of any of Examples 1 to 10, further comprising: determining that a difference between a current input and a previous input for which inferences are to be performed using the machine learning model exceeds a threshold amount of change; and performing inferences on the current input and one or more additional inferences using the machine learning model and the received weight information. Example 12 The method of any of Examples 1 to 11, further comprising: performing inferences for a subset of the one or more subsequent inferences using the machine learning model and the received weight information; determining that results of the subset of the one or more subsequent inferences using the machine learning model and the quantized weight information are outside the threshold performance level relative to results of the subset of the one or more subsequent inferences using the machine learning model and the received weight information; and based on the determining, performing additional inferences using the machine learning model and the received weight information. Example 13 The method of any of Examples 1 to 12, further comprising: refining the quantized weight information based on results of the one or more subsequent inferences executed using the machine learning model and the quantized weight information, the refined quantized weight information comprising ranges of values to use in performing inferences using the machine learning model and the refined quantized weight information. Example 14 The method of any of Examples 1 to 13, wherein each inference of the second inferences is performed according to a periodicity defining a number of first inferences to be performed prior to performing one of the second inferences. Example 15 The method of any of Examples 1 to 14, wherein a subset of the one or more subsequent inferences are also performed using the received weight information. 
Example 16 The method of Example 15, wherein: the subset of the one or more subsequent inferences comprises a periodic sampling of the one or more subsequent inferences, and a periodicity of the periodic sampling is determined from a performance difference between results of the subset of the one or more subsequent inferences performed using the received weight information and results of the subset of the one or more subsequent inferences performed using the quantized weight information. Example 17 The method of any of Examples 1 to 16, further comprising: saving the quantized weight information prior to halting performance of inferences using the machine learning model; and resuming performance of inferences using the machine learning model and the quantized weight information without regenerating the quantized weight information. Example 18 The method of any of Examples 1 to 17, wherein: the quantized weight information comprises individual quantized weights for each layer in the machine learning model, and quantized weights for a respective layer in the machine learning model has a bit size independent of a bit size of quantized weights for other layers in the machine learning model. Example 19 The method of any of Examples 1 to 18, further comprising: performing inferences using the received weight information on a first set of processing cores in a multicore processor, and performing inferences using the quantized weight information on a second set of cores in the multicore processor. Example 20 The method of any of Examples 1 to 19, wherein the first inferences and second inferences are performed in parallel across different processing cores in a multicore processor. Example 21 The method of any of Examples 1 to 20, wherein: the first inferences are performed on a first type of processor; and the second inferences are performed on a second type of processor. 
Example 22 The method of any of Examples 1 to 21, wherein: the first inferences are performed on a first set of processing cores in a heterogeneous multicore processor; and the second inferences are performed on a second set of processing cores in the heterogeneous multicore processor. Example 23 A system, comprising: a processor; and a memory having instructions stored thereon which, when executed by the processor, performs an operation for adaptively executing machine learning models on a computing device, the operation comprising: receiving weight information for a machine learning model to be executed on a computing device; reducing the received weight information into quantized weight information having a reduced bit size relative to the received weight information; performing first inferences using the machine learning model and the received weight information; performing second inferences using the machine learning model and the quantized weight information; comparing results of the first and second inferences; determining that results of the second inferences are within a threshold performance level of results of the first inferences; and based on determining that results of the second inferences are within a threshold performance level of results of the first inferences, performing one or more subsequent inferences using the machine learning model and the quantized weight information. Example 24 The system of example 23, wherein: the received weight information comprises a floating point representation of weights in the machine learning model; and the quantized weight information comprises an integer approximation of the floating point representation of the weights in the machine learning model. Example 25 The system of any of Examples 23 or 24, wherein the quantized weight information comprises a plurality of weight sets, each weight set having a different bit size.
Example 26 The system of Example 25, wherein: performing the second inference comprises performing an inference using the machine learning model and each of the plurality of weight sets, and determining that results of the second inference are within the threshold performance level of results of the first inference comprises: identifying a weight set of the plurality of weight sets with a result having a performance closest to the threshold performance level, and returning the result associated with the identified weight set as the result of the second inference. Example 27 The system of Example 26, wherein performing the one or more subsequent inferences using the machine learning model and the quantized weight information comprises performing the one or more subsequent inferences using the identified weight set of the plurality of weight sets. Example 28 The system of any of Examples 23 to 27, wherein the quantized weight information comprises first quantized weights having a predefined bit size. Example 29 The system of Example 28, further comprising: generating second quantized weights having a smaller bit size than the quantized weight information; performing a subset of the one or more subsequent inferences using the second quantized weights; determining that results of the subset of the one or more subsequent inferences using the second quantized weights are within the threshold performance level of results of the subset of the one or more subsequent inferences using the machine learning model and the quantized weights; and based on the determining, performing additional inferences beyond the one or more subsequent inferences using the machine learning model and the second quantized weight information. 
Example 30 The system of Example 28, wherein the operation further comprises: while performing the second inference using the machine learning model and the first quantized weights, generating second quantized weights from quantizing the received weight information, the second quantized weights having a larger bit size than the first quantized weights; determining that results of the second inference using the machine learning model and the quantized weight information are not within the threshold performance level of the results of the first inference for a threshold number of inferences; and performing additional inferences using the second quantized weights. Example 31 The system of any of Examples 23 to 30, wherein the performance level comprises an accuracy difference relative to the first inference and a size of overflow or underflow relative to supported range of values for each layer in the machine learning model, given a bit size of the quantized weight information. Example 32 The system of any of Examples 23 to 31, further comprising: adjusting the threshold performance level based on an amount of difference between a current input and a previous input for which inferences are to be performed using the machine learning model. Example 33 The system of any of Examples 23 to 32, further comprising: determining that a difference between a current input and a previous input for which inferences are to be performed using the machine learning model exceeds a threshold amount of change; and performing inferences on the current input and one or more additional inferences using the machine learning model and the received weight information. 
Example 34 The system of any of Examples 23 to 33, wherein the operation further comprises: performing inferences for a subset of the one or more subsequent inferences using the machine learning model and the received weight information; determining that results of the subset of the one or more subsequent inferences using the machine learning model and the quantized weight information are outside the threshold performance level relative to results of the subset of the one or more subsequent inferences using the machine learning model and the received weight information; and based on the determining, performing additional inferences using the machine learning model and the received weight information. Example 35 The system of any of Examples 23 to 34, wherein the operation further comprises: refining the quantized weight information based on results of the one or more subsequent inferences executed using the machine learning model and the quantized weight information, the refined quantized weight information comprising ranges of values to use in performing inferences using the machine learning model and the refined quantized weight information. Example 36 The system of any of Examples 23 to 35, wherein each inference of the second inferences is performed according to a periodicity defining a number of first inferences to be performed prior to performing one of the second inferences. Example 37 The system of any of Examples 23 to 36, wherein a subset of the one or more subsequent inferences are also performed using the received weight information. 
Example 38 The system of Example 37, wherein: the subset of the one or more subsequent inferences comprises a periodic sampling of the one or more subsequent inferences, and a periodicity of the periodic sampling is determined from a performance difference between results of the subset of the one or more subsequent inferences performed using the received weight information and results of the subset of the one or more subsequent inferences performed using the quantized weight information. Example 39 The system of any of Examples 23 to 38, wherein the operation further comprises: saving the quantized weight information prior to halting performance of inferences using the machine learning model; and resuming performance of inferences using the machine learning model and the quantized weight information without regenerating the quantized weight information. Example 40 The system of any of Examples 23 to 39, wherein: the quantized weight information comprises individual quantized weights for each layer in the machine learning model, and quantized weights for a respective layer in the machine learning model has a bit size independent of a bit size of quantized weights for other layers in the machine learning model. Example 41 The system of any of Examples 23 to 40, wherein the operation further comprises: performing inferences using the received weight information on a first set of processing cores in a multicore processor, and performing inferences using the quantized weight information on a second set of cores in the multicore processor. Example 42 The system of any of Examples 23 to 41, wherein the first inferences and second inferences are performed in parallel across different processing cores in a multicore processor. Example 43 The system of any of Examples 23 to 42, wherein: the first inferences are performed on a first type of processor; and the second inferences are performed on a second type of processor. 
Example 44 The system of any of Examples 23 to 43, wherein: the first inferences are performed on a first set of processing cores in a heterogeneous multicore processor; and the second inferences are performed on a second set of processing cores in the heterogeneous multicore processor. Example 45 A system for adaptively executing machine learning models on a computing device, comprising: means for receiving weight information for a machine learning model to be executed on a computing device; means for reducing the received weight information into quantized weight information having a reduced bit size relative to the received weight information; means for performing first inferences using the machine learning model and the received weight information; means for performing second inferences using the machine learning model and the quantized weight information; means for comparing results of the first and second inferences; means for determining that results of the second inferences are within a threshold performance level of results of the first inferences; and means for, based on determining that results of the second inferences are within a threshold performance level of results of the first inferences, performing one or more subsequent inferences using the machine learning model and the quantized weight information.
Example 46 A computer-readable medium having instructions stored thereon which, when executed, perform an operation for adaptively executing machine learning models on a computing device, comprising: receiving weight information for a machine learning model to be executed on a computing device; reducing the received weight information into quantized weight information having a reduced bit size relative to the received weight information; performing first inferences using the machine learning model and the received weight information; performing second inferences using the machine learning model and the quantized weight information; comparing results of the first and second inferences; determining that results of the second inferences are within a threshold performance level of results of the first inferences; and based on determining that results of the second inferences are within a threshold performance level of results of the first inferences, performing one or more subsequent inferences using the machine learning model and the quantized weight information. Example 47 A computer program comprising instructions for performing a method according to any of the Examples 1 to 22.
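The adaptive quantization workflow recited in Examples 45 and 46 may be sketched, as a non-limiting illustration, in the following Python fragment. The function names, the symmetric 8-bit quantization scheme, the dot-product stand-in for a model forward pass, and all sample values are hypothetical assumptions introduced for illustration only, and form no part of the claimed subject matter:

```python
def quantize(weights, bits=8):
    # Reduce received weights to a smaller integer grid; this symmetric
    # uniform scheme and the 8-bit default are illustrative assumptions.
    scale = max(abs(w) for w in weights) or 1.0
    levels = 2 ** (bits - 1) - 1
    return [round(w / scale * levels) * scale / levels for w in weights]

def infer(weights, x):
    # Stand-in for a model forward pass: a single dot product.
    return sum(w * xi for w, xi in zip(weights, x))

def within_threshold(full, quant, threshold=0.05):
    # Compare results of the first (received-weight) and second
    # (quantized-weight) inferences against a relative threshold.
    return abs(full - quant) <= threshold * max(abs(full), 1e-9)

received = [0.91, -0.42, 0.07, 0.55]   # received weight information
quantized = quantize(received)         # reduced-bit-size weight information
sample = [1.0, 2.0, 3.0, 4.0]

first = infer(received, sample)        # first inference
second = infer(quantized, sample)      # second inference

# Subsequent inferences use the quantized weights only when the second
# inference is within the threshold performance level of the first.
active = quantized if within_threshold(first, second) else received
```

In this sketch, subsequent inferences fall back to the received weights whenever the quantized results drift outside the relative threshold, mirroring the behavior recited in Example 34.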
The drawings are not necessarily to scale and may be illustrated by phantom lines, diagrammatic representations and fragmentary views. In certain instances, details that are not necessary for an understanding of the embodiments or that render other details difficult to perceive may have been omitted. DETAILED DESCRIPTION At a high level, aspects of the present disclosure are directed to a system and method for determining a plurality of biological outcomes using a plurality of dimensions of biological extraction user data and artificial intelligence. In non-limiting embodiments described herein, artificial intelligence may refer to a machine learning process, as described in further detail below. In non-limiting embodiments, system may receive a plurality of dimensions of biological extraction, as defined below. In non-limiting embodiments, generating a dimensional history of a user may use a plurality of dimensions of biological extraction data, as described in further detail below. Dimensional history data may be used as an input to at least a first machine learning algorithm to train a model to determine a plurality of correlated biosketch measurements of a user. A second machine learning process may be trained with a variety of available resources to determine the accuracy of user-reported plurality of dimensions of biological extraction data. Dimensional history data may be input into a machine learning process including a model trained with this data to determine a biological outcome of a user. Referring now toFIG.1, an exemplary embodiment of a system100for determining biological outcomes using artificial intelligence is illustrated. System100includes at least a computing device104. Computing device104may include any computing device104as described in this disclosure, including without limitation a microcontroller, microprocessor, digital signal processor (DSP) and/or system on a chip (SoC) as described in this disclosure.
Computing device104may include, be included in, and/or communicate with a mobile device such as a mobile telephone or smartphone. Computing device104may include a single computing device104operating independently, or may include two or more computing devices104operating in concert, in parallel, sequentially or the like; two or more computing devices104may be included together in a single computing device104or in two or more computing devices104. Computing device104may interface or communicate with one or more additional devices as described below in further detail via a network interface device. Network interface device may be utilized for connecting computing device104to one or more of a variety of networks, and one or more devices. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof. Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices104, and any combinations thereof. A network may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software etc.) may be communicated to and/or from a computer and/or a computing device. Computing device104may include but is not limited to, for example, a computing device or cluster of computing devices104in a first location and a second computing device104or cluster of computing devices104in a second location.
Computing device104may include one or more computing devices104dedicated to data storage, security, distribution of traffic for load balancing, and the like. Computing device104may distribute one or more computing tasks as described below across a plurality of computing devices of computing device104, which may operate in parallel, in series, redundantly, or in any other manner used for distribution of tasks or memory between computing devices104. Computing device104may be implemented using a "shared nothing" architecture in which data is cached at the worker; in an embodiment, this may enable scalability of system100and/or computing device104. Still referring toFIG.1, computing device104may be designed and/or configured to perform any method, method step, or sequence of method steps in any embodiment described in this disclosure, in any order and with any degree of repetition. For instance, computing device104may be configured to perform a single step or sequence repeatedly until a desired or commanded outcome is achieved; repetition of a step or a sequence of steps may be performed iteratively and/or recursively using outputs of previous repetitions as inputs to subsequent repetitions, aggregating inputs and/or outputs of repetitions to produce an aggregate result, reduction or decrement of one or more variables such as global variables, and/or division of a larger processing task into a set of iteratively addressed smaller processing tasks. Computing device104may perform any step or sequence of steps as described in this disclosure in parallel, such as simultaneously and/or substantially simultaneously performing a step two or more times using two or more parallel threads, processor cores, or the like; division of tasks between parallel threads and/or processes may be performed according to any protocol suitable for division of tasks between iterations.
Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various ways in which steps, sequences of steps, processing tasks, and/or data may be subdivided, shared, or otherwise dealt with using iteration, recursion, and/or parallel processing. With continued reference toFIG.1, computing device104may be designed and configured to receive plurality of dimensions of biological extraction108. A “dimension of biological extraction,” as used in this disclosure may refer to a measurement corresponding to a category of biological extraction data including without limitation microbiome analysis, genetic analysis, epigenetic analysis, blood test, gut wall and food sensitivity analysis, and/or toxicity report, including for instance, and without limitation, as described in U.S. Nonprovisional application Ser. No. 16/530,329 filed on Aug. 2, 2019, and entitled “METHODS AND SYSTEMS FOR GENERATING COMPATIBLE SUBSTANCE INSTRUCTION SETS USING ARTIFICIAL INTELLIGENCE,” the entirety of which is incorporated herein by reference. As a non-limiting example, a plurality of dimensions of biological extraction108may refer to at least two of six categories of biological extraction, as described in further detail below. Still referring toFIG.1, at least a computing device104may generate, using the plurality of dimensions of biological extraction108data and a first machine learning model112, correlating dimensional history124of the user. First machine learning model112contains a plurality of correlated biosketch measurements116generated by a first machine learning process120. First machine learning process may include generating first machine learning model112as described below in more detail. 
A "dimensional history," as used in this description, is a body of data describing a relationship containing at least two elements of data, including at least an element of the plurality of dimensions of biological extraction108data and at least an output of a correlated biosketch measurement116. A "correlated biosketch measurement," as used in this disclosure, is a mathematical, heuristic, causative, correlated, proportional, and/or any other relationship between at least two elements of data of the plurality of dimensions of biological extraction108data. A correlated biosketch measurement116may be, without limitation, a matrix and/or vector, describing values, coefficients, variables, or the like, that describe at least a relationship between at least two or more elements of data of the plurality of dimensions of biological extraction108. In non-limiting illustrative examples, this may include two elements of data within the same category of data, such as two elements of genetic analysis, and/or two elements of data in disparate categories, such as genetic analysis and toxicity report. A first machine learning process120, as described herein, generates a plurality of correlated biosketch measurements. In non-limiting illustrative examples, correlated biosketch measurements116may describe a food sensitivity as a function of the presence and number of active cultures of a strain of bacteria in a user's gut. A machine learning model may be trained with at least a correlated biosketch measurement116and may be used as an input into a machine learning process to generate a dimensional history124of a user. A dimensional history124may be generated by a machine learning model containing at least a correlated biosketch measurement116using at least an element of a plurality of dimensions of biological extraction108data.
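As a non-limiting illustration of a correlated biosketch measurement116expressed as a proportional relationship between two elements of extraction data, the following Python sketch computes a Pearson correlation coefficient between a microbiome measurement and a food-sensitivity score. The choice of correlation statistic and all paired sample values are hypothetical assumptions:

```python
def pearson(xs, ys):
    # Pearson correlation between two dimensions of biological extraction
    # data, e.g. a gut-bacteria culture count and a food-sensitivity score.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired measurements for one user:
culture_counts = [120, 340, 560, 800, 1020]   # active cultures of a strain
sensitivity = [1.0, 2.1, 2.9, 4.2, 5.0]       # graded food-sensitivity score

# One entry of a correlated-biosketch-measurement matrix:
measurement = pearson(culture_counts, sensitivity)
```

A full correlated biosketch measurement116could then be a matrix of such coefficients, one per pair of extraction elements.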
In non-limiting illustrative examples, elements of data of a plurality of dimensions of biological extraction108data may be further narrowed into subsets by use of a classifier, as described below. Still referring toFIG.1, first machine learning model may use at least a supervised machine learning algorithm. Supervised machine learning algorithms, as defined herein, include algorithms that receive a training set relating a number of inputs to a number of outputs, and seek to find one or more mathematical relations relating inputs to outputs, where each of the one or more mathematical relations is optimal according to some criterion specified to the algorithm using some scoring function. For instance, a supervised learning algorithm may include an element of biological extraction108data as described above as inputs, dimensional history124as outputs, and a scoring function representing a desired form of relationship to be detected between inputs and outputs; scoring function may, for instance, seek to maximize the probability that a given input and/or combination of inputs is associated with a given output, and/or to minimize the probability that a given input is not associated with a given output. Scoring function may be expressed as a risk function representing an "expected loss" of an algorithm relating inputs to outputs, where loss is computed as an error function representing a degree to which a prediction generated by the relation is incorrect when compared to a given input-output pair provided in training data. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various possible variations of supervised machine learning algorithms that may be used to determine relation between inputs and outputs. Supervised machine learning process may include classification algorithms, defined as processes whereby at least a computing device104derives, from training data, a model for sorting inputs into categories or bins of data.
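The scoring function expressed as an "expected loss" above may be illustrated, without limitation, by the following Python sketch, which selects between two candidate input-output relations by minimizing a mean-squared error function over training pairs. The candidate relations and the training pairs are hypothetical assumptions:

```python
def empirical_risk(predict, pairs):
    # Expected loss over training pairs: mean squared error between the
    # relation's prediction and the recorded output.
    return sum((predict(x) - y) ** 2 for x, y in pairs) / len(pairs)

# Hypothetical input-output pairs (extraction element -> history value):
pairs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

candidates = {
    "slope_2": lambda x: 2.0 * x,
    "slope_3": lambda x: 3.0 * x,
}

# The scoring function selects the relation minimizing expected loss.
best = min(candidates, key=lambda name: empirical_risk(candidates[name], pairs))
```

Here the data are roughly linear with slope two, so the scoring function prefers the first candidate relation.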
Classification may be performed using, without limitation, linear classifiers such as without limitation logistic regression and/or naive Bayes classifiers, regression algorithms, nearest neighbor classifiers, support vector machines, decision trees, boosted trees, random forest classifiers, and/or neural network-based classifiers, such as supervised neural net algorithms. Supervised machine learning process may include, without limitation, machine learning processes as described in U.S. Nonprovisional application Ser. No. 16/520,835, filed on Jul. 3, 2019, and entitled “METHODS AND SYSTEMS FOR ACHIEVING VIBRANT CONSTITUTION BASED ON USER INPUTS,” the entirety of which is incorporated herein by reference. Still referring toFIG.1, computing device104may select at least a first machine learning process120, as described before, with the at least a plurality of dimensions of biological extraction108data as an input to generate a correlated biosketch measurements116as an output. Correlated biosketch measurements116may be used as training data to train a model for generating a dimensional history124. A machine learning model for generating a dimensional history124output may be trained with at least a first correlated biosketch measurements116input. Machine learning model may be trained by training data128retrieved from a user database132, as described in further detail below. First machine learning model may accomplish this by using training data128containing a plurality of dimensions of biological extraction108and a correlated biosketch measurement116as it relates to other users and/or sets of data, such as by use of a classifier, as described in further detail below. Continuing in reference toFIG.1, “training data,” as used herein, is data containing correlations that a first machine learning model may use to model relationships between two or more categories of data elements. 
For instance, and without limitation, training data128may include a plurality of data entries, each entry representing a set of data elements that were recorded, received, and/or generated together; data elements may be correlated by shared existence in a given data entry, by proximity in a given data entry, or the like. Multiple data entries in training data128may evince one or more trends in correlations between categories of data elements; for instance, and without limitation, a higher value of a first data element belonging to a first category of data element may tend to correlate to a higher value of a second data element belonging to a second category of data element, indicating a possible proportional or other mathematical relationship linking values belonging to the two categories. Multiple categories of data elements may be related in training data128according to various correlations; correlations may indicate causative and/or predictive links between categories of data elements, which may be modeled as relationships such as mathematical relationships by machine learning processes as described in further detail below. Training data128may be formatted and/or organized by categories of data elements, for instance by associating data elements with one or more descriptors corresponding to categories of data elements. As a non-limiting example, training data128may include data entered in standardized forms by persons or processes, such that entry of a given data element in a given field in a form may be mapped to one or more descriptors of categories. 
Elements in training data128may be linked to descriptors of categories by tags, tokens, or other data elements; for instance, and without limitation, training data128may be provided in fixed-length formats, formats linking positions of data to categories such as comma-separated value (CSV) formats and/or self-describing formats such as extensible markup language (XML), enabling processes or devices to detect categories of data. Alternatively or additionally, training data128may include one or more elements that are not categorized; that is, training data128may not be formatted or contain descriptors for some elements of data. Machine learning algorithms and/or other processes may sort training data128according to one or more categorizations using, for instance, natural language processing algorithms, tokenization, detection of correlated values in raw data and the like; categories may be generated using correlation and/or other processing algorithms. As a non-limiting example, in a corpus of text, phrases making up a number “n” of compound words, such as nouns modified by other nouns, may be identified according to a statistically significant prevalence of n-grams containing such words in a particular order; such an n-gram may be categorized as an element of language such as a “word” to be tracked similarly to single words, generating a new category as a result of statistical analysis. Similarly, in a data entry including some textual data, a person's name may be identified by reference to a list, dictionary, or other compendium of terms, permitting ad-hoc categorization by machine learning algorithms, and/or automated association of data in the data entry with descriptors or into a given format. The ability to categorize data entries automatedly may enable the same training data128to be made applicable for two or more distinct machine learning algorithms as described in further detail below. 
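The descriptor-based categorization above may be illustrated, without limitation, with a comma-separated value format in which each field position maps to a category descriptor, enabling the same training data128to be reused by distinct algorithms. The three-column form and the analyte names below are hypothetical assumptions:

```python
import csv
import io

# Hypothetical standardized-form rows; field position maps to a descriptor.
raw = """glucose,92,mg/dL
triglycerides,140,mg/dL
glucose,101,mg/dL"""

training_rows = []
for analyte, value, unit in csv.reader(io.StringIO(raw)):
    # Tag each element with its category descriptor so later machine
    # learning processes can sort entries by category.
    training_rows.append({"category": analyte, "value": float(value), "unit": unit})

# Group values by category descriptor, as a sorting process might.
by_category = {}
for row in training_rows:
    by_category.setdefault(row["category"], []).append(row["value"])
```

The same tagging could be driven by XML attributes or fixed-length fields rather than field position, as noted above.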
Training data128used by computing device104may correlate any input data as described in this disclosure to any output data as described in this disclosure. As a non-limiting illustrative example, at least an element of biological extraction108data and biosketch measurements may be used with the invention. Referring now toFIG.1, plurality of dimensions of biological extraction108data of a user may include microbiome data136including microbiome analysis that comes from a user stool sample. Analysis of user stool sample may come from a database, a medical professional, and/or user input. As described here, "microbiome data," is data regarding the presence of, identity of, and measure of populations of microscopic organisms living in or on a user, including DNA and RNA, enzymes, peptides, biomarkers, toxins, and biologics originating from bacteria, viruses, fungi, protozoa, parasites, spores, eggs, or any other microbiological organisms that may be present in the body as resident flora, transient flora, pathogens, including opportunistic pathogens, or any other category of microbe. Various data that are represented by a microbiome analysis will be understood by those skilled in the art, after reviewing the disclosure in its entirety. Continuing in referring toFIG.1, plurality of dimensions of biological extraction108data of a user may include gut wall and food sensitivity analysis140that comes from at least a first user questionnaire analysis.
As described here, "gut wall and food sensitivity," is data relating to a user's food intolerances, allergies, or any other sensitivities and/or adverse reaction to a food, supplement, beverage, or the like, including relations to gut wall strength and/or integrity of gastrointestinal epithelium, including condition of intestinal villi, adsorption kinetics of macro and micronutrients, immunological function, and any other data referring to the gastrointestinal tract, further including for instance, and without limitation, data as described in U.S. Nonprovisional application Ser. No. 16/530,329. In non-limiting examples, gut wall and food sensitivity analysis and associated data may originate from data generated by tests as referenced above and/or data from a first user questionnaire, database, medical professional, and/or subset of users, without limitation. It will be understood by those skilled in the art, after reviewing the disclosure in its entirety, the various data that are represented by a gut wall and food sensitivity analysis. Continuing in referring toFIG.1, plurality of dimensions of biological extraction108data of a user may include a genetic analysis144obtained from at least a user physical sample. As described here, "physical sample," is a biological user sample including blood, urine, feces, hair, saliva, skin, interstitial fluid, biopsy, or any other physical biological sample from which genetic information can be obtained.
As described here, “genetic analysis,” refers to the analysis of any genetic material including nucleic acids such as DNA and RNA, which may correspond to genetic elements of a user including coding regions (genes), non-coding regions such as promoters, enhancers, transposons, genome-integrated viral DNA, and the presence of structural RNAs, such as tRNAs, miRNAs and other RNA types, and analysis thereof; analysis may refer to detecting the presence of, enumeration of, and/or determining the sequence of a nucleic acid and/or stretch of nucleic acid. Genetic analysis144data may be stored and/or retrieved from a database, as described below. It will be understood by those skilled in the art, after reviewing the disclosure in its entirety, the various data that are represented by a genetic analysis. Continuing in referring toFIG.1, plurality of dimensions of biological extraction108data of a user includes an epigenetic analysis148; epigenetic analysis148may be obtained from at least a genetic analysis. As described here, “epigenetic analysis,” is an analysis of genetic data including any mathematical, causative, correlated, proportional, heuristic, and/or any other relationship regarding the sequence of a nucleic acid and/or stretch of nucleic acid. Epigenetic analysis148data, as used herein, without limitation, may refer to identification and/or enumeration of single nucleotide polymorphisms (SNPs) in one or more genes and/or non-coding regions, relative numbers of expression of genes and/or non-coding regions, the identification and/or enumeration of the presence of mutations, including germ-line mutations and somatic mutations, and any other data that can be used for genotyping, inferring, calculating, determining, and/or any other analysis from at least an element of a genetic analysis. It will be understood by those skilled in the art, after reviewing the disclosure in its entirety, the various data that are represented by an epigenetic analysis. 
Continuing in referring toFIG.1, receiving plurality of dimensions of biological extraction108data of a user may include receiving a blood test152from at least a user sample of blood. A “blood test,” as described herein, is a biochemical and/or clinical test administered with user blood sample as a material. Biological extraction data originating from a blood test152and/or a blood test152analysis may include biochemical and/or clinical chemistry data including without limitation, quantitative and/or qualitative information on the presence of enzyme content, such as liver enzymes including ALT and AST, blood proteins such as albumin, creatine kinase, hemoglobin, ferritin, metabolites such as glucose, triglycerides, LDL and HDL cholesterol, biomarkers such as for cancer, neurodegenerative disease, or diabetes, hormones such as insulin, cortisol, testosterone, estrogen, and progesterone, red blood cell viability and count, white blood cell viability and count, or any other data from a blood sample. It will be understood by those skilled in the art, after reviewing the disclosure in its entirety, the various data that are represented by a blood test152analysis. Continuing in referring toFIG.1, receiving plurality of dimensions of biological extraction108data of a user may include receiving a toxicity report156. A “toxicity report,” as described herein is at least an element of data describing the use of current medications, prescribed medications that are not currently taken, recreational drugs, tobacco products, alcohol, caffeine intake, topicals, supplements, drug allergies, hypersensitivities, immunological disorders, and/or any toxic or adverse effects of combinations of these and/or any other biologics, medications, supplements, beverages, foods, or any other chemical with a biological effect. 
User toxicity report156data may come from a variety of sources, including without limitation, a user questionnaire, medical professional input, or any other advocate on behalf of a user such as a caretaker. It will be understood by those skilled in the art, after reviewing the disclosure in its entirety, the various data that are represented by a toxicity report156. Now referring toFIG.2, the plurality of dimensions of biological extraction108data may be stored and/or retrieved by a computing device designed and configured to store and/or retrieve data from a user database132. User database132may refer to a "database" in which at least a computing device104may, as a non-limiting example, store and/or retrieve data from various tables as described below. Determinations by a machine learning process may also be stored and/or retrieved from the user database132, for instance in non-limiting examples a correlated biosketch measurement116. As a non-limiting example, user database may organize data according to one or more user database tables. One or more database tables may be linked to one another by, for instance in a non-limiting example, common column values. For instance, a common column between two tables of database may include an identifier of a submission, such as a form entry, textual submission, research paper, or the like, for instance as defined below; as a result, a query may be able to retrieve all rows from any table pertaining to a given submission or set thereof. Other columns may include any other category usable for organization or subdivision of expert data, including types of expert data, names and/or identifiers of experts submitting the data, times of submission, or the like; persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various ways in which data from one or more tables may be linked and/or related to data in one or more other tables.
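The linkage of database tables by common column values may be illustrated, without limitation, by the following SQLite sketch, in which a single query joins two tables on a shared submission identifier and retrieves all rows pertaining to one submission. The table layouts, column names, and row values are hypothetical assumptions:

```python
import sqlite3

# Illustrative sketch of user database 132: two tables linked by a common
# submission_id column, so one query retrieves rows for one submission.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE blood_test (submission_id INTEGER, analyte TEXT, value REAL);
    CREATE TABLE toxicity_report (submission_id INTEGER, substance TEXT);
    INSERT INTO blood_test VALUES (1, 'glucose', 92.0), (2, 'glucose', 101.0);
    INSERT INTO toxicity_report VALUES (1, 'caffeine'), (2, 'alcohol');
""")

# The common column lets a query pull linked rows across tables.
rows = db.execute("""
    SELECT b.analyte, b.value, t.substance
    FROM blood_test AS b
    JOIN toxicity_report AS t ON b.submission_id = t.submission_id
    WHERE b.submission_id = 1
""").fetchall()
```

The same join pattern extends to any of the tables enumerated above (microbiome table200, toxicity report table224, and so on) that share a linking column.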
Still referring toFIG.2, in a non-limiting embodiment, one or more user database tables of a user database132may include, as a non-limiting example, a microbiome table200, which may include the identity of microbes present in a user sample for use in, without limitation, predicting and/or calculating a correlated biosketch measurement116as it relates to metabolism of a user and/or correlating a dimension of biological extraction data, entries indicating degrees of relevance to and/or efficacy in predicting metabolism of a user, and/or other elements of data computing device and/or system may use to determine usefulness and/or relevance of a plurality of dimensions of biological extraction108data in determining metabolism as described in this disclosure. One or more tables may include, without limitation, a gut wall and food sensitivity table204, which may correlate biological extraction data and/or combinations thereof to one or more measures of food intolerances; gut wall and food sensitivity table may contain a plurality of entries associating at least an element of a dimension of biological extraction data with gut wall and food sensitivity. One or more tables may include, without limitation, a genetic analysis table212, which may contain one or more inputs identifying one or more categories of data, for instance the DNA sequence of a metabolic gene. One or more tables may include, without limitation, an epigenetic analysis table216, which may contain one or more inputs identifying one or more categories of data, for instance the genotypic differences among a subset of users. One or more tables may include, without limitation, a blood test table220which may contain one or more inputs identifying one or more categories of data, for instance white blood cell viability and count. 
One or more tables may include, without limitation, a toxicity report table224, which may contain one or more inputs identifying one or more categories of data, for instance user consumption of alcohol over time. One or more tables may include, without limitation, a cohort category table228which may contain one or more inputs identifying one or more categories of data, for instance microbiome data, gut wall and food sensitivity data, genetic data, epigenetic data, blood test152data, and/or toxicity report data, with regard to which users having matching or similar data may be expected to have similar dimensional history124and/or biological outcome as a result of biosketch measurements and/or other biological extraction data. One or more tables may include, without limitation, a heuristic table232, which may include one or more inputs describing potential mathematical relationships between at least an element of a plurality of dimensions of biological extraction108data, dimensional history124, and/or biological outcome, as described in further detail below. One or more tables may include, without limitation, a classification table236, which may include one or more inputs describing potential subsets of users and/or user data between at least an element of a dimension of biological extraction data, dimensional history124, and/or biological outcome, by using a classifier, as described in further detail below. Continuing in referring toFIG.2, in a non-limiting embodiment, one or more user database tables of a user database132may include, as a non-limiting example, a classification table236, which may include data that identifies a set of users having one or more features in common, based upon at least an element of a plurality of dimensions of biological extraction108data. Referring again toFIG.1, a set of users such as a set in classification table236may be identified and/or populated therein by a classifier and/or classification algorithm. 
A "classifier," as used in this disclosure is a machine-learning model, such as a mathematical model, neural net, or program generated by a machine learning algorithm known as a "classification algorithm," as described in further detail below, that sorts inputs into categories or bins of data, outputting the categories or bins of data and/or labels associated therewith. User classifier may be configured to output identifiers of a bin and/or set of users identified as similar using classification algorithm160, where an "identifier" is a datum that labels or otherwise identifies a user set; that is, a label identifying a set of users that have sets of user data, such as without limitation biological extractions, that are clustered together, found to be close under a distance metric as described below, or the like. A user set may be a collection of users having closely related user data regarding one or more categories for classification as described above. User classifier may include a classifier configured to input user data and output identifiers of user sets. Further referring toFIG.1, computing device and/or another device may generate user classifier using classification algorithm160, defined as a process whereby a computing device derives a classifier from user classification training data128. Classification algorithm may be trained by computing device and/or one or more other devices in or communicating with system using training data128containing a plurality of sets of data pertaining to a plurality of users.
Classification may be performed using, without limitation, linear classifiers such as without limitation logistic regression and/or naive Bayes classifiers, nearest neighbor classifiers such as k-nearest neighbors classifiers, support vector machines, least squares support vector machines, Fisher's linear discriminant, quadratic classifiers, decision trees, boosted trees, random forest classifiers, learning vector quantization, and/or neural network-based classifiers. Continuing in reference toFIG.1, a classifier may indicate a subset of users mutually similar in at least one or more elements of data including a dimension of biological extraction data, dimensional history124data, biological outcome data, and/or any other available data, to match a user to a dimensional history124and/or biological outcome. Matching a user to a dimensional history124and/or biological outcome via a classifier may correspond to identifying any correlated biosketch measurements116, as described in further detail below. A classifier may be an input to a machine learning process to calculate, modify, or otherwise generate dimensional history124, and/or biological outcome information for a user. Classifiers generated from a classification algorithm160may be stored and/or retrieved in a user database132, such as a cohort category table228, for use by machine learning process124, as described herein, including for instance, and without limitation, as described in U.S. Nonprovisional application Ser. No. 16/865,740, filed on May 4, 2020, and entitled “METHODS AND SYSTEMS FOR SYSTEM FOR NUTRITIONAL RECOMMENDATION140USING ARTIFICIAL INTELLIGENCE ANALYSIS FOR IMMUNE IMPACTS,” the entirety of which is incorporated herein by reference. Continuing in reference toFIG.1, inputs and outputs of a machine learning model such as first machine-learning model112may be selected based on a user classifier, as described above.
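As a concrete sketch of one listed option, a k-nearest-neighbors classifier over numeric user-data vectors can assign a new user the identifier of the user set whose members are closest under a Euclidean distance metric. The two-feature encoding and the user sets below are hypothetical illustrations, not the disclosure's actual data.

```python
import math
from collections import Counter

def euclidean(a, b):
    """Distance metric over numeric user-data vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(training_data, vector, k=3):
    """Return the user-set identifier held by the majority of the k nearest
    training entries. training_data is a list of (vector, set_id) pairs."""
    nearest = sorted(training_data, key=lambda entry: euclidean(entry[0], vector))[:k]
    votes = Counter(set_id for _, set_id in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical training data: (microbiome score, blood marker) -> user set.
training = [
    ((0.1, 0.2), "set_A"), ((0.2, 0.1), "set_A"), ((0.15, 0.25), "set_A"),
    ((0.9, 0.8), "set_B"), ((0.8, 0.9), "set_B"), ((0.85, 0.75), "set_B"),
]
label = knn_classify(training, (0.12, 0.18))
print(label)  # set_A
```

Any of the other listed algorithms (logistic regression, decision trees, and so on) would slot into the same role: map a user-data vector to a user-set identifier.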
A classifier may be used to differentiate an explicit category of data as an input to a machine learning model based on a subset of useful data. Alternatively or additionally, a classifier may be assigned, as described before, to an explicit category or subset of data of an output of a machine learning model. For instance, different subsets of a first machine learning model output may contain data useful to subsequent machine learning models. A classifier for input and/or output of a machine learning model may be stored and/or retrieved by a computing device, without limitation, from a database. Referring toFIG.1, first machine learning model112may generate a dimensional history124. First machine learning model112may input a plurality of dimensions of biological extraction108data and may generate a dimensional history124of a user. Alternatively or additionally, a dimensional history124may refer to an output from a machine learning process using a first machine learning model112summarizing values calculated from a plurality of correlated biosketch measurements116. In non-limiting illustrative examples, a correlated biosketch measurement116may be a function, or vector, that describes a mathematical relationship between a user's propensity to harbor a strain of bacteria in their microbiome and the metabolism of a pharmaceutical compound described in the user's toxicity report156. In further non-limiting illustrative examples, without limitation, this may be expressed, for instance, as a function describing the relationship for any values of microbiome data and/or any values of toxicity report data. In non-limiting illustrative examples, a dimensional history124output of a machine learning model may describe a corrected, or weighed, therapeutic dose range value, without limitation, in an updated toxicity report156reflecting the correlated biosketch measurements116and blood test data of the therapeutic metabolite.
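One way to read the "function describing the relationship" above is as a correction factor applied to a toxicity-report dose range. The linear form, the slope, and the numbers below are assumptions chosen purely for illustration, not values from the disclosure.

```python
def correlated_measurement(microbiome_level, slope=0.5):
    """Hypothetical correlated biosketch measurement: map the abundance of a
    drug-metabolizing strain to a dose correction factor (assumed linear)."""
    return 1.0 + slope * microbiome_level

def corrected_dose_range(dose_low, dose_high, microbiome_level):
    """Weigh a toxicity-report therapeutic dose range by the correlated
    measurement, yielding an updated (dimensional-history-style) range."""
    factor = correlated_measurement(microbiome_level)
    return dose_low * factor, dose_high * factor

low, high = corrected_dose_range(10.0, 20.0, 0.4)
print(low, high)  # 12.0 24.0
```

A user with no detectable abundance of the strain keeps the original range; higher abundance scales the range upward under this assumed relationship.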
In non-limiting examples, a dimensional history124of a user may include a toxicity report156that is corrected, altered, or modified in any way by data describing, for instance without limitation, a relationship between microbiome data136and corresponding data of a toxicity report156that infers new pharmacokinetics previously unknown from the original data. A first machine learning process120generating a correlated biosketch measurement116may use an input of data that is stored and/or retrieved from a user database132and/or may use data that pertains to a first user and/or any number of other users based upon a classifier, as described above. Continuing to refer toFIG.1, calculating a correlated biosketch measurement116using a first machine learning process120includes training a first machine learning process120using training data128including data entries corresponding to users in a user set. In non-limiting examples, a machine learning process may be trained with data pertaining to a classifier generated by a classification algorithm160to generate a machine learning model, as previously described, to narrow training set data to a useful cohort of similar users. Machine learning models128generated from classifiers may be used with a first machine learning process120and a plurality of dimensions of biological extraction108input data to output at least a correlated biosketch measurement116. In non-limiting illustrative examples, a classifier describing a subset of users based on a bacterial strain identified in the microbiome data136may be used with a first machine learning process120and a plurality of dimensions of biological extraction data108as an input to generate an output of a correlated biosketch measurement116that describes a relationship between a first user's data and a subset of other users.
In further non-limiting illustrative examples, such a correlated biosketch measurement116may further increase the accuracy of a dimensional history124and/or biological outcome weighed with such a correlated biosketch measurement116. In non-limiting illustrative examples, a first correlated biosketch measurement116may be used as an input to train a machine learning process to output a second correlated biosketch measurement116. At least a correlated biosketch measurement116and at least a classifier may be stored and/or retrieved from a database, as described before. Computing device104is configured to output, as a function of a plurality of dimensions of biological extraction data and a first machine learning model, a dimensional history of the user. A dimensional history of the user includes any of the dimensional histories as described in more detail above. Continuing to refer toFIG.1, computing device is configured to determine a biological outcome168of a user; determination includes using a second machine learning process168and at least a dimensional history124of a user. Computing device104determines, using a second machine-learning process and a dimensional history of a user, at least a biological outcome associated with a user. A “biological outcome,” as described herein, is a potential diagnosis, prognosis, explanation, course-of-action, or similar conclusion. A biological outcome may be determined from at least a first dimensional history124and at least a second element of data, wherein the second element of data is biological extraction data, database data, a classifier, and/or a second dimensional history124. In non-limiting examples, biological extraction data, database data, a classifier, and/or a second dimensional history124may be available to a computing device via, without limitation, online repositories, user databases, research databases, medical databases, and the like, as previously described.
In non-limiting examples, a second machine learning process168may use a machine learning model trained with training data from, without limitation, at least a classifier generated by a classification algorithm160to narrow down dimensional histories into more selective, useful subsets of data. Additionally or alternatively, a second machine learning process168may use a machine learning model trained with training data from, without limitation, a database. A second machine learning process168may be a supervised machine learning process, and/or any machine learning process suitable for a first machine learning process120, as previously described. In non-limiting illustrative examples, a biological outcome168may be the output of a second machine learning process168, for instance and without limitation, from an input of a dimensional history124and an input of an element of data from a database. In non-limiting illustrative examples, a biological outcome168may be an output that a user has a propensity for blood clots based on the dimensional history124describing the genetic analysis144, epigenetic analysis148, blood test152data, and toxicity report156data, wherein the biological outcome168output includes values of probability of a variety of diagnoses involving blood clotting using a published criterion retrieved from a database. Biological outcome168outputs may be stored and/or retrieved from a database, as previously described. A first biological outcome168may be used as an input for a machine learning process, as previously described, to generate a second biological outcome168. A first biological outcome168may be used as training data for a machine learning process, as previously described, to generate a machine learning model. Continuing to refer toFIG.1, determining a biological outcome168of a plurality of biological outcomes168of a user includes determining at least a long-term effect172.
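A minimal sketch of such a second process is a logistic model mapping dimensional-history features to a probability of a clotting-related outcome. The weights, bias, and feature encoding below are hypothetical placeholders, not a published clinical criterion.

```python
import math

def outcome_probability(features, weights, bias=0.0):
    """Second-process sketch: logistic model mapping dimensional-history
    features (e.g. genetic, epigenetic, blood-test values) to a probability
    of a biological outcome such as a clotting diagnosis."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for (genetic risk, epigenetic marker, blood marker).
weights = [1.5, 0.8, 2.0]
p = outcome_probability([0.9, 0.4, 0.7], weights, bias=-2.0)
print(round(p, 3))
```

Training such a model against labeled cohort data would set the weights; here they are fixed by hand only to show the input/output shape of the process.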
A “long-term effect,” as described herein, is the element of a biological outcome168that is a chronic, addressable or non-addressable, pathology, ailment, underlying effect, disorder, diagnosis, pattern, or any other biological outcome168that is calculated or otherwise determined from a dimensional history124of a user. In non-limiting illustrative examples, long-term effect172is the component output of a biological outcome168that may refer to diabetes, cancer, lacking a probiotic, or any other chronic biological outcome168. In further non-limiting illustrative examples, a biological outcome168may include a long-term effect172output that describes the probability of a prognosis of deep vein thrombosis (DVT), heart attack, and/or stroke, based on the input of a dimensional history124of a user and the input of published criteria, such as the Wells' Criterion for determining DVT. In non-limiting illustrative embodiments, a machine learning process may be trained with training data128from a database and/or dimensional history124, for instance without limitation, the Wells' Criterion for determining DVT, to generate a model for determining at least a long-term effect172of a biological outcome168. In further non-limiting illustrative embodiments, a second machine learning process168outputting a biological outcome168may use an input of a dimensional history124and output an acute effect176describing the probability of a variety of potential diagnoses from that data, or the likelihood that a person develops a disease in their lifetime. In further non-limiting examples, an acute effect176of a biological outcome168may be weighed for the presence of false-positives and false-negatives based upon correlated biosketch measurements116and/or classifiers to subsets of users that may share a suspected diagnosis or have confirmed a diagnosis. In further non-limiting examples, an acute effect176of a biological outcome168may describe no diagnosis or plan-of-action to be taken.
This data may be stored and/or retrieved, without limitation, from a user database128or a source available to a computing device104, as previously described. In non-limiting examples, an acute effect176of a biological outcome168may include an output predicting the likelihood of a disease, ailment, disorder, impairment, or the like, from the plurality of dimensions of biological extraction108, and data available from online sources, and other databases128. In further non-limiting illustrative examples, a biological outcome168may be output by a first machine learning model that is trained on data available from a database128, user-reported data, and/or other sources of data available to computing device104, to determine a probability of a long-term effect172and refine the accuracy of the determination. Continuing to refer toFIG.1, determining the at least a biological outcome168may include determining at least an acute effect176from at least a long-term effect172. An “acute effect,” as described herein, is a current plan devised to address at least a long-term effect172of a biological outcome168. Acute effect176may be generated from a third machine learning process178with a long-term effect172and a second element of data, including a plurality of dimensions of biological extraction and/or data retrieved from a database.
In non-limiting illustrative examples, a third machine learning process may take an input of a long-term effect describing the probability of stroke in a user and a list of preventative measures for strokes retrieved from a database, and output an acute effect176that describes potential immediate triggers for such strokes, and/or identifies such triggers as present in a user based on a plurality of dimensions of biological extractions; a user, another process and/or system, and/or a medical professional may use this to determine an appropriate course of action for a user matching the plurality of dimensions of biological extraction data from which the long-term effect originated. In further non-limiting illustrative examples, a third machine learning process may take an input of a long-term effect172describing the possibility of a user's lactose intolerance and an element of a plurality of dimensions of biological extraction data that describes a user's microbiome to output an acute effect176describing a probiotic that is absent from a user's microbiome, which may alleviate lactose intolerance. In non-limiting illustrative examples, a machine learning process may be trained using training data, which may be selected, without limitation, from training data pertaining to user and/or medical data matched to user using a classifier, a second long-term effect172, a second biological outcome168, and/or any data available to a computing device, as described above, to output a machine learning model for generating an acute effect176. In non-limiting illustrative examples, an acute effect176of a biological outcome168may be an output of a recommended dietary restriction and/or dietary supplementation to resolve a long-term effect172based on a model trained with training data identified by a classifier describing a subset of users lacking the long-term effect172.
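The third process above can be caricatured as a mapping from a long-term effect plus an extraction finding to an immediate course of action, with a default of no deviation. The rule table below is an invented placeholder standing in for a trained model, not clinical guidance from the disclosure.

```python
# Hypothetical rule table: (long-term effect, extraction finding) -> acute
# effect (immediate course of action). Placeholder content only.
ACUTE_RULES = {
    ("lactose_intolerance", "missing_probiotic"): "supplement absent probiotic strain",
    ("stroke_risk", "trigger_present"): "flag immediate trigger for review",
}

def acute_effect(long_term_effect, extraction_finding):
    """Third-process sketch: derive an acute effect from a long-term effect
    and an element of biological extraction data; default to no deviation."""
    return ACUTE_RULES.get(
        (long_term_effect, extraction_finding),
        "no current deviation from user's habits",
    )

print(acute_effect("lactose_intolerance", "missing_probiotic"))
```

A trained third machine learning process would replace the lookup with a learned mapping, but the input/output contract is the same.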
In further non-limiting examples, an acute effect176of a biological outcome168may describe no current deviation from a user's habits, lifestyle, or the like, regarding the long-term effect172. Referring now toFIG.3, an exemplary embodiment of a method300of determining a plurality of biological outcomes168using a plurality of dimensions of biological extraction108user data and artificial intelligence is illustrated. At step305, method300includes receiving, by the at least a computing device104, a plurality of dimensions of biological extraction data108, including at least a microbiome data136, at least a gut wall and food sensitivity analysis144, at least a genetic analysis144, at least an epigenetic analysis148, at least a blood test152, and at least a toxicity report156; this may be implemented, without limitation, as described above in reference toFIGS.1-2. At step310, method300includes generating, by the at least a computing device, using a first machine learning process, a plurality of biosketch measurements, wherein the first machine learning process is trained as a function of training data to output a dimensional history of a user as a function of biological extraction data. At step315, computing device104outputs, as a function of a plurality of dimensions of biological extraction data and a first machine-learning process, a dimensional history of a user. This may be performed utilizing any of the methodologies as described above in more detail in reference toFIG.1. At step320, method300includes determining, by the at least a computing device, using a second machine learning process168and at least a dimensional history124, at least a biological outcome associated with a user; this may be implemented, without limitation, as described above in reference toFIGS.1-2.
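The steps of method300can be sketched as a chain of functions. Each stand-in below is a trivial placeholder for the corresponding machine-learning process; the data values and the averaging/thresholding logic are assumptions made only to show the data flow from step305through step320.

```python
def receive_extraction():                       # step 305
    """Stand-in for receiving the plurality of dimensions of extraction data."""
    return {"microbiome": 0.4, "genetic": 0.9, "blood_test": 0.7}

def first_process(extraction):                  # steps 310-315
    """Placeholder first machine-learning process: summarize the dimensions
    into a dimensional history (here, a simple average)."""
    return {"summary": sum(extraction.values()) / len(extraction)}

def second_process(history):                    # step 320
    """Placeholder second process: threshold the history into an outcome."""
    return "elevated" if history["summary"] > 0.5 else "baseline"

extraction = receive_extraction()
history = first_process(extraction)
outcome = second_process(history)
print(outcome)  # elevated
```

Each placeholder would be replaced by the trained processes described in reference toFIGS.1-2; only the step ordering is taken from the method.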
It will be understood by those skilled in the art, after reviewing the disclosure in its entirety, that there are various ways data may be input to a computing device104and various ways outputs may be displayed by a computing device104to a user for all steps described above. It is to be noted that any one or more of the aspects and embodiments described herein may be conveniently implemented using one or more machines (e.g., one or more computing devices that are utilized as a user computing device for an electronic document, one or more server devices, such as a document server, etc.) programmed according to the teachings of the present specification, as will be apparent to those of ordinary skill in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those of ordinary skill in the software art. Aspects and implementations discussed above employing software and/or software modules may also include appropriate hardware for assisting in the implementation of the machine executable instructions of the software and/or software module. Such software may be a computer program product that employs a machine-readable storage medium. A machine-readable storage medium may be any medium that is capable of storing and/or encoding a sequence of instructions for execution by a machine (e.g., a computing device) and that causes the machine to perform any one of the methodologies and/or embodiments described herein. Examples of a machine-readable storage medium include, but are not limited to, a magnetic disk, an optical disc (e.g., CD, CD-R, DVD, DVD-R, etc.), a magneto-optical disk, a read-only memory “ROM” device, a random access memory “RAM” device, a magnetic card, an optical card, a solid-state memory device, an EPROM, an EEPROM, and any combinations thereof.
A machine-readable medium, as used herein, is intended to include a single medium as well as a collection of physically separate media, such as, for example, a collection of compact discs or one or more hard disk drives in combination with a computer memory. As used herein, a machine-readable storage medium does not include transitory forms of signal transmission. Such software may also include information (e.g., data) carried as a data signal on a data carrier, such as a carrier wave. For example, machine-executable information may be included as a data-carrying signal embodied in a data carrier in which the signal encodes a sequence of instructions, or portion thereof, for execution by a machine (e.g., a computing device) and any related information (e.g., data structures and data) that causes the machine to perform any one of the methodologies and/or embodiments described herein. Examples of a computing device include, but are not limited to, an electronic book reading device, a computer workstation, a terminal computer, a server computer, a handheld device (e.g., a tablet computer, a smartphone, etc.), a web appliance, a network router, a network switch, a network bridge, any machine capable of executing a sequence of instructions that specify an action to be taken by that machine, and any combinations thereof. In one example, a computing device may include and/or be included in a kiosk. FIG.4shows a diagrammatic representation of one embodiment of a computing device in the exemplary form of a computer system400within which a set of instructions for causing a control system to perform any one or more of the aspects and/or methodologies of the present disclosure may be executed. It is also contemplated that multiple computing devices may be utilized to implement a specially configured set of instructions for causing one or more of the devices to perform any one or more of the aspects and/or methodologies of the present disclosure.
Computer system400includes a processor404and a memory408that communicate with each other, and with other components, via a bus412. Bus412may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures. Processor404may include any suitable processor, such as without limitation a processor incorporating logical circuitry for performing arithmetic and logical operations, such as an arithmetic and logic unit (ALU), which may be regulated with a state machine and directed by operational inputs from memory and/or sensors; processor404may be organized according to Von Neumann and/or Harvard architecture as a non-limiting example. Processor404may include, incorporate, and/or be incorporated in, without limitation, a microcontroller, microprocessor, digital signal processor (DSP), Field Programmable Gate Array (FPGA), Complex Programmable Logic Device (CPLD), Graphical Processing Unit (GPU), general purpose GPU, Tensor Processing Unit (TPU), analog or mixed signal processor, Trusted Platform Module (TPM), a floating point unit (FPU), and/or system on a chip (SoC). Memory408may include various components (e.g., machine-readable media) including, but not limited to, a random-access memory component, a read only component, and any combinations thereof. In one example, a basic input/output system416(BIOS), including basic routines that help to transfer information between elements within computer system400, such as during start-up, may be stored in memory408. Memory408may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software)420embodying any one or more of the aspects and/or methodologies of the present disclosure.
In another example, memory408may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof. Computer system400may also include a storage device424. Examples of a storage device (e.g., storage device424) include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof. Storage device424may be connected to bus412by an appropriate interface (not shown). Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE1394(FIREWIRE), and any combinations thereof. In one example, storage device424(or one or more components thereof) may be removably interfaced with computer system400(e.g., via an external port connector (not shown)). Particularly, storage device424and an associated machine-readable medium428may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system400. In one example, software420may reside, completely or partially, within machine-readable medium428. In another example, software420may reside, completely or partially, within processor404. Computer system400may also include an input device432. In one example, a user of computer system400may enter commands and/or other information into computer system400via input device432. Examples of an input device432include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof. 
Input device432may be interfaced to bus412via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus412, and any combinations thereof. Input device432may include a touch screen interface that may be a part of or separate from display436, discussed further below. Input device432may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above. A user may also input commands and/or other information to computer system400via storage device424(e.g., a removable disk drive, a flash drive, etc.) and/or network interface device440. A network interface device, such as network interface device440, may be utilized for connecting computer system400to one or more of a variety of networks, such as network444, and one or more remote devices448connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof. Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof. A network, such as network444, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software420, etc.) may be communicated to and/or from computer system400via network interface device440. 
Computer system400may further include a video display adapter452for communicating a displayable image to a display device, such as display device436. Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof. Display adapter452and display device436may be utilized in combination with processor404to provide graphical representations of aspects of the present disclosure. In addition to a display device, computer system400may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof. Such peripheral output devices may be connected to bus412via a peripheral interface456. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof. The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments, what has been described herein is merely illustrative of the application of the principles of the present invention. Additionally, although particular methods herein may be illustrated and/or described as being performed in a specific order, the ordering is highly variable within ordinary skill to achieve methods, systems, and software according to the present disclosure. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention. 
Exemplary embodiments have been disclosed above and illustrated in the accompanying drawings. It will be understood by those skilled in the art that various changes, omissions and additions may be made to that which is specifically disclosed herein without departing from the spirit and scope of the present invention. | 59,184 |
11861469 | DETAILED DESCRIPTION Embodiments of the present invention will now be described in detail with reference to the accompanying Figures. Auto-AI or AutoML systems allow developers, ranging from highly trained data scientists to novices using AI or ML systems for the first time, to implement a range of different solutions that suit the needs of the problem to be solved. During implementation, Auto-AI systems may develop multiple AI pipelines, which may be compared and selected by a developer. Such AI pipelines may include some combination of preprocessing data (e.g., ingestion of data, tagging of data, classification of data, preparation of data), an AI model (machine learning or deep learning models), and feature engineering (transforming an existing feature to a new feature, e.g., absolute(X)). And while these techniques may allow for a robust comparison of multiple pipelines in a shortened timeframe, limitations still exist that limit the uptake of this technology. Human-Computer Interface (HCI) researchers have performed extensive work on understanding how data scientists work and how to design systems to better support them. For example, it is suggested that 80 percent of the time of a data science project is spent in data preparation. As a result, data scientists often do not have enough time to complete a comprehensive data analysis. Auto-AI systems focus mostly on the model building and data analysis tasks, with only a few exceptions that cover the data preparation tasks. Interactive machine learning research aims to design better user experiences for human users to enable more information to be provided to the machine learning tool. These users are often labelers of a data sample or domain experts whose domain knowledge the machine learning model does not easily reflect.
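The pipeline structure described above (preprocessing, feature engineering such as absolute(X), and a model) can be sketched as candidate pipelines scored on labeled data, with the best-scoring one selected. Everything below is a toy illustration of that comparison, not the system's actual search procedure; the transforms, "model," and data are assumptions.

```python
# Toy sketch of auto-AI pipeline comparison: each candidate chains a
# preprocessing/feature step (e.g. absolute(X)) into a stand-in "model".
def identity(x):
    return x

def absolute(x):
    return abs(x)

def scale(x):
    return x / 10.0

def threshold_model(x):
    """Trivial stand-in model: classify the transformed feature by threshold."""
    return 1 if x > 0.5 else 0

PIPELINES = {
    "p1": [identity, threshold_model],
    "p2": [absolute, threshold_model],
    "p3": [scale, absolute, threshold_model],
}

def run_pipeline(steps, x):
    for step in steps:
        x = step(x)
    return x

def score(steps, data):
    """Fraction of (input, label) pairs the pipeline predicts correctly."""
    return sum(run_pipeline(steps, x) == y for x, y in data) / len(data)

data = [(-3, 1), (3, 1), (0.2, 0), (-0.2, 0)]
best = max(PIPELINES, key=lambda name: score(PIPELINES[name], data))
print(best)  # p2
```

Here the pipeline with the absolute-value feature transform wins because the labels depend on magnitude, which mirrors how an auto-AI system surfaces the best of several generated pipelines for the developer to select.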
Alternatively, as outlined below, ease and trust of collaboration of data scientists with auto-AI to build models can greatly improve the HCI, making for more trusted and efficient overall system building. Many tools are built to support data scientists' work practices. For example, Jupyter® Notebook (Jupyter is a registered trademark of NumFOCUS, Inc.) and its variations such as Google® Colab (Google is a registered trademark of Google, LLC) and Jupyter-Lab are widely adopted by the data science community. These systems provide an easy code-and-test environment with a graphical user interface so that data scientists can quickly iterate their model crafting and testing process. Another group of tools includes the Data Voyager and TensorBoard® (TensorBoard is a registered trademark of Google, LLC) systems that provide visual analytic support to data scientists to explore their data, but they often stop in the data preparation stage and thus do not provide automated support for model building tasks. Auto-AI describes a group of technologies that can automate the manual processes required in the preprocessing of data, feature selection, model tuning, and model selection in the traditional data science workflow, so that it can support data scientists to work to generate insights from the dataset faster. Despite the extensive work in building these AI systems, they mainly focus on data visualization, and do not focus on the interaction between humans and AI in the automated data science domain. In the embodiments described below, an auto-AI system is described which may enable a mechanism to improve the HCI for data scientists and other users, and the auto-AI process in general by building a portable AI result that can be used outside of the framework of the auto-AI system, thereby reducing the amount of computational resources when these systems are run in tandem. 
FIG.1illustrates the auto-AI system199for use in aiding the building of an AI pipeline for implementation in data analysis, in accordance with an embodiment of the invention. In an example embodiment, auto-AI system199includes an auto-AI device110and a user computing device160interconnected via a network198. In the example embodiment, network198is the Internet, representing a worldwide collection of networks and gateways to support communications between devices connected to the Internet. Network198may include, for example, wired, wireless or fiber optic connections. In other embodiments, network198may be implemented as an intranet, a local area network (LAN), or a wide area network (WAN). In general, network198can be any combination of connections and protocols that will support communications between the auto-AI device110and the user computing device160. User computing device160may include a user interface162and/or user data164. User computing device160may be a desktop computer, a notebook, a laptop computer, a tablet computer, a handheld device, a smart-phone, a thin client, or any other electronic device or computing system capable of receiving and sending data to and from other computing devices such as auto-AI device110via network198. The components of user computing device160are described in more detail with reference toFIG.5. User interface162includes components used to receive input from a user and transmit and display the results of auto-AI generation program112in order to facilitate the user's deployment of an AI pipeline. In an example embodiment, user interface162uses a combination of technologies and devices, such as device drivers, to provide a platform to enable users of user computing device160to see displays of multiple pipelines, select a specific pipeline, and view parameters and comparisons of such pipelines.
In the example embodiment, the user interface162may include application components to display the results of auto-AI building and evaluation, a code editor, and a kernel for executing code developed from the auto-AI. In an example embodiment, user interface162may be any Programming Console or Integrated Developer Environment, such as VSCode, Eclipse® (Eclipse® is a registered trademark of Eclipse Foundation), Jupyter® Notebook, Google® Colab, Jupyter®-Lab, Data Voyager and TensorBoard®, etc. User data164is a data file containing information used to build and test an AI pipeline. User data164may contain structured and/or unstructured data. User data164may be any type of data that a user wants to perform data analysis on using machine learning or artificial intelligence techniques. Auto-AI device110includes an Auto-AI generation program112having an external optimization module116and an internal optimization module114, an auto-AI conversion program120, a history database130, a library database132, an auto-AI scoring program140, an auto-AI training program150, and a user interface155. In the example embodiment, auto-AI device110may include a cluster of servers, such as a cloud computing environment described in detail below inFIGS.6and7, executing the same software to collectively process the actions in parallel. However, auto-AI device110may be any computing device such as a desktop computer, a notebook or a laptop computer, a smart phone, a tablet computer, a handheld device, a thin client, or any other electronic device or computing system capable of receiving and sending data to and from user computing device160via network198. The components of auto-AI device110are described in more detail with reference toFIG.5.
History DB130is a database containing AI pipelines, the characteristics of the data used to develop the AI pipeline, scores of the AI pipeline with the data, whether the AI pipeline was selected by a user, and any modifications to the AI pipeline by the user. Such data may form the basis for training the machine learning elements of the auto-AI generation program112and auto-AI conversion program120. Auto-AI scoring program140is a program that scores the performance of an AI pipeline. Auto-AI scoring program140may use any number of performance metrics to score the accuracy, precision, and any other relevant metrics in order to assess and rank each AI pipeline in comparison to the others, such as R2, F1, ROC AUC, and Precision scores. Auto-AI generation program112includes an external optimization module116and an internal optimization module114. Auto-AI generation program112is an automated machine learning program that is capable of generating one or more AI pipelines based on a dataset, such as user data164. Auto-AI generation program112may leverage machine learning techniques to select and optimize elements of an AI pipeline based on scores of previously developed AI pipelines contained in history database130. In doing so, the internal optimization module114is a machine learning model trained to select components of an AI pipeline based on the type of data being evaluated, as well as statistical characteristics of the data that may lend themselves to a specific element (e.g., model) or set of elements suited for the data. Internal optimization module114may compose a pipeline or a portion thereof by selecting from one or more libraries of transformers, feature unions, and estimators, to improve predictive performance. Once these elements are selected, external optimization module116may select additional components of the AI pipeline to provide additional optimization.
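To make the pipeline composition and scoring described above concrete, the following is a minimal sketch, assuming scikit-learn as the underlying library of transformers and estimators (the embodiments do not prescribe a specific library): a candidate pipeline is assembled from one transformer and one estimator, then scored on several of the metrics mentioned (F1, Precision, ROC AUC), as auto-AI scoring program140might do when ranking pipelines.

```python
# Hypothetical sketch of composing and scoring one candidate AI pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, roc_auc_score

# Synthetic stand-in for user data 164.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A candidate pipeline: one transformer followed by one estimator,
# as internal optimization module 114 might select them.
candidate = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=0)),
])
candidate.fit(X_train, y_train)

# Score the pipeline on several metrics, as scoring program 140 might.
pred = candidate.predict(X_test)
proba = candidate.predict_proba(X_test)[:, 1]
scores = {
    "F1": f1_score(y_test, pred),
    "Precision": precision_score(y_test, pred),
    "ROC AUC": roc_auc_score(y_test, proba),
}
```

A ranked display of candidate pipelines would then simply sort such score dictionaries across candidates.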
External optimization module116may tune transformers and estimators which provide hyperparameter range information to a parameter optimizer. Each module may be trained by auto-AI training program150using the results contained in history DB130. Auto-AI training program150is a program that trains the machine learning algorithms used in external optimization module116and internal optimization module114. Training for each model is based on the type of model and the information contained in history DB130. Library database132is a collection of libraries containing commands, sub-routines, classes, value and type specifications, and other code used in machine learning or artificial intelligence systems. Auto-AI conversion program120converts an AI pipeline created by the auto-AI generation program112into a non-native format that may be operated and manipulated in an environment separate from the auto-AI device110, and the components located on the auto-AI device. The non-native format may be in the form of source code or object code and may be exportable from the auto-AI device110. In embodiments where the non-native format is in the form of source code, the code may use a general programming language such as, for example, Java® (Java® is a registered trademark of Oracle), C++, or Python® (Python® is a registered trademark of PSF). User interface155is a program that works in conjunction with user interface162to display the AI pipelines created by auto-AI generation program112, along with scores and metrics from auto-AI scoring program140, and source code created by auto-AI conversion program120. FIG.2is a flow chart illustrating a method of creating an AI pipeline using auto-AI generation program112. The selection or determination in each step may be performed by a trained machine learning module of auto-AI generation program112, which may be trained by auto-AI training program150based on decisions made in performing each step and scores determined by auto-AI scoring program140.
Referring to step210, one or more methods/techniques for preprocessing data may be selected by a specifically trained AI module, such as internal optimization module114, of auto-AI generation program112, and these methods or techniques may make up a preprocessing element of an AI pipeline. A ground truth of the user data164may be determined. In embodiments where ground truths are not generated by the user, ground truth may be determined using statistical techniques such as k-means clustering, mixture models, hierarchical clustering, hidden Markov models, blind signal separation, self-organizing maps (SOMs), adaptive resonance theory (ART), and any other applicable methods. Such ground truth gathering may define a standard, or measuring stick, of the data contained in user data164. Following ground truth determination, data cleansing of user data164may be performed with regard to the ground truth gathered in step210. Cleansing of data may include removing outliers or biases included in the data based on the comparison of such data points to the ground truth. Following cleansing, additional data engineering may be performed in order to organize or classify the data into meaningful subsets that may be used in model training. Referring to step220, one or more methods/techniques may be selected by a specifically trained AI module, such as internal optimization module114, of auto-AI generation program112, and these methods or techniques may make up a modeling element of an AI pipeline. One or more models may be selected by an AI module of auto-AI generation program112based on the results and methods used in creating the preprocessing element from step210. Auto-AI generation program112may use models such as, for example, Linear Regression, Logistic Regression, Random Forest, Gradient Boosted Trees, Support Vector Machines (SVM), Neural Networks (including Convolutional Neural Networks and Deep Learning networks), Decision Trees, Naive Bayes, and Nearest Neighbor.
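As an illustrative sketch (not the claimed method) of the ground-truth determination and data cleansing in step210, the following uses k-means clustering, one of the techniques listed above, to derive a proxy ground truth and then removes distance-based outliers; the two-cluster synthetic data and the two-standard-deviation cutoff are assumptions for illustration only.

```python
# Hypothetical sketch: k-means ground truth plus outlier cleansing.
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for user data 164: two well-separated groups.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])

# Derive a proxy ground truth when the user supplies none.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
labels = km.labels_

# Distance of each point to its assigned cluster center serves as the
# comparison to the ground truth.
dist = np.linalg.norm(data - km.cluster_centers_[labels], axis=1)

# Cleanse outliers: drop points beyond two standard deviations of the
# mean distance (an assumed cutoff, chosen for illustration).
keep = dist < dist.mean() + 2 * dist.std()
clean = data[keep]
```

The cleansed subset `clean` would then feed the data engineering and model training described in steps210and220.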
The one or more models may undergo hyperparameter optimization for each of the one or more preprocessing elements of the AI pipeline based on the likelihood of success of such a combination, as determined by an AI module of auto-AI generation program112. Referring to step230, one or more methods/techniques may be selected by a specifically trained AI module, such as external optimization module116, of auto-AI generation program112, and these methods or techniques may make up a feature engineering element of an AI pipeline. One or more transformers may be selected by an AI module of auto-AI generation program112based on the models created in step220. Transformers may be used to convert the results of the model into a format that is easier for a user to decipher or for scoring by auto-AI scoring program140. Referring to step240, the selected and trained elements may be displayed to the user using a combination of user interface162and user interface155. The display may score each of the AI pipelines created in steps210through230and display a ranking of the pipelines in order along with their scores. Additionally, user interface162may display each AI pipeline element, and have an interface showing connections between each element (when applicable). User interface162may additionally have an interface element to enable a user to view and/or download a non-native format for each AI pipeline generated by auto-AI conversion program120. FIG.3is a flow chart illustrating a method used by auto-AI conversion program120to convert an AI pipeline generated by the auto-AI generation program112into a non-native format that can be implemented independent of the components of the auto-AI device110. An example build layout of the non-native format is depicted inFIG.4. Referring to step310, extraction of pipelines from auto-AI generation program112by auto-AI conversion program120may occur.
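A hedged sketch of the hyperparameter optimization described above, assuming scikit-learn's GridSearchCV as the parameter optimizer (the embodiments do not prescribe one): the hyperparameter range information for the estimator is expressed as a grid and searched for a given preprocessing choice.

```python
# Hypothetical sketch: optimizing one model's hyperparameters for one
# chosen preprocessing element.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=1)

# One preprocessing element paired with one model element.
pipe = Pipeline([
    ("scale", MinMaxScaler()),
    ("model", LogisticRegression(max_iter=500)),
])

# Hyperparameter range information supplied to the parameter optimizer.
param_grid = {"model__C": [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(pipe, param_grid, cv=3, scoring="f1")
search.fit(X, y)

best_C = search.best_params_["model__C"]
```

Each (preprocessing, model, hyperparameter) combination scored this way could then be ranked for display in step240.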
Extraction of the pipeline retrieves each element (e.g., data preprocessing modules, AI models, transformer modules) from the AI pipeline created in steps210-240. Extraction of the pipeline may include the models, transformers, and data preprocessing techniques chosen by auto-AI generation program112, as well as the ordering of such elements. Referring to step320, auto-AI conversion program120may extract the hyperparameters determined for each element of the AI pipeline. Referring to step330, auto-AI conversion program120may formulate the constructor for each step based on each of the extracted pipelines and the extracted hyperparameters. The formulated constructors may make up a portion of the non-native format, such as source code. Formulating the constructors may be performed through a combination of AI and non-AI techniques. For example, in an embodiment a shell may be constructed by inserting a form layout of constructors for the element. The form layout may include the constructors, function calls, or any code that may form a generic template for an element. The hyperparameters may be loaded or placed into the shell based on rules for such hyperparameters. In this embodiment, an AI code generation module may modify the source code for the created shell to conform with learned user modifications. The AI code generation module may be trained based on modifications made to the source code in conjunction with scores for the modified code determined by auto-AI scoring program140. In another embodiment, an AI code generation module may create the source code without the aid of forms and the intermediate step of creating a shell. In another embodiment, auto-AI conversion program120may transition from using a form layout to just an AI code generation module based on training of the AI code generation module on the form layout. Referring to step340, auto-AI conversion program120may generate an AI pipeline in the non-native format.
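The form-layout ("shell") approach of step330might be sketched as follows; the element names, template strings, and helper function here are hypothetical illustrations, not taken from the embodiments: extracted elements and their extracted hyperparameters are rendered into constructor calls and assembled into portable source code.

```python
# Hypothetical sketch of step 330: filling a constructor "shell" with
# extracted hyperparameters to emit portable source code.

# Stand-ins for the elements and hyperparameters extracted in
# steps 310 and 320 (names are illustrative).
extracted_steps = [
    ("StandardScaler", {}),
    ("RandomForestClassifier", {"n_estimators": 200, "max_depth": 8}),
]

def formulate_constructor(cls_name, hyperparams):
    """Render one constructor call with its extracted hyperparameters."""
    args = ", ".join(f"{k}={v!r}" for k, v in hyperparams.items())
    return f"{cls_name}({args})"

# A generic form layout (shell) for a pipeline, filled element by element.
lines = ["from sklearn.pipeline import Pipeline", "", "pipeline = Pipeline(["]
for name, params in extracted_steps:
    lines.append(f'    ("{name.lower()}", {formulate_constructor(name, params)}),')
lines.append("])")
source_code = "\n".join(lines)
```

An AI code generation module, as described above, could then further modify `source_code` to conform with learned user preferences.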
Generation of the pipeline may be performed by ordering the constructors for each of the extracted elements based on the order used in auto-AI generation program112extracted in step310. For example, the constructors for each pipeline may be ordered similarly to the constructed layout depicted inFIG.4. InFIG.4, the constructors are ordered having constructors to load the libraries and data410, constructors for the preprocessing elements420, constructors for the AI model430, constructors for a first optimization of the hyperparameters440of the AI model, constructors for feature engineering elements450, and constructors for a second optimization of the hyperparameters460of the AI model and feature engineering elements. This is an example layout of a construction of a complete AI pipeline having at least one constructor from each element; however, constructors may be removed or added depending on the data contained in user data164. Referring to step350, auto-AI conversion program120may display the non-native format to the user in a code editor using user interface162and user interface155. The code editor may display the editable code used in the non-native format of the AI pipeline to a user, which may allow the user to read and/or edit the code. Additionally, by allowing a user to interact with the code, it enables additional user confidence in the output of the auto-AI generation program112. Specifically, the user may confirm that the generated code is acceptable, and if it is not acceptable the user may manipulate the code to suit their preferences. Referring to step360, auto-AI conversion program120may determine whether a user has edited the code. This determination may be made by comparing the code (or hashes of versions of the code) with each other to determine if any differences exist between what was originally created by the auto-AI conversion program120and what is currently displayed in the code editor.
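The hash-based edit detection of step360can be sketched as below, assuming SHA-256 digests of the originally generated code and the code currently displayed in the editor (the embodiments do not specify a hash function):

```python
# Hypothetical sketch of step 360: detecting user edits by comparing
# hashes of code versions.
import hashlib

def code_hash(code: str) -> str:
    """Digest of one version of the generated code."""
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

# Illustrative code strings; in practice these come from the conversion
# program's output and the editor's current contents.
original = "pipeline = Pipeline([('scale', StandardScaler())])"
edited = "pipeline = Pipeline([('scale', MinMaxScaler())])"

# Differing digests indicate the user edited the code (proceed to
# step 370); identical digests indicate no change (proceed to step 380).
user_edited = code_hash(original) != code_hash(edited)
```

Comparing fixed-length digests rather than full code strings keeps the intermittent check cheap regardless of file size.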
The determination may be made intermittently (e.g., based on time elapsed in the editor, or based on an addition/deletion of a line of code) or on user prompt. If there is a change in the code, auto-AI conversion program120proceeds to step370. If there is no change in the code, auto-AI conversion program120proceeds to step380. Referring to step370, the newly created code may be evaluated similarly to each of the other AI pipelines by auto-AI generation program112. For example, the newly created code may be optimized, scored, and displayed, using the processes outlined in steps220through240. This may enable the user to compare the new code with each of the other auto-AI models using similar metrics. Additionally, scoring of the newly created code may be saved in history database130and fed back into the AI code generation module for training of the model. Referring to step380, auto-AI conversion program120may export the non-native format for use in an environment outside of the auto-AI generation program112. Such an environment may be outside of the cloud environment in which the non-native format was originally constructed, or as a portion of an API in the cloud environment. FIG.5depicts a block diagram of components of auto-AI device110and user computing device160, in accordance with an illustrative embodiment of the present invention. It should be appreciated thatFIG.5provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made, and auto-AI device110and user computing device160may include multiple instances of the computer/server depicted, such as in the cloud computing environment described inFIG.6andFIG.7.
Auto-AI device110and user computing device160include communications fabric902, which provides communications between computer processor(s)904, memory906, persistent storage908, communications unit912, and input/output (I/O) interface(s)914. Communications fabric902can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, communications fabric902can be implemented with one or more buses. Memory906and persistent storage908are computer-readable storage media. In this embodiment, memory906includes random access memory (RAM)916and cache memory918. In general, memory906can include any suitable volatile or non-volatile computer-readable storage media. The programs auto-AI generation program112, external optimization module116, internal optimization module114, auto-AI conversion program120, auto-AI scoring program140, auto-AI training program150, and user interface155in auto-AI device110; and user interface162in user computing device160are stored in persistent storage908for execution by one or more of the respective computer processors904via one or more memories of memory906. The files history database130and library database132in auto-AI device110; and user data164in user computing device160are stored in persistent storage908. In this embodiment, persistent storage908includes a magnetic hard disk drive. Alternatively, or in addition to a magnetic hard disk drive, persistent storage908can include a solid state hard drive, a semiconductor storage device, read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or any other computer-readable storage media that is capable of storing program instructions or digital information. The media used by persistent storage908may also be removable. 
For example, a removable hard drive may be used for persistent storage908. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer-readable storage medium that is also part of persistent storage908. Communications unit912, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit912includes one or more network interface cards. Communications unit912may provide communications through the use of either or both physical and wireless communications links. Auto-AI generation program112, external optimization module116, internal optimization module114, auto-AI conversion program120, auto-AI scoring program140, auto-AI training program150, user interface155, history database130, and library database132in auto-AI device110; and user interface162and user data164in user computing device160may be downloaded to persistent storage908through communications unit912. I/O interface(s)914allows for input and output of data with other devices that may be connected to auto-AI device110and user computing device160. For example, I/O interface914may provide a connection to external devices920such as a keyboard, keypad, a touch screen, and/or some other suitable input device. External devices920can also include portable computer-readable storage media such as, for example, thumb drives, portable optical or magnetic disks, and memory cards.
Software and data used to practice embodiments of the present invention, e.g., auto-AI generation program112, external optimization module116, internal optimization module114, auto-AI conversion program120, auto-AI scoring program140, auto-AI training program150, user interface155, history database130, and library database132in auto-AI device110; and user interface162and user data164in user computing device160, can be stored on such portable computer-readable storage media and can be loaded onto persistent storage908via I/O interface(s)914. I/O interface(s)914can also connect to a display922. Display922provides a mechanism to display data to a user and may be, for example, a computer monitor. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. 
Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). 
A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now toFIG.6, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50includes one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.6are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.7, a set of functional abstraction layers provided by cloud computing environment50(FIG.6) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.7are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. 
Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. 
Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and Auto-AI program96. The programs described herein are identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. 
The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. 
The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. While steps of the disclosed method and components of the disclosed systems and environments have been sequentially or serially identified using numbers and letters, such numbering or lettering is not an indication that such steps must be performed in the order recited, and is merely provided to facilitate clear referencing of the method's steps. 
Furthermore, steps of the method may be performed in parallel to perform their described functionality.
11861470 | DETAILED DESCRIPTION FIG.1illustrates an example network environment100for generating an ML model generation tool in accordance with an implementation of the present disclosure. As illustrated inFIG.1, the network environment100includes a network102, one or more user devices104, one or more storage devices108, one or more cloud devices110, and/or a service provider112. The network102may be a single network or a combination of different networks. For example, the network102may be a local area network (LAN), a wide area network (WAN), a public network, a private network, a proprietary network, a Public Switched Telephone Network (PSTN), the Internet, a wireless network, a virtual network, a satellite network, or any combination thereof. The network102may also include various network access points, e.g., wired or wireless access points such as base stations or Internet exchange points, through which a data source may connect to the network102in order to transmit data114-1,114-2,114-3, etc. (collectively referred to herein as "data114"), via the network102. The one or more user devices104may be any type of computing device including, but not limited to, a desktop computer, a laptop computer, a built-in device in a motor vehicle, or a mobile device. In implementations, the one or more user devices104may also include wearable devices, such as a smart watch, smart glasses, smart shoes, electronic textiles, etc. Using one or more of the user devices104, a user (not shown) may send data114-1to the service provider112via the network102voluntarily or in response to a request from the service provider112or a third party. The user may be an existing customer of the service provider112. For example, the user may be a policy holder of an auto insurance service or of any other type of insurance policy (e.g., home, life, etc.). In implementations, the user may be a potential customer of the service provider112.
The data114-1may include, but is not limited to, potential customer survey data, insurance quote data, customer information, vehicle information, accident and claim information, etc. The data114-1may be real-time data or data that is accumulated over a period of time. It should be appreciated that the data114-1,114-2, and114-3shown inFIG.1are merely for the purpose of illustration. The data114-1generated by one or more of the user devices104may be uploaded to a remote database (e.g., storage device108), a cloud storage (not shown inFIG.1) associated with the cloud devices110, or the storage device112-C associated with the service provider112. As such, the content of the data114-1,114-2, and114-3may have a certain level of overlap, yet each of the data114-1,114-2, and114-3may also include non-overlapping information. The service provider112may include a server device112-A, a model generating device112-B, and/or a storage device112-C. The service provider112may utilize one or more of the server device112-A, the model generating device112-B, or the storage device112-C to provide internet-based services, for example, banking services, auto-insurance services, home security services, etc. The server device112-A may implement software and/or applications enabling online to offline operations. The software and/or applications may include various versions or instances that can be installed or created in the user devices (e.g., the one or more user devices104). The software and/or applications may be stored on the storage device112-C. The model generating device112-B may be any type of computing device that is configured to generate a ML model. It should be understood that the server device112-A, the model generating device112-B, and/or the storage device112-C shown inFIG.1are merely for illustration purposes. The present disclosure is not intended to be limiting. The model generating device112-B can be integrated into the server device112-A.
In implementations, the model generating device112-B can be located at a third-party service provider connected to the network102. The storage device112-C may be physically connected to, and in communication with, the same intranet as the server device112-A. In implementations, the storage device112-C may be a cloud storage space provided by a cloud service provider. In some examples, the model generating device112-B generates a web-based tool that enables a user to generate, modify, or train the ML models from any computing device connected to the network102. The web-based tool and the pre-generated ML models (i.e., the pre-trained ML models) may be further implemented on a cloud-based system, for example, the cloud device110. The web-based tool and the pre-trained ML model may be distributed to any computing devices connected to the cloud-based system. Any computing devices connected to the cloud-based system may download the web-based tool and the pre-trained ML model to the local storage and perform data analysis using the trained ML model. In some examples, the user may modify the pre-trained ML model via the web-based tool, or generate additional ML models via the web-based tool. An administrator106of the service provider may access the one or more server devices112-A, one or more model generating devices112-B, and/or one or more storage devices112-C to perform a task. For example, as will be described in greater detail below, the administrator106may send a request via the network102to the one or more user devices104to obtain data114-1stored thereon. In implementations, the administrator106may retrieve data stored on the one or more storage devices112-C. In other implementations, the administrator106may retrieve data114-3stored on the one or more storage devices108via the network102. Additionally, or alternatively, the administrator106may retrieve data114-2from one or more cloud devices110.
The one or more cloud devices110may include a cloud service provider or a third-party service provider that is affiliated with the service provider, for example, a product manufacturer or an application provider that sells the product or service through a service provider platform. The example network environment100illustrated inFIG.1enables a user of the ML model generating system to obtain data from various sources, via the network102, to train the ML model. For example, to train a ML model to predict potential users of a newly proposed auto-insurance plan, the user may obtain data114-3stored in the storage device108via the network102. The data114-3may include information related to former and existing customers of the auto-insurance company. Alternatively, or additionally, the user may obtain data114-1from the user devices104and/or data114-2from the cloud device110, via the network102. The data114-1and114-2may include information related to potential customers, such as, consuming behaviors, social activities, travel frequencies and preferences, etc. The example network environment as illustrated inFIG.1provides the user with the availability and flexibility to utilize various types of data to train the ML model to achieve optimal prediction results. In addition, the example network environment as illustrated inFIG.1provides a web-based application with a guided user interface (GUI) that enables the user to build new ML models and/or modify the pre-trained ML models based on various business analysis needs. The GUI provides step-by-step instructions to the user to configure one or more parameters related to data analysis and prediction using the ML model and datasets from various data sources. FIG.2illustrates an example configuration200of a device for generating an ML model generation tool in accordance with an implementation of the present disclosure.
As illustrated inFIG.2, the example configuration200of the ML model generating device112-B may include, but is not limited to, one or more processing units204, one or more network interfaces206, an input/output (I/O) interface208, and a memory210. In implementations, the processing units204may be configured to execute instructions that are stored in the memory210, received from the input/output interface208, and/or the network interface206. In implementations, the processing units204may be implemented as one or more hardware processors including, for example, a microprocessor, an application-specific instruction-set processor, a physics processing unit (PPU), a central processing unit (CPU), a graphics processing unit, a digital signal processor, a tensor processing unit, etc. Additionally or alternatively, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc. The memory210may include machine readable media in the form of volatile memory, such as Random Access Memory (RAM), and/or non-volatile memory, such as read only memory (ROM) or flash RAM. The memory210is an example of machine readable media. The machine readable media may include a volatile or non-volatile type, a removable or non-removable media, which may achieve storage of information using any method or technology. The information may include a machine readable instruction, a data structure, a program module or other data.
Examples of machine readable media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electronically erasable programmable read-only memory (EEPROM), quick flash memory or other internal storage technology, compact disk read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which may be used to store information that may be accessed by a computing node. As defined herein, the machine readable media does not include any transitory media, such as modulated data signals and carrier waves. In implementations, the network interfaces206may be configured to connect the model generating device112-B to other computing devices via the network102. The network interfaces206may be established through a network interface controller (NIC), which may employ both hardware and software in connecting the model generating device112-B to the network102. Each type of NIC may use a different type of fabric or connector to connect to a physical medium associated with the network102. Examples of types of fabrics or connectors may be found in the IEEE 802 specifications, and may include, for example, Ethernet (which is defined in 802.3), Token Ring (which is defined in 802.5), wireless networking (which is defined in 802.11), InfiniBand, etc. In implementations, the model generating device112-B may further include other hardware components and/or other software components, such as program modules214to execute instructions stored in the memory210for performing various operations, and program data212for storing data related to various operations performed by the program modules214.
The program modules214may include a data summarization module222, a data pre-processing module224, a data visualization module226, a data correlation discovery module228, a dimension reduction module230, an initialization module232, a training module234, a testing module236, and a delivery module238. The data summarization module222may be configured to generate a summary of a dataset202received through the network interface206. The model generating device112-B may generate the guided user interface (GUI) (i.e., a graphic user interface) on a terminal device that the administrator106operates. The guided user interface may be compatible with the input/output (I/O) interface208. The administrator106may obtain the dataset202from various data storages and import the dataset to the model generating device112-B by operating the guided user interface. The dataset202may be any combinations of the data114-1,114-2, or114-3shown inFIG.1, and may be stored on the program data212. The dataset202can be in any computer readable format, for example, and without limitation, text file format or comma-separated values (CSV) file format. Given CSV file format as an example, based on the input of the administrator106via the guided user interface, the data summarization module222determines a count of rows and a count of columns of the dataset. The columns of the dataset may denote a plurality of variables or objects and the rows of the dataset may denote respective values corresponding to the plurality of variables or objects. The data summarization module222may generate the summary including the count of columns, the count of rows, and a total count of data items in the dataset. 
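As an illustrative sketch (not part of the patent text, which names no library), the summary that the data summarization module222produces from a CSV dataset can be computed in a few lines of Python using pandas; the function name `summarize_dataset` is a hypothetical label:

```python
import pandas as pd

def summarize_dataset(csv_source) -> dict:
    """Summarize a CSV dataset: count of columns (variables/objects),
    count of rows (respective values), and total count of data items."""
    df = pd.read_csv(csv_source)
    n_rows, n_cols = df.shape
    return {
        "rows": n_rows,
        "columns": n_cols,
        "total_items": n_rows * n_cols,
        # describe() yields count, mean, std, min, quartiles, and max
        # for every numeric column in a single call
        "statistics": df.describe(),
    }
```

The `describe()` output corresponds to the per-variable statistics (mean, median, standard deviation, minimum, maximum) that the module may calculate.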
In implementations, based on the input of the administrator106via the guided user interface, the data summarization module222may further calculate statistics of the respective values corresponding to each of the plurality of variables or objects, for example, a sum, a mean value, a median value, a standard deviation, a minimum value, a maximum value, etc., of the respective values corresponding to each variable or object. The data pre-processing module224may be configured to receive the dataset202and the summary of the dataset202from the data summarization module222and pre-process the dataset202based on the input of the administrator106via the guided user interface. The model generating device112-B may update the guided user interface to guide the administrator106to select the pre-processing operations. The pre-processing operations on the dataset202may include removing null values in the dataset or replacing the null values with a selected value, e.g., a mean value or a median value indicated in the summary of the dataset202. Alternatively, or additionally, the pre-processing operations on the dataset202may also include dropping duplicate columns of the dataset, i.e., duplicate variables or objects. The pre-processing operations on the dataset202may further include outlier treatment. For a given variable, outliers are those observations that lie outside 1.5*Inter Quartile Range (IQR), where the IQR is the difference between the 75th and 25th percentiles.
The outlier treatment may include imputation of the outliers with a mean value, a median value, a mode value, etc. Alternatively, or additionally, the outlier treatment may include capping of the outliers. For values that lie outside the 1.5*IQR limits, the pre-processing operations may cap them by replacing those observations below the lower limit with the value at the 5th percentile and those observations above the upper limit with the value at the 95th percentile. In implementations, the pre-processing operations on the dataset202may be performed on ordinal categorical variables. In other implementations, the pre-processing operations on the dataset202may be performed on numerical values of a single variable or object. The data visualization module226may be configured to receive the pre-processed dataset202from the data pre-processing module224and generate one or more graphic illustrations of the dataset202based on the input of the administrator106via the guided user interface. The model generating device112-B may update the guided user interface to guide the administrator106to select the types of the graphic illustrations. For example, and without limitation, the one or more graphic illustrations may include histograms of the dataset, box plots of the dataset, pie plots of the dataset, correlation plots of the dataset, scattered plots of the dataset, etc. The guided user interface may provide user interactive guidance enabling the administrator106to select a portion or a combination of different portions of the dataset202to be presented. The data visualization module226then also generates the one or more graphic illustrations of a portion of the dataset202based on the input of the administrator106via the guided user interface. The data visualization module226presents the pre-processed dataset202in various illustrations that facilitate further discovery of the correlations between different variables or objects.
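A minimal sketch of these pre-processing operations, again assuming pandas (the patent names no implementation) and using a hypothetical function name, ties the null-value replacement, duplicate-column dropping, and 1.5*IQR capping together:

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Null-value treatment, duplicate-column removal, and IQR-based
    outlier capping, mirroring the described pre-processing operations."""
    out = df.copy()
    # Drop duplicate columns, i.e., duplicate variables or objects.
    out = out.loc[:, ~out.T.duplicated()]
    for col in out.select_dtypes("number"):
        # Replace null values with the column median (a mean value
        # indicated in the summary could be selected instead).
        out[col] = out[col].fillna(out[col].median())
        # Outliers lie outside 1.5 * IQR, where IQR = Q3 - Q1.
        q1, q3 = out[col].quantile(0.25), out[col].quantile(0.75)
        lower = q1 - 1.5 * (q3 - q1)
        upper = q3 + 1.5 * (q3 - q1)
        # Cap: observations below the lower limit take the value at the
        # 5th percentile; those above the upper limit take the value at
        # the 95th percentile.
        p5, p95 = out[col].quantile(0.05), out[col].quantile(0.95)
        out[col] = out[col].mask(out[col] < lower, p5)
        out[col] = out[col].mask(out[col] > upper, p95)
    return out
```

Imputation with a mean, median, or mode, the alternative treatment described above, would replace the two `mask` calls.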
For instance,FIGS.3A and3Billustrate an example interface300generated by the data visualization module226and associated with generating an ML model generation tool. Aspects of the example interface300shown inFIGS.3A and3Bwill be described in greater detail below. With continued reference toFIG.2, the data correlation discovery module228may be configured to receive the pre-processed dataset202from the data pre-processing module224and identify various relationships among the plurality of variables or objects. For example, based on one or more correlation plots of the dataset202generated by the data visualization module226, the data correlation discovery module228may identify linear dependencies for a given variable or object. The data correlation discovery module228may further identify cross correlations for a given variable or object. Based on the linear dependencies and cross correlations, the data correlation discovery module228may further identify one or more highly-correlated variables or objects with respect to the given variable or object, i.e., the best features of the given variable or object. In implementations, the one or more highly correlated variables or objects may be a pre-set number of highly correlated variables or objects. Alternatively, or additionally, the one or more highly-correlated variables or objects may be determined based on a pre-set threshold. The variables or objects having correlation degrees that exceed the pre-set threshold may be determined as highly-correlated to the target variable or target object. The dimension reduction module230may be configured to receive the pre-processed dataset202from the data pre-processing module224and perform dimension reduction on the dataset202based at least on the highly-correlated variables or objects associated with a target variable or target object. 
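The selection of highly-correlated variables described above can be sketched as follows; the pandas-based ranking and the name `top_correlated` are illustrative assumptions, with `k` standing for the pre-set number of variables and `threshold` for the pre-set correlation threshold:

```python
import pandas as pd

def top_correlated(df: pd.DataFrame, target: str,
                   k: int = 5, threshold: float = 0.0) -> list:
    """Rank variables by absolute correlation with a target variable and
    keep up to k of those exceeding the threshold, i.e., the 'best
    features' of the target variable."""
    corr = df.corr(numeric_only=True)[target].drop(target).abs()
    corr = corr[corr > threshold]
    return corr.sort_values(ascending=False).head(k).index.tolist()
```

For example, a variable that is a scaled copy of the target ranks first, while a weakly correlated variable is excluded by a high threshold.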
The dimension reduction module230may map the original, high-dimensional dataset202to a low-dimensional dataset so that the variance of the data values in the low-dimension representation is maximized. The low-dimensional dataset may be used as a training dataset of a machine learning model. The dimension reduction module230may implement various algorithms to perform dimension reduction on the dataset including, but not limited to, random forest algorithm, K-nearest neighbors algorithm, principal component analysis (PCA), non-negative matrix factorization (NMF), kernel PCA, graph-based kernel PCA, linear discriminant analysis (LDA), generalized discriminant analysis (GDA), single variable logistic regression algorithm, variable clustering algorithm, etc. The model generating device112-B may update the guided user interface to guide the administrator106to select the algorithms for dimension reduction. The initialization module232may be configured to initialize a ML model based on the input of the administrator106via the guided user interface. The model generating device112-B may update the guided user interface to guide the administrator106to select one or more parameters associated with the ML model. For example, and without limitation, the one or more parameters may include an algorithm to be used for the ML model, a target variable or object to be predicted, one or more key features used to predict the target variable, etc. The algorithm to be used for the ML model may include, but is not limited to, supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, feature learning, sparse dictionary learning, etc. The one or more key features may be obtained based on the results from the dimension reduction module230. In implementations, the one or more parameters may further include a parameter k related to k-fold cross-validation of the machine learning model.
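Of the listed algorithms, principal component analysis is perhaps the simplest to illustrate. The sketch below uses scikit-learn (an assumption, since the patent specifies no library) to map a high-dimensional array to a low-dimensional one whose projected variance is maximized:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_dimension(X: np.ndarray, n_components: int):
    """Project the dataset onto n_components principal axes, the
    directions along which the variance of the projected values is
    maximized, and report how much variance each axis captures."""
    pca = PCA(n_components=n_components)
    Z = pca.fit_transform(X)  # the low-dimensional training dataset
    return Z, pca.explained_variance_ratio_
```

The returned `Z` is the reduced dataset that would then be fed to the training module.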
The cross-validation refers to a resampling procedure to evaluate a trained ML model on the training dataset. The parameter k refers to a number of groups that the training dataset is split into. In a 3-fold cross-validation, the training dataset is split into three groups, among which two groups of the training dataset may be used for training and one group of the training dataset may be used for testing. It should be understood that the one or more parameters associated with the ML model described above are merely for illustration purposes. The present disclosure is not intended to be limiting. Once the one or more parameters associated with the ML model are set, the training module234may train the ML model based on the training dataset to generate a trained ML model. The testing module236may validate the trained ML model before the trained ML model is delivered. Once the trained ML model is validated to satisfy a pre-set prediction accuracy, the delivery module238may deliver the trained ML model to be stored in a storage space, e.g., the storage device112-C, or the storage device108. Alternatively, or additionally, the delivery module238may deliver the trained ML model to be implemented on any computing devices, e.g., the one or more user devices104. It should be appreciated that the data summarization module222, the data pre-processing module224, the data visualization module226, the data correlation discovery module228, the dimension reduction module230, the initialization module232, the training module234, the testing module236, and the delivery module238shown inFIG.2are merely for illustration purposes. The functions of one or more of those modules may be integrated into one single module. The present disclosure is not intended to be limiting. FIG.3Aillustrates an example interface for generating an ML model generation tool in accordance with an implementation of the present disclosure.
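The k-fold procedure can be sketched with scikit-learn (an illustrative choice; the patent does not prescribe a library). With k=3, the training dataset is split into three groups, and each group serves once as the held-out test fold:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def kfold_evaluate(X, y, k: int = 3):
    """k-fold cross-validation: train on k-1 groups of the training
    dataset and test on the remaining group, rotating the held-out
    group so that each fold is scored once."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=k)

# Synthetic stand-in for a training dataset.
X, y = make_classification(n_samples=90, n_features=6, random_state=0)
scores = kfold_evaluate(X, y, k=3)  # one accuracy score per fold
```

The mean of the fold scores could be compared against the pre-set prediction accuracy that the testing module checks before delivery.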
The example interface300may be generated by the data visualization module226and provide a guided user interface to guide the administrator106to select the types of the graphic illustrations to present the dataset202. The example interface300may include a guidance window302to facilitate the user to select a variable from the dataset202to generate a histogram of the numerical values associated with the variable. The example interface300may further include a guidance window304to facilitate the user to select multiple variables and generate a box plot and/or a scattered plot of the numerical values associated with the multiple variables. The example interface300may include a guidance window306to facilitate the user to select multiple variables and generate correlation plots associated with the multiple variables. The example interface300provides an interactive window to the user to analyze the dataset202and determine highly-correlated variables to be used for generating the ML model. The example interface300merely illustrates the guided user interface generated during the data visualization process. The example interface300may include different interactive windows during different stages of generating the ML model. By generating the interactive windows in each stage, the model generating device112-B can provide the user with full manipulation of the dataset202and flexibility to determine the algorithms and parameters associated with the ML model. FIG.3Billustrates another example interface for generating an ML model generation tool in accordance with an implementation of the present disclosure. After the user selects the histograms in the guidance window302, the response variable and numeric variable in the guidance window304, and the correlation variables in the guidance window306, the data visualization module226may display the histograms, the box plots, and the correlation associated with the dataset as illustrated by Plot-A, Plot-B, and Plot-C, respectively.
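A matplotlib-based sketch (an assumed toolkit, since the patent names none) of the three illustrations, Plot-A, Plot-B, and Plot-C, might look like the following; the function name is hypothetical:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import pandas as pd

def visualize(df: pd.DataFrame, hist_var: str, box_vars: list):
    """Generate the three illustrations offered by the example
    interface: a histogram of one variable (Plot-A), a box plot of
    several variables (Plot-B), and a correlation plot (Plot-C)."""
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    axes[0].hist(df[hist_var])                       # Plot-A
    axes[0].set_title(f"Histogram of {hist_var}")
    df[box_vars].plot.box(ax=axes[1])                # Plot-B
    corr = df.corr(numeric_only=True)
    im = axes[2].imshow(corr.to_numpy())             # Plot-C
    fig.colorbar(im, ax=axes[2])
    return fig
```

In the described tool, the variable choices passed in here would come from the guidance windows302,304, and306.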
As the selected dataset characteristics are visualized via the guided user interface, the user can efficiently determine the parameters or the variables that are highly correlated to a target object and use only those highly-correlated parameters to generate the ML model. The methods described inFIGS.4-7are described in the general context of machine-executable instructions. Generally, machine-executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types. Furthermore, each of the example methods is illustrated as a collection of blocks in a logical flow graph representing a sequence of operations that can be implemented in hardware, software, firmware, or a combination thereof. The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or alternate methods. Additionally, individual blocks may be omitted from the method without departing from the spirit and scope of the subject matter described herein. In the context of software, the blocks represent computer instructions that, when executed by one or more processors, perform the recited operations. In the context of hardware, some or all of the blocks may represent application specific integrated circuits (ASICs) or other physical components that perform the recited operations. FIG.4illustrates an example flow chart400for generating an ML model generation tool in accordance with an implementation of the present disclosure. At block402, the model generating device112-B may receive, from a computing device, a dataset including a plurality of objects and respective values corresponding to the plurality of objects.
The dataset may be stored in any computer readable format, in which, the plurality of objects may also refer to a plurality of variables. In implementations, the values included in the dataset may represent consumer information associated with a service provider, such as, consumer's age, gender, race, occupation, annual income, products and/or services purchased from the service provider, claims filed and/or processed by the service provider, etc. The model generating device112-B may load the dataset from a storage device connected to a local computer network. Alternatively, or additionally, the model generating device112-B may obtain the dataset from a remote storage space, such as, a cloud storage space, or a third-party storage space, etc. At block404, the model generating device112-B may determine a dimension of the dataset, the dimension including a first dimension of the plurality of objects and a second dimension of the respective values. The model generating device112-B may determine counts of columns and rows that correspond to the dimensions of the dataset. The model generating device112-B may further determine a total count of data items in the dataset. In implementations, the dimension of the dataset may be determined by the data summarization module222of the model generating device112-B. The data summarization module222determines a count of rows and a count of columns of the dataset. The columns of the dataset may denote a plurality of variables or objects and the rows of the dataset may denote respective values corresponding to the plurality of variables or objects. At block406, the model generating device112-B may determine statistic information associated with the dataset. The statistic information may include mean values, median values, standard deviations, distributions that the data items fit into, etc. The model generating device112-B may determine the statistic information for each of the plurality of objects that have numerical values. 
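The dimension and statistic summaries at blocks404-406can be sketched in a few lines of Python (the language the disclosure says is generated in the backend). The dataset layout and the column names below are illustrative assumptions, not part of the disclosure:

```python
import statistics

# Hypothetical dataset: each key is an object/variable (a column) and each
# list holds the respective values (the rows).
dataset = {
    "age":           [34, 41, 29, 55, 47],
    "annual_income": [52000, 61000, 48000, 75000, 69000],
}

# Block 404 sketch: the dimension is the count of columns (objects) and
# rows (values), plus a total count of data items.
num_columns = len(dataset)
num_rows = len(next(iter(dataset.values())))
total_items = num_columns * num_rows

# Block 406 sketch: per-object statistic information for numerical values.
summary = {
    name: {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values),
    }
    for name, values in dataset.items()
}

print(num_columns, num_rows, total_items)   # 2 5 10
print(summary["age"]["median"])             # 41
```

Non-numerical columns would first be digitized, as described below, before the same statistics are computed.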
In implementations, non-numerical values associated with the objects may be digitized and statistic information may be determined based on the digitized values associated with these objects. In implementations, the statistic information associated with the dataset may be determined by the data summarization module222of the model generating device112-B. At block408, the model generating device112-B may determine whether a null value exists in the dataset. If a null value exists in the dataset (block408—Yes), the model generating device112-B may perform null value treatment at block410. The null value treatment may include, but is not limited to, removing the null value from the dataset, or replacing the null value with a pre-set value, e.g., a mean value, a median value, etc. If a null value does not exist in the dataset (block408—No), the model generating device112-B may further determine whether an outlier value exists in the dataset at block412. If an outlier value exists in the dataset (block412—Yes), the model generating device112-B may perform outlier value treatment at block414. The outlier value treatment may include imputations of the outliers with a mean value, a median value, a mode value, etc. Alternatively, or additionally, the outlier value treatment may include capping of the outliers. For values that lie outside the 1.5*IQR limits, the pre-processing operations may cap them by replacing those observations below the lower limit with the 5th percentile value and those observations above the upper limit with the 95th percentile value. If an outlier value does not exist in the dataset (block412—No), the model generating device112-B may proceed directly from block412to block416. At block416, the model generating device (e.g., the model generating device112-B) may generate a pre-processed dataset after the null value and outlier value treatments are performed.
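The null value and outlier value treatments at blocks408-414might be sketched as follows, using only the Python standard library. The sample values and the use of the default (exclusive) quantile method are illustrative assumptions:

```python
import statistics

def treat_nulls(values, strategy="median"):
    """Block 410 sketch: replace None entries with the mean or median
    of the non-null values."""
    present = [v for v in values if v is not None]
    fill = statistics.mean(present) if strategy == "mean" else statistics.median(present)
    return [fill if v is None else v for v in values]

def cap_outliers(values):
    """Block 414 sketch: cap observations outside the 1.5*IQR limits
    at the 5th/95th percentile values."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    pct = statistics.quantiles(values, n=100)   # pct[4] -> 5th, pct[94] -> 95th
    p5, p95 = pct[4], pct[94]
    return [p5 if v < lower else p95 if v > upper else v for v in values]

raw = [None] + list(range(1, 20)) + [500]   # one null and one extreme outlier
cleaned = cap_outliers(treat_nulls(raw))
```

With the sample list above, the None entry is filled with the median of the remaining values and the extreme value 500 is pulled down to the 95th percentile of the filled data.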
In implementations, the operations described with respect to blocks408-416may be performed by the data pre-processing module224of the model generating device112-B. The example method described with respect toFIG.4performs an initial assessment of the dataset, summarizes the dimension and statistic information related to the dataset, and performs treatments on the null values and outlier values in the dataset. The operations described herein help the user to learn the characteristics of the dataset including, but not limited to, data types, data distribution characteristics, missing features and observation count. Training the ML model using the pre-processed dataset (i.e., with removed null values and/or replaced outlier values) also improves the prediction outcome of the ML model. FIG.5illustrates another example flow chart500for generating an ML model generation tool in accordance with an implementation of the present disclosure. At block502, the model generating device112-B may receive, at a guided user interface, a selection of a first object from the plurality of objects. A user (e.g., the administrator106) may select the first object from the plurality of objects and identify one or more second objects that are highly correlated to the first object. In implementations, the operation of block502may be performed by the data visualization module226of the model generating device112-B. At block504, the model generating device112-B may receive, at the guided user interface, selections of one or more parameters for presenting data associated with the first object in a visual format. The one or more parameters may include the visual formats for presenting data, such as, histograms of the dataset, box plots of the dataset, pie plots of the dataset, correlation plots of the dataset, scattered plots of the dataset, etc. 
In implementations, the one or more parameters may further include a list of objects that the user can choose from to observe the correlations between the objects. In implementations, the operation of block504may be performed by the data visualization module226of the model generating device112-B. At block506, the model generating device112-B may determine influence degrees between the first object and other objects based at least in part on the presenting of data associated with the first object in the visual format. The correlations between the objects may be represented as a correlation matrix having a plurality of correlation coefficients. The greater a correlation coefficient, the higher the correlation between two objects. For the given first object, other objects that have greater correlation coefficients may be determined as having higher influence degrees therebetween. In implementations, the operation of block506may be performed by the data correlation discovery module228of the model generating device112-B. At block508, the model generating device112-B may select a number of second objects from the other objects based at least in part on the influence degrees. The model generating device112-B may select the number of second objects based on a pre-set threshold related to the influence degrees. Alternatively, or additionally, the model generating device112-B may select a pre-set top number of second objects based on the ranked influence degrees. In implementations, the operation of block508may be performed by the data correlation discovery module228of the model generating device112-B. At block510, the model generating device112-B may determine one or more key features associated with the first object based on the count of second objects. The one or more key features may refer to at least part of the second objects that influence the prediction outcome with respect to the first object.
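The influence degree computation at blocks506-508can be sketched with a plain Pearson correlation coefficient. The object names, the sample values, and the choice of absolute correlation as the influence degree are illustrative assumptions:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two value lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical objects; "claims" plays the role of the first (target) object.
data = {
    "claims": [1, 2, 3, 4, 5],
    "age":    [20, 30, 40, 50, 60],   # perfectly correlated with claims
    "zip":    [5, 1, 4, 2, 3],        # weakly correlated with claims
}

target = data["claims"]
# Block 506 sketch: influence degree = absolute correlation with the target.
influence = {
    name: abs(pearson(values, target))
    for name, values in data.items() if name != "claims"
}
# Block 508 sketch: keep the top-k second objects by ranked influence degree.
k = 1
key_features = sorted(influence, key=influence.get, reverse=True)[:k]
print(key_features)   # ['age']
```

A full implementation would build the complete correlation matrix over all object pairs; this sketch only ranks other objects against one target.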
In implementations, the operation of block510may be performed by the data correlation discovery module228of the model generating device112-B. The example method described with respect toFIG.5explores the relationships among the plurality of variables in the dataset. Given a target variable, the example method determines one or more variables highly-related to the target variable. The ML model with respect to the target variable can be trained using the numerical values associated with the one or more highly-related variables to achieve better prediction performance. FIG.6illustrates another example flow chart600for generating an ML model generation tool in accordance with an implementation of the present disclosure. At block602, the model generating device112-B may obtain the dataset including a plurality of objects and respective values corresponding to the plurality of objects. The dataset may include any combinations of the data stored on the storage device112-C of the service provider112, the data114-1from the one or more user devices104, the data114-2from the one or more cloud devices110, or the data114-3from the one or more storage devices108, etc. In implementations, the operation of block602may be performed by the data summarization module222of the model generating device112-B. The operation described at block602may be caused by a user operation on a guided user interface (GUI) of the ML model generation tool. For example, the user may select, via the GUI, a dataset from a data resource and load the dataset to the local storage. The data resource may be located in a local storage or a remote storage. The user selection may generate a call to an application program interface (API), through which, the data summarization module222communicates with the data resource to retrieve the dataset. At block604, the model generating device112-B may perform dimension reduction on the dataset to generate a data subset.
The model generating device112-B may implement various algorithms to perform dimension reduction on the dataset, such as, random forest algorithm, K-nearest neighbors algorithm, principal component analysis (PCA), single variable logistic regression algorithm, variable clustering algorithm, etc. The model generating device (e.g., the model generating device112-B) may update the guided user interface to facilitate the user to choose the algorithm for dimension reduction. The data subset, i.e., the low-dimension data subset, may be stored in a storage device and/or a storage space. In implementations, the operation of block604may be performed by the dimension reduction module230of the model generating device112-B. The operation described at block604may be caused by a subsequent user operation on the guided user interface (GUI) of the ML model generation tool. In some examples, the GUI of the ML model generation tool may provide a plurality of available dimension reduction algorithms for the user to choose from. When the user operates on the GUI and makes a selection of the dimension reduction algorithm, a subsequent call to an API is generated. The subsequent call to the API causes the dimension reduction module230to perform dimension reduction on the dataset using the selected dimension reduction algorithm. At block606, the model generating device112-B may divide the data subset into at least a training subset and a testing subset. For example, the data subset, i.e., the low-dimension data subset, may be split into three subsets, among which, two subsets of the data subset may be used for training and one subset of the data subset may be used for testing. It should be understood that the model generating device112-B may divide the data subset into any number of subsets for training and testing. The present disclosure is not intended to be limiting.
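The three-way split at block606can be sketched as a simple shuffle-and-fold. The fold count and the fixed seed are illustrative assumptions:

```python
import random

def split_dataset(rows, n_splits=3, seed=0):
    """Block 606 sketch: shuffle and divide rows into n_splits folds;
    the first n_splits - 1 folds train the model, the last fold tests it."""
    rows = rows[:]                      # keep the caller's list intact
    random.Random(seed).shuffle(rows)
    folds = [rows[i::n_splits] for i in range(n_splits)]
    training = [row for fold in folds[:-1] for row in fold]
    testing = folds[-1]
    return training, testing

train, test = split_dataset(list(range(9)))
```

Rotating which fold serves as the testing subset would turn this into the k-fold cross-validation that the guided user interface exposes as a parameter.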
The user may select the parameter related to k-fold cross-validation on the guided user interface (GUI) of the ML model generation tool to define the split of the training subset and testing subset. At block608, the model generating device112-B may receive, at the guided user interface, a selection of an algorithm to construct a ML model. The model generating device112-B may update the guided user interface to guide the user to select the algorithm for the ML model. The algorithm to be used for the ML model may include supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, feature learning, sparse dictionary learning, etc. In some examples, the selection may include a combination of different algorithms for the ML model. In implementations, the operation of block608may be performed by the initialization module232of the model generating device112-B. At block610, the model generating device112-B may train the ML model based at least in part on the training subset to generate a trained ML model with respect to the first object. The low-dimension data subset filters out the objects and the associated values that are less influential to the first object and contains the objects and the associated values that are highly related to the first object. The operation described at block610may be triggered by a user operation on the GUI of the ML model generation tool to train the ML model. In implementations, the operation of block610may be performed by the training module234and the testing module236of the model generating device112-B. At block612, the model generating device112-B may test the machine learning model based at least in part on the testing subset to validate accuracy of the machine learning model with respect to the first object. The model generating device112-B may use at least part of the testing subset as an input to the machine learning model to predict an output.
The model generating device112-B may compare the output with the corresponding value indicated in the testing subset to determine the accuracy of the machine learning model. When the difference between the output and the corresponding value indicated in the testing subset is no greater than a pre-set threshold, the model generating device112-B may determine that the machine learning model satisfies the accuracy requirement. The operation described at block612may be triggered by a user operation on the GUI of the ML model generation tool to test the ML model. At block614, the model generating device112-B may store the trained ML model with respect to the first object on a database. The trained ML model may be stored in a local storage device connected to the computer network of the service provider. Alternatively, or additionally, the trained ML model may be stored in a cloud storage space or a third-party storage space. In implementations, the operation of block614may be performed by the delivery module238of the model generating device112-B. The operation described at block614may be triggered by a user operation on the GUI of the ML model generation tool to store the trained ML model. The GUI of the ML model generation tool may provide the locations to store the ML model. The user may select storing the ML model in a local computing device or a remote/cloud storage device. In some examples, the GUI of the ML model generation tool may enable the user to implement the trained ML model on a cloud-based computing device to be distributed to any client devices connected to the network. The GUI of the ML model generation tool may also enable the user to define the privilege level of using the ML model, e.g., whether a user can modify the trained ML model, override the trained ML model, or build a new ML model, etc. The example method described with respect toFIG.6transforms the high-dimensional dataset to a low-dimensional dataset for the ML model training.
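The threshold-based accuracy check described for block612above might be sketched as follows; the toy model and the threshold value are assumptions for illustration only:

```python
def validate_accuracy(model, testing_subset, threshold=0.5):
    """Block 612 sketch: predict on held-out rows and count predictions
    within a pre-set threshold of the recorded value."""
    hits = 0
    for features, expected in testing_subset:
        output = model(features)
        if abs(output - expected) <= threshold:   # pre-set threshold check
            hits += 1
    return hits / len(testing_subset)

# Hypothetical trained model: predicts the sum of the feature values.
model = lambda features: sum(features)
testing_subset = [([1, 2], 3.1), ([2, 2], 4.0), ([0, 1], 9.0)]
accuracy = validate_accuracy(model, testing_subset)
print(accuracy)   # 2 of 3 predictions within the threshold -> 0.666...
```

A production check would also report per-row errors so the user can decide, via the GUI, whether to retrain with different parameters.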
The dimension reduction on the dataset improves the speed and efficiency of the ML model training. Further, the dimension reduction on the dataset improves the prediction performance as the dimension reduction yields the highly-related variables but eliminates less-related variables. FIG.7illustrates another example flow chart700for generating the ML model generation tool in accordance with an implementation of the present disclosure. At block702, the model generating device112-B may receive a request to predict a target value associated with a target object, the request including a new dataset. The request to predict a target value may be received from the service provider or a third-party affiliated with the service provider. The new dataset may be in the same dimensions as the dataset used for training the ML model. In implementations, the new dataset may be in different dimensions from the dataset used for training the ML model. The operation described at block702may be caused by a user operation on a guided user interface (GUI) of the ML model generation tool. The user operation may generate a call to an application program interface (API), through which, the data summarization module222communicates with the data resource to retrieve the new dataset. In implementations, the operation of block702may be performed by the data summarization module222of the model generating device112-B. At block704, the model generating device112-B may determine whether the ML model exists. If the ML model exists (block704—Yes), the model generating device112-B may obtain the ML model with respect to the target object at block712. The model generating device112-B may obtain the ML model with respect to the target object from a local storage device and/or from a remote storage space via the network. The ML model may be previously trained using historical data and stored in the local storage device and/or the remote storage space.
If the trained ML model does not exist (block704—No), the model generating device112-B may construct the ML model in real-time based on the user inputs on the guided user interface at block706. At block708, the model generating device112-B may train the ML model based at least on a historical dataset to generate a ML model with respect to the target object. The historical dataset may be retrieved from the storage device112-C, the one or more storage devices108, the one or more cloud devices110, etc. At block710, the model generating device112-B may store the trained ML model with respect to the target object on a database. Details of constructing and training the ML model are described above in connection withFIGS.4-6, and therefore, are not repeated herein. In implementations, the operations of blocks704-710may be performed by one or more of the data summarization module222, the data pre-processing module224, the data visualization module226, the data correlation discovery module228, the dimension reduction module230, the initialization module232, the training module234, the testing module236, or the delivery module238of the model generating device112-B. At block714, the model generating device112-B may receive, at the guided user interface, inputs of one or more parameters associated with the ML model. For example, and without limitation, the one or more parameters may include an algorithm to be used for the ML model, a target variable or object to be predicted, one or more key features used to predict the target variable, etc. The model generating device112-B may update the guided user interface to facilitate the user (e.g., the administrator106) to choose different parameters to achieve better prediction results. At block716, the model generating device112-B may compute the target value based at least in part on the trained ML model with respect to the target object and the one or more parameters.
In implementations, the trained ML model may be periodically re-trained based on an updated dataset. For example, one or more parameters associated with the ML model and/or the trained ML model may be adjusted to predict different target objects. The re-trained ML model may be transmitted over the network102to be stored in the storage device or the storage space. In implementations, the prediction outcome with respect to a target object using the ML model may be provided to the service provider112or a third-party service provider. Various prediction outcomes with respect to a target object may also be available for comparison and decision making. The ML model generating methods and systems described herein provide a web-based application that facilitates guided data assessment and discovery of data features impacting a target variable. Rather than hiring dedicated data scientists to analyze the data or using complex tools designed by vendors, the present disclosure provides a guided user interface to guide the user to configure the algorithms, parameters, and variables to generate an ML model. The present disclosure dynamically generates Python programs related to the ML model in a backend computer based on the user's inputs and/or selections through the guided user interface of the web-based application. The present disclosure eliminates time-consuming operations to manually run through each variable in the dataset to identify correlations among the variables. Further, the present disclosure also improves the efficiency of developing new ML models with respect to new target variables and/or modifying existing ML models. In some instances, one or more components may be referred to herein as "configured to," "configurable to," "operable/operative to," "adapted/adaptable," "able to," "conformable/conformed to," etc.
Those skilled in the art will recognize that such terms (e.g., "configured to") can generally encompass active-state components and/or inactive-state components and/or standby-state components, unless context requires otherwise. As used herein, the term "based on" can be used synonymously with "based, at least in part, on" and "based at least partly on." As used herein, the terms "comprises/comprising/comprised" and "includes/including/included," and their equivalents, can be used interchangeably. An apparatus, system, or method that "comprises A, B, and C" includes A, B, and C, but also can include other components (e.g., D) as well. That is, the apparatus, system, or method is not limited to components A, B, and C. While the invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention. Although the application describes implementations having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative of some implementations that fall within the scope of the claims of the application.
11861471 | DETAILED DESCRIPTION Embodiments disclosed herein include a computer vision model that identifies a combination of graphic elements present in a query image based on a support set of images that include other various combinations of the graphic features. The term "few-shot" refers to a model that is trained to interpret a few sources of input data that the model has not necessarily observed before. Few-shot is shorthand for stating that the model has "a few shots" to determine what the user is seeking. "A few" does not necessarily refer to "three" as is often applied, but a relatively small number when compared to other models known in the art. Few-shot learning (FSL) refers to the training of machine learning algorithms using a very small set of training data (e.g., a handful of images), as opposed to the very large set that is more often used. FSL commonly applies to the field of computer vision, where it is desirable for an object categorization model to give appropriate results without thousands of training examples. For example, where a system categorizes bird species from photos, some rare species of birds may lack enough labeled pictures to be used as training images. Consequently, if there is a classifier for bird images, with the insufficient amount of the dataset, a solution would employ FSL. In some embodiments, a few-shot model uses 10 or fewer input examples, 20 or fewer, 100 or fewer input examples, or 5-7 input examples. When applied to graphic feature identification, the number of input examples may be directly correlated with the number of graphic features that are possible in queries.
The referenced input examples differ from those the model is trained with in that the examples used during the few-shot do not necessarily have any relationship (with the exception of having a comparable data type, like the use of ASCII characters, or image data). The training of the model is premised on teaching the model how to quickly adapt to new training examples, rather than to recognize a given input strictly based on examples that it has seen during training. Rather than evaluate individual inputs, the few-shot model is trained to evaluate few-shots, specifically the relationships that exist between the various examples within the few-shot. An example embodiment of the present disclosure is that of evaluating which graphic features of a set of graphic features appear in a query image. The few-shot may include a set of examples, such as a set of forms with various check boxes clicked (e.g., a pre-existing condition form). A model determines commonality between the query image and the support set (e.g., are there check boxes that match those in the support set?). A derivation of the exact graphic features present in the query image is based on identified overlap of graphic features of images in the support set. Previous work on few-shot learning requires that each example in the support set (examples for the model to adapt quickly to) contain only a single label. For example, suppose a model can quickly learn to classify images of a rare bird species. Prior work requires that each image in the support set contain a single bird. Other work relating to few-shot models and relation network models includes the following references:
Yutian Chen, Yannis M. Assael, Brendan Shillingford, David Budden, Scott E. Reed, Heiga Zen, Quan Wang, Luis C. Cobo, Andrew Trask, Ben Laurie, Çaglar Gülçehre, Aäron van den Oord, Oriol Vinyals, and Nando de Freitas. Sample Efficient Adaptive Text-to-Speech. CoRR, abs/1809.10460, 2018.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. CoRR, abs/1703.03400, 2017.
Gregory R. Koch. Siamese Neural Networks for One-Shot Image Recognition. 2015.
Scott E. Reed, Yutian Chen, Thomas Paine, Aaron van den Oord, S. M. Ali Eslami, Danilo Jimenez Rezende, Oriol Vinyals, and Nando de Freitas. Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions. CoRR, abs/1710.10304, 2017.
Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A Unified Embedding for Face Recognition and Clustering. CoRR, abs/1503.03832, 2015.
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to Compare: Relation Network for Few-shot Learning. CoRR, abs/1711.06025, 2017.
Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching Networks for One Shot Learning. CoRR, abs/1606.04080, 2016.
FIG.1is a flowchart illustrating a method of deriving a combination of graphical features present in a query image. In step102, a graphic features model receives a support set of images including known graphic features and generates graphic features vectors. The graphic features vectors are representations of the corresponding images within the support set. The graphic features model generates the graphic features vectors to be reflective of the graphic features of those images. In some embodiments, the graphic features vectors are binary. The same graphic features model is used to identify graphic features of a query image. In some embodiments the receipt of the support set is supervised in that the graphic features model is informed what the relevant graphic features of the support set are. In some embodiments, the graphic features model is unsupervised, and the graphic features vectors associated with the support set are interpreted at a later step based on the known content of the support set.
In step104, the graphic features model receives a query image and generates a query vector. The graphic features model similarly vectorizes the query image. The query vector includes data reflective of the graphic features of the query image. In step106, the image identification system concatenates the query vector to each of the graphic features vectors. In step108, a relation network model receives the concatenated vectors. In step110, the relation network model generates an overlapping features vector from the combination of the concatenated vectors. The overlapping features vector includes data reflective of a number of graphic features that the query image has in common with each of the respective support set images. In step112, the image recognition system generates a support set features matrix and inverts that matrix. The support set features matrix includes data reflective of the graphic features included in the whole of the support set. In some embodiments, the graphic features matrix is a combination of support set graphic features vectors combined as rows in the matrix. Because the support set matrix is inverted, the matrix must have a rank equal to the number of categories (full rank matrix). In cases where the matrix is not full rank, or in cases where there are more images than needed for full rank, the pseudo-inverse can be used instead. However, without a full-rank matrix, the problem can no longer be solved deterministically. In step114, the image recognition system derives the graphical features present in the query image based on a relationship between the support set matrix and the overlapping features vector. Multiplying the features of the query image by the support set matrix generates an overlapping features vector. Thus, multiplying the overlapping features vector by an inverted version of the support set matrix generates a vector indicating the graphical features in the query image.
FIG.2is an illustration of a sample few-shot model20configured to derive graphic features. The sample illustrated is a simplistic implementation utilizing relatively few and easy-to-recognize graphic features. This disclosure is not limited to such simple implementations and the relevant models may be configured to operate and identify more complex sets of graphic features. In the example, Model A20is a few-shot model designed to identify and categorize graphic features that are received. In some embodiments, Model A20is configured with a set list of graphic features to observe (indicated by a graphic feature matrix). In other embodiments, Model A20includes no explanation of what a support set includes and instead merely identifies similar patterns in pixels. Few-shot models that describe identification of a similar "language" where the language may be letters, or pictures or any like-with-like manner of representing information, are disclosed in co-pending U.S. patent application Ser. No. 16/413,159, entitled "FEW-SHOT LANGUAGE MODEL TRAINING AND IMPLEMENTATION" and filed on May 15, 2019. The illustration ofFIG.2includes a three-image support set22,24,26and a single query image28. The images include some combination of three graphical features depicting a frog, a cat, or a dog. When each image22,24,26,28is supplied to Model A20, Model A20generates a respective vector that describes the image content. Each vector30,32,34,36includes a set of dimensions that together are indicative of the graphic content of the images22,24,26,28. Image A22corresponds to Vector A30. Image B24corresponds to Vector B32. Image C26corresponds to Vector C34. The query image28corresponds to the query vector36. In some embodiments, the support set vectors30,32,34and the query vector36are 128 dimensions in length. Dimensions may relate directly to graphical features on a one-to-one basis, or multiple dimensions may be used to describe a given graphic feature.
As depicted in the figure, the query image28does not include a combination of graphic features that exists in any of the support set. Each feature exists in the support set, but not necessarily by itself, or in exactly the same combination. While a human observer can readily identify the content of the query image, the image identification system must be taught to do so via few-shot models. FIG.3is an illustration of a graphic features matrix38as corresponding to a support set22,24,26. In some embodiments, the graphic features matrix38is provided as input into Model A as a binary truth table illustrating the presence of graphic features in support set images. In some embodiments, where the support set vectors30,32,34are also binary, combining the corresponding vectors30,32,34generated for the support set22,24,26as rows produces the same matrix38. As evident fromFIG.3, the graphic features matrix38is a binary matrix where columns reference specific graphic features and rows refer to images. A cell that includes a "1" indicates that the corresponding image includes the corresponding graphic feature. While the illustrated support set includes only images A, B and C, any number of images (n) could be supplied in a support set. Similarly, three graphic features are depicted in the figure, but any number of graphic features (N) may be included. The graphic features matrix38is full-rank. The matrix38is either invertible or pseudo-invertible. The ability to invert or pseudo-invert the graphic features matrix38is the only restriction on the values of "n" or "N." Image A22includes a frog and a dog; thus the graphic features matrix38indicates that each of those features is present. Similar data is included regarding image B24and Image C26. The row depicting the data included in the query image28is not a part of the graphic features matrix38and is therefore not subject to the inversion requirement of the matrix38.
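The truth-table structure ofFIG.3can be sketched in code. The following Python sketch builds the binary matrix from per-image feature sets and checks the full-rank condition that the inversion step requires. The feature compositions of images B and C are assumptions chosen to be consistent with the figures, since the text only fixes image A exactly:

```python
from fractions import Fraction

def features_matrix(images, feature_order):
    """Binary truth table: one row per support image, one column per feature."""
    return [[1 if f in feats else 0 for f in feature_order] for feats in images.values()]

def rank(M):
    """Rank via row-echelon reduction over exact rationals (no float error)."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, len(M)):
            if M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Image A is fixed by the text; B and C are one consistent assignment.
support = {"A": {"frog", "dog"}, "B": {"frog", "cat"}, "C": {"cat"}}
order = ["frog", "cat", "dog"]
M = features_matrix(support, order)
print(M)              # [[1, 0, 1], [1, 1, 0], [0, 1, 0]]
print(rank(M) == 3)   # True: full rank, so the matrix is invertible
```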
The image identification system is limited to identifying graphic features that exist in the support set. Graphic features that exist external to the support set cannot be identified. For example, if the query image included a cow graphic feature, Model A20(and subsequent models) would identify the existence of a graphic feature, but without a cow present in the support set, the models would be unable to determine that the present graphic feature was a cow. In some embodiments, the graphic features matrix38includes an additional unknown graphic feature to accommodate the potential that the query image28includes graphic features that are not present within the support set22,24,26. FIG.4is an illustration of a sample relation network model40generating a pairwise comparison42of the sample set22,24,26and the query image28. Model B40is a relation network that performs a pairwise comparison. To prepare input for Model B40, the query vector36is concatenated to each of the vectors associated with the support set30,32,34. The concatenated vectors are input into Model B40together. In embodiments where the vectors30,32,34,36are 128 dimensions in length, the concatenated vectors are 256 dimensions in length. Model B40is a relation network model and performs a pairwise comparison of the components of the concatenated vectors. Each concatenated vector corresponds to a resulting pairwise comparison vector42. The pairwise comparison vector42includes a signal of how similar the query image28is to the corresponding support set vector30,32,34. In some embodiments, a combination of each pairwise comparison vector42(into a matrix) is multipliable with the inverse of the graphic features matrix38. In some embodiments, the pairwise comparison vector42indicates a number of overlapping features between the query image28and the respective support set image22,24,26. Where the pairwise comparison vector42indicates the number of overlaps, the pairwise comparison vector42has a length of 1.
In an example where each pairwise comparison vector42indicates the number of graphic feature overlaps, the query image28includes one overlapped graphic feature with each support set image22,24,26. Both the query image28and image A22include a dog (one overlap). Both the query image28and image B24include a cat (one overlap). Both the query image28and image C26include a cat (one overlap). In the example, a combination of each pairwise comparison vector42into a pairwise comparison matrix43is (1,1,1). While this particular pairwise comparison matrix43has width 1 and could be described as a vector, the width is not necessarily fixed at 1, and in other examples would not be 1. The pairwise comparison vector42or matrix43is not necessarily binary. Where there are multiple overlaps, the overlap count cannot be represented by a single bit. In some embodiments, a given graphical feature is not necessarily represented by a single integer. Similarly, in some embodiments, the pairwise comparison vector42does not indicate a single pairwise comparison between a given support set image and the query image28with a single cell/position in the pairwise comparison vector42. A one-to-one correspondence is used in the figures merely to illustrate an example. In other embodiments, the pairwise comparison vector42has an arbitrary length including sufficient elements to describe a similarity signal between the relevant components of the input concatenated vector. In some embodiments, the arbitrary length matches that of the query vector36and the support set vectors30,32,34(e.g., a length of 128). FIG.5is an illustration of a derivation of the combination of graphical features present in a query image28.
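The overlap counts in this example can be reproduced with simple set intersections. A minimal sketch, assuming the feature sets shown (images B and C are one assignment consistent with the figure; the text only fixes image A exactly):

```python
def pairwise_overlaps(support_sets, query_set):
    """Count the graphic features each support image shares with the query."""
    return [len(feats & query_set) for feats in support_sets]

# Feature sets for images A, B and C (B and C assumed), query has cat + dog.
support = [{"frog", "dog"}, {"frog", "cat"}, {"cat"}]
print(pairwise_overlaps(support, {"cat", "dog"}))  # [1, 1, 1]
```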
Given that: (1) The graphic features matrix38, representing the graphical features present in the support set, is [A]; (2) The unknown or uninterpreted vector representing the combination of graphical features present in the query image28is [B]; and (3) a matrix43indicating a degree of similarity between graphic features of the query image28and a support set of images22,24,26is [C] (in some embodiments [C] indicates a number of overlaps); then [A]×[B]=[C]. However, [B] is not initially known and is what the model ultimately predicts. To solve for [B], the relevant equation is [A]−1×[C]=[B]. Where an inverse of [A] is unavailable, a pseudo-inverse is used instead. Where the pairwise comparison vectors42and the subsequent pairwise comparison matrix43describe a degree of similarity (as opposed to a simple count of overlaps), [A]−1serves as a disentangling signal for [C]. The resultant [B] is a partial product (not in the same format as [A]) and is subjected to further processing. The additional processing is performed by a projection model (a third neural network). Thus, to determine or interpret the combination of features in the query image28, the image identification system first inverts the graphic features matrix38. The inverted graphic features matrix44is multiplied by the pairwise comparison vector42. The product is the query solution vector46. Where no inversion of the graphic features matrix38exists, a pseudo-inverse is performed instead. In some embodiments, obtaining the query solution vector46involves additional processing. Processing depends on the configured outputs of Model A20and Model B40. Given information indicating the presence of graphical features in a support set and information indicating similarity between graphical features of a query image and individual support set images, a few-shot learning system is enabled to derive the combination of graphical features in the query image.
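The equation [A]−1×[C]=[B] can be worked end to end on the example ofFIGS.2-5. The sketch below solves the linear system exactly rather than forming an explicit inverse; the feature compositions of support images B and C are assumptions chosen to be consistent with the figures:

```python
from fractions import Fraction

def solve(A, c):
    """Solve [A] x = [C] exactly by Gauss-Jordan elimination with pivoting.

    Raises ValueError if [A] is singular (not full rank), the case in which
    the pseudo-inverse fallback described above would be needed instead.
    """
    n = len(A)
    # Augmented matrix [A | c] over exact rationals, so results are exact.
    M = [[Fraction(v) for v in row] + [Fraction(ci)] for row, ci in zip(A, c)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular; use a pseudo-inverse")
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[r][n] / M[r][r] for r in range(n)]

# Features ordered (frog, cat, dog).  Image A = frog + dog (per the text);
# image B = frog + cat and image C = cat are assumed, consistent with FIG. 3.
A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 0]]
overlaps = [1, 1, 1]          # [C]: one shared feature with each support image
query = solve(A, overlaps)    # [B] = [A]^-1 x [C]
print(query)                  # [Fraction(0, 1), Fraction(1, 1), Fraction(1, 1)] -> cat + dog
```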
The inverted graphical features matrix44and the pairwise comparison vector42may require additional post processing in order to derive the query solution vector46. In some embodiments, the query solution vector46is subjected to further post processing to conform to a format of the graphical features matrix38(e.g., become human readable). In some embodiments, the query vector36is an uninterpreted version of the query solution vector46. The support set images22,24,26include metadata that indicate the graphical features present, whereas the query image28does not. The disclosed system and method solve for the difference. Where the pairwise comparison vector42is 128 dimensions and the graphical features matrix38is 128×128 dimensions, the query solution vector46is also 128 dimensions and does not necessarily include a one-to-one correlation between individual bits and graphical features. FIG.6is a block diagram illustrating a projection model applied to the query solution46. Where the length of the pairwise comparison vectors42is arbitrary, and the vectors include data describing a degree of similarity between the query image28and each support set image22,24,26, the above described equations require further manipulation and post processing. For example, where the graphic features matrix38is 3×3 and the pairwise comparison matrix43is 3×128 (e.g., comprising three pairwise comparison vectors42of length128), the resultant matrix is 3×128. That resultant matrix is not in the same format as the graphic features matrix38(e.g., it cannot be appended to the bottom of the graphic features matrix38and be used as a table illustrating features present in the query). A third model, Model C48, is used to project the query solution vector46into a projected query solution50.
Model C48is a neural network configured to project the data contained within the query solution vector46into a binary space that corresponds with the graphic features matrix38(e.g., in the illustrated example, that would correspond to a 3×1 matrix). The projected query solution50may be appended as an additional row on the graphic features matrix, thereby creating an appended graphic features matrix52that may be read as a truth table regarding the graphic features present in all images. In some embodiments, Model C48multiplies the [number of support set images]×[number of dimensions] matrix (e.g., 3×128) by a [number of dimensions]×1 matrix (e.g., 128×1) in order to project the query solution into a projected query solution50of a preferred size. Appending the projected query solution50to the graphic features matrix38is provided as an illustrative example indicating that the technique herein identifies the graphic content of the query image. It is unnecessary for the graphic content of the query to be represented in exactly the above described human readable format. Other human readable formats are suitable. The projected query solution50may be in any format that enables both a human and a computer to make actionable choices on the information. FIG.7is a depiction of a form template54that the present disclosure may be applied to. The sample form template54illustrated is one indicating pre-existing conditions in a medical context. This form is filled out by indicating via check boxes56whether the relevant person has the listed conditions. When processing a large number of filled-out versions of the form template54, given a support set of filled-out forms that includes each checkbox marked at least once, the few-shot image identification model may identify which check boxes are marked in any number of unidentified forms in an efficient manner. A human-intensive or more computationally complex computer vision process need only be used to generate the support set.
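Model C48is a trained neural network; as a stand-in, the projection of a real-valued query solution into the binary space of the graphic features matrix can be illustrated with a simple threshold. The function name, the threshold value, and the input values below are illustrative assumptions only, not the disclosed model:

```python
def project_to_binary(query_solution, threshold=0.5):
    """Illustrative stand-in for Model C: squash a real-valued query solution
    into the binary truth-table space of the graphic features matrix.  The
    actual Model C is a trained neural network; thresholding is only a proxy."""
    return [1 if v >= threshold else 0 for v in query_solution]

# Hypothetical real-valued solution over (frog, cat, dog):
appended_row = project_to_binary([0.04, 0.91, 0.88])
print(appended_row)  # [0, 1, 1] -> row appended to the graphic features matrix
```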
FIG.8is a high-level block diagram showing an example of a processing device800that can represent a system to run any of the methods/algorithms described above. A system may include two or more processing devices such as represented inFIG.8, which may be coupled to each other via a network or multiple networks. A network can be referred to as a communication network. In the illustrated embodiment, the processing device800includes one or more processors810, memory811, a communication device812, and one or more input/output (I/O) devices813, all coupled to each other through an interconnect814. The interconnect814may be or include one or more conductive traces, buses, point-to-point connections, controllers, scanners, adapters and/or other conventional connection devices. Each processor810may be or include, for example, one or more general-purpose programmable microprocessors or microprocessor cores, microcontrollers, application specific integrated circuits (ASICs), programmable gate arrays, or the like, or a combination of such devices. The processor(s)810control the overall operation of the processing device800. Memory811may be or include one or more physical storage devices, which may be in the form of random access memory (RAM), read-only memory (ROM) (which may be erasable and programmable), flash memory, miniature hard disk drive, or other suitable type of storage device, or a combination of such devices. Memory811may store data and instructions that configure the processor(s)810to execute operations in accordance with the techniques described above. The communication device812may be or include, for example, an Ethernet adapter, cable modem, Wi-Fi adapter, cellular transceiver, Bluetooth transceiver, or the like, or a combination thereof. 
Depending on the specific nature and purpose of the processing device800, the I/O devices813can include devices such as a display (which may be a touch screen display), audio speaker, keyboard, mouse or other pointing device, microphone, camera, etc. Unless contrary to physical possibility, it is envisioned that (i) the methods/steps described above may be performed in any sequence and/or in any combination, and that (ii) the components of respective embodiments may be combined in any manner. The techniques introduced above can be implemented by programmable circuitry programmed/configured by software and/or firmware, or entirely by special-purpose circuitry, or by a combination of such forms. Such special-purpose circuitry (if any) can be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc. Software or firmware to implement the techniques introduced here may be stored on a machine-readable storage medium and may be executed by one or more general-purpose or special-purpose programmable microprocessors. A “machine-readable medium”, as the term is used herein, includes any mechanism that can store information in a form accessible by a machine (a machine may be, for example, a computer, network device, cellular phone, personal digital assistant (PDA), manufacturing tool, any device with one or more processors, etc.). For example, a machine-accessible medium includes recordable/non-recordable media (e.g., read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; etc.), etc. Physical and functional components (e.g., devices, engines, modules, and data repositories, etc.) associated with processing device800can be implemented as circuitry, firmware, software, other executable instructions, or any combination thereof. 
For example, the functional components can be implemented in the form of special-purpose circuitry, in the form of one or more appropriately programmed processors, a single board chip, a field programmable gate array, a general-purpose computing device configured by executable instructions, a virtual machine configured by executable instructions, a cloud computing environment configured by executable instructions, or any combination thereof. For example, the functional components described can be implemented as instructions on a tangible storage memory capable of being executed by a processor or other integrated circuit chip (e.g., software, software libraries, application program interfaces, etc.). The tangible storage memory can be computer readable data storage. The tangible storage memory may be volatile or non-volatile memory. In some embodiments, the volatile memory may be considered "non-transitory" in the sense that it is not a transitory signal. Memory space and storages described in the figures can be implemented with the tangible storage memory as well, including volatile or non-volatile memory. Note that any and all of the embodiments described above can be combined with each other, except to the extent that it may be stated otherwise above or to the extent that any such embodiments might be mutually exclusive in function and/or structure. Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.
11861472

DETAILED DESCRIPTION OF THE DISCLOSURE Again, the present disclosure relates to systems and methods for a Machine Learning (ML) model abstraction layer for runtime efficiency. Specifically, the present disclosure provides a model abstraction layer that is used to serve a machine learning model in production. The model abstraction layer includes removing training information that is not relevant to runtime, e.g., hyperparameters. The model abstraction layer is algorithm and programming language agnostic, supporting any architecture. With this approach, the training process is decoupled from the runtime process, leading to a lean, purpose-built model for runtime. Also, the present disclosure relates to systems and methods utilizing Machine Learning (ML) for smart quarantining of files, such as for scanning, sandboxing, etc. in a cloud-based system. Specifically, the present disclosure presents a smart quarantine with a goal of minimizing the number of files quarantined, the number of malicious files passed through to an end user, and the number of files scanned by a sandbox. In minimizing each of these metrics, the smart quarantine provides better UX relative to conventional approaches, lower risk as only the risky files are scanned/quarantined, and lower cost as the sandbox focuses only on files of interest as detected by machine learning. The present disclosure can be implemented in an antivirus program resident on a user device, in a standalone sandbox, in a security appliance, and/or through a cloud-based system offering security-as-a-service. Example Cloud-Based System FIG.1Ais a network diagram of a cloud-based system100offering security as a service. Specifically, the cloud-based system100can offer a Secure Internet and Web Gateway as a service to various users102, as well as other cloud services.
In this manner, the cloud-based system100is located between the users102and the Internet as well as any cloud services106(or applications) accessed by the users102. As such, the cloud-based system100provides inline monitoring inspecting traffic between the users102, the Internet104, and the cloud services106, including Secure Sockets Layer (SSL) traffic. The cloud-based system100can offer access control, threat prevention, data protection, etc. The access control can include a cloud-based firewall, cloud-based intrusion detection, Uniform Resource Locator (URL) filtering, bandwidth control, Domain Name System (DNS) filtering, etc. The threat prevention can include cloud-based intrusion prevention, protection against advanced threats (malware, spam, Cross-Site Scripting (XSS), phishing, etc.), cloud-based sandbox, antivirus, DNS security, etc. The data protection can include Data Loss Prevention (DLP), cloud application security such as via Cloud Access Security Broker (CASB), file type control, etc. The cloud-based firewall can provide Deep Packet Inspection (DPI) and access controls across various ports and protocols as well as being application and user aware. The URL filtering can block, allow, or limit website access based on policy for a user, group of users, or entire organization, including specific destinations or categories of URLs (e.g., gambling, social media, etc.). The bandwidth control can enforce bandwidth policies and prioritize critical applications such as relative to recreational traffic. DNS filtering can control and block DNS requests against known and malicious destinations. The cloud-based intrusion prevention and advanced threat protection can deliver full threat protection against malicious content such as browser exploits, scripts, identified botnets and malware callbacks, etc. The cloud-based sandbox can block zero-day exploits (just identified) by analyzing unknown files for malicious behavior. 
Advantageously, the cloud-based system100is multi-tenant and can service a large volume of the users102. As such, newly discovered threats can be promulgated throughout the cloud-based system100for all tenants practically instantaneously. The antivirus protection can include antivirus, antispyware, antimalware, etc. protection for the users102, using signatures sourced and constantly updated. The DNS security can identify and route command-and-control connections to threat detection engines for full content inspection. The DLP can use standard and/or custom dictionaries to continuously monitor the users102, including compressed and/or SSL-encrypted traffic. Again, being in a cloud implementation, the cloud-based system100can scale this monitoring with near-zero latency on the users102. The cloud application security can include CASB functionality to discover and control user access to known and unknown cloud services106. The file type controls enable true file type control by the user, location, destination, etc. to determine which files are allowed or not. For illustration purposes, the users102of the cloud-based system100can include a mobile device110, a headquarters (H.Q.)112which can include or connect to a data center (DC)114, Internet of Things (IoT) devices116, a branch office/remote location118, etc., and each includes one or more user devices (an example user device250is illustrated inFIG.2B). The devices110,116, and the locations112,114,118are shown for illustrative purposes, and those skilled in the art will recognize there are various access scenarios and other users102for the cloud-based system100, all of which are contemplated herein. The users102can be associated with a tenant, which may include an enterprise, a corporation, an organization, etc. That is, a tenant is a group of users who share a common access with specific privileges to the cloud-based system100, a cloud service, etc. 
In an embodiment, the headquarters112can include an enterprise's network with resources in the data center114. The mobile device110can belong to a so-called road warrior, i.e., a user that is off-site, on-the-road, etc. Further, the cloud-based system100can be multi-tenant, with each tenant having its own users102and configuration, policy, rules, etc. One advantage of the multi-tenancy and a large volume of users is the zero-day/zero-hour protection in that a new vulnerability can be detected and then instantly remediated across the entire cloud-based system100. The same applies to policy, rule, configuration, etc. changes—they are instantly remediated across the entire cloud-based system100. As well, new features in the cloud-based system100can also be rolled out simultaneously across the user base, as opposed to selective and time-consuming upgrades on every device at the locations112,114,118, and the devices110,116. Logically, the cloud-based system100can be viewed as an overlay network between users (at the locations112,114,118, and the devices110,116) and the Internet104and the cloud services106. Previously, the I.T. deployment model included enterprise resources and applications stored within the data center114(i.e., physical devices) behind a firewall (perimeter), accessible by employees, partners, contractors, etc. on-site or remote via Virtual Private Networks (VPNs), etc. The cloud-based system100is replacing the conventional deployment model. The cloud-based system100can be used to implement these services in the cloud without requiring the physical devices and management thereof by enterprise I.T. administrators. As an ever-present overlay network, the cloud-based system100can provide the same functions as the physical devices and/or appliances regardless of geography or location of the users102, as well as independent of platform, operating system, network access technique, network access provider, etc.
There are various techniques to forward traffic between the users102at the locations112,114,118, and via the devices110,116, and the cloud-based system100. Typically, the locations112,114,118can use tunneling where all traffic is forwarded through the cloud-based system100. For example, various tunneling protocols are contemplated, such as Generic Routing Encapsulation (GRE), Layer Two Tunneling Protocol (L2TP), Internet Protocol (I.P.) Security (IPsec), customized tunneling protocols, etc. The devices110,116can use a local application that forwards traffic, a proxy such as via a Proxy Auto-Config (PAC) file, and the like. A key aspect of the cloud-based system100is that all traffic between the users102and the Internet104or the cloud services106is via the cloud-based system100. As such, the cloud-based system100has visibility to enable various functions, all of which are performed off the user device in the cloud. The cloud-based system100can also include a management system120for tenant access to provide global policy and configuration as well as real-time analytics. This enables I.T. administrators to have a unified view of user activity, threat intelligence, application usage, etc. For example, I.T. administrators can drill down to a per-user level to understand events and correlate threats, to identify compromised devices, to have application visibility, and the like. The cloud-based system100can further include connectivity to an Identity Provider (IDP)122for authentication of the users102and to a Security Information and Event Management (SIEM) system124for event logging. The system124can provide alert and activity logs on a per-user102basis. FIG.1Bis a network diagram of an example implementation of the cloud-based system100. In an embodiment, the cloud-based system100includes a plurality of enforcement nodes (EN)150, labeled as enforcement nodes150-1,150-2,150-N, interconnected to one another and interconnected to a central authority (CA)152.
The nodes150,152, while described as nodes, can include one or more servers, including physical servers, virtual machines (V.M.) executed on physical hardware, etc. That is, a single node150,152can be a cluster of devices. An example of a server is illustrated inFIG.2A. The cloud-based system100further includes a log router154that connects to a storage cluster156for supporting log maintenance from the enforcement nodes150. The central authority152provides centralized policy, real-time threat updates, etc. and coordinates the distribution of this data between the enforcement nodes150. The enforcement nodes150provide an onramp to the users102and are configured to execute policy, based on the central authority152, for each user102. The enforcement nodes150can be geographically distributed, and the policy for each user102follows that user102as he or she connects to the nearest (or other criteria) enforcement node150. The enforcement nodes150are full-featured secure internet gateways that provide integrated internet security. They inspect all web traffic bi-directionally for malware and enforce security, compliance, and firewall policies, as described herein. In an embodiment, each enforcement node150has two main modules for inspecting traffic and applying policies: a web module and a firewall module. The enforcement nodes150are deployed around the world and can handle hundreds of thousands of concurrent users with millions of concurrent sessions. Because of this, regardless of where the users102are, they can access the Internet104from any device, and the enforcement nodes150protect the traffic and apply corporate policies. The enforcement nodes150can implement various inspection engines therein, and optionally, send sandboxing to another system. The enforcement nodes150include significant fault tolerance capabilities, such as deployment in active-active mode to ensure availability and redundancy as well as continuous monitoring.
In an embodiment, customer traffic is not passed to any other component within the cloud-based system100, and the enforcement nodes150can be configured never to store any data to disk. Packet data is held in memory for inspection and then, based on policy, is either forwarded or dropped. Log data generated for every transaction is compressed, tokenized, and exported over secure TLS connections to the log routers154that direct the logs to the storage cluster156, hosted in the appropriate geographical region, for each organization. The central authority152hosts all customer (tenant) policy and configuration settings. It monitors the cloud and provides a central location for software and database updates and threat intelligence. Given the multi-tenant architecture, the central authority152is redundant and backed up in multiple different data centers. The enforcement nodes150establish persistent connections to the central authority152to download all policy configurations. When a new user connects to an enforcement node150, a policy request is sent to the central authority152through this connection. The central authority152then calculates the policies that apply to that user102and sends the policy to the enforcement node150as a highly compressed bitmap. Once downloaded, a tenant's policy is cached until a policy change is made in the management system120. When this happens, all of the cached policies are purged, and the enforcement nodes150request the new policy when the user102next makes a request. In an embodiment, the enforcement nodes150exchange "heartbeats" periodically, so all enforcement nodes150are informed when there is a policy change. Any enforcement node150can then pull the change in policy when it sees a new request. The cloud-based system100can be a private cloud, a public cloud, a combination of a private cloud and a public cloud (hybrid cloud), or the like.
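The cache-until-purged behavior described above can be sketched as follows. This is an illustrative model only; the class name, the callable interface, and the in-memory dictionary are assumptions and do not reflect the production enforcement node implementation (which, per the description, receives policies as highly compressed bitmaps over persistent connections):

```python
class PolicyCache:
    """Illustrative sketch of enforcement-node policy caching: fetch a
    tenant's policy from the central authority on first request, serve it
    from cache afterwards, and purge everything when a heartbeat signals
    a policy change so the next request refetches."""

    def __init__(self, fetch_from_central_authority):
        self._fetch = fetch_from_central_authority  # callable: tenant -> policy
        self._cache = {}

    def policy_for(self, tenant):
        if tenant not in self._cache:               # cache miss: ask the CA
            self._cache[tenant] = self._fetch(tenant)
        return self._cache[tenant]

    def on_heartbeat(self, policy_changed):
        if policy_changed:                          # purge all cached policies
            self._cache.clear()

# Usage sketch: count round-trips to a mock central authority.
calls = []
cache = PolicyCache(lambda tenant: calls.append(tenant) or {"tenant": tenant})
cache.policy_for("acme")
cache.policy_for("acme")
print(len(calls))          # 1 -- second lookup was served from cache
cache.on_heartbeat(policy_changed=True)
cache.policy_for("acme")
print(len(calls))          # 2 -- purge forced a refetch on the next request
```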
Cloud computing systems and methods abstract away physical servers, storage, networking, etc., and instead offer these as on-demand and elastic resources. The National Institute of Standards and Technology (NIST) provides a concise and specific definition which states cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing differs from the classic client-server model by providing applications from a server that are executed and managed by a client's web browser or the like, with no installed client version of an application required. Centralization gives cloud service providers complete control over the versions of the browser-based and other applications provided to clients, which removes the need for version upgrades or license management on individual client computing devices. The phrase “Software as a Service” (SaaS) is sometimes used to describe application programs offered through cloud computing. A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is “the cloud.” The cloud-based system100is illustrated herein as an example embodiment of a cloud-based system, and other implementations are also contemplated. As described herein, the terms cloud services and cloud applications may be used interchangeably. The cloud service106is any service made available to users on-demand via the Internet, as opposed to being provided from a company's on-premises servers. A cloud application, or cloud app, is a software program where cloud-based and local components work together. 
The cloud-based system100can be utilized to provide example cloud services, including Zscaler Internet Access (ZIA), Zscaler Private Access (ZPA), and Zscaler Digital Experience (ZDX), all from Zscaler, Inc. (the assignee and applicant of the present application). The ZIA service can provide the access control, threat prevention, and data protection described above with reference to the cloud-based system100. ZPA can include access control, microservice segmentation, etc. The ZDX service can provide monitoring of user experience, e.g., Quality of Experience (QoE), Quality of Service (QoS), etc., in a manner that can gain insights based on continuous, inline monitoring. For example, the ZIA service can provide a user with Internet Access, and the ZPA service can provide a user with access to enterprise resources instead of traditional Virtual Private Networks (VPNs), namely ZPA provides Zero Trust Network Access (ZTNA). Those of ordinary skill in the art will recognize various other types of cloud services106are also contemplated. Also, other types of cloud architectures are also contemplated, with the cloud-based system100presented for illustration purposes. Example Server Architecture FIG.2Ais a block diagram of a server200, which may be used in the cloud-based system100, in other systems, or standalone. For example, the enforcement nodes150and the central authority152may be formed as one or more of the servers200. The server200may be a digital computer that, in terms of hardware architecture, generally includes a processor202, input/output (I/O) interfaces204, a network interface206, a data store208, and memory210. It should be appreciated by those of ordinary skill in the art thatFIG.2Adepicts the server200in an oversimplified manner, and a practical embodiment may include additional components and suitably configured processing logic to support known or conventional operating features that are not described in detail herein. 
The components (202,204,206,208, and210) are communicatively coupled via a local interface212. The local interface212may be, for example, but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface212may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, among many others, to enable communications. Further, the local interface212may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor202is a hardware device for executing software instructions. The processor202may be any custom made or commercially available processor, a Central Processing Unit (CPU), an auxiliary processor among several processors associated with the server200, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the server200is in operation, the processor202is configured to execute software stored within the memory210, to communicate data to and from the memory210, and to generally control operations of the server200pursuant to the software instructions. The I/O interfaces204may be used to receive user input from and/or for providing system output to one or more devices or components. The network interface206may be used to enable the server200to communicate on a network, such as the Internet104. The network interface206may include, for example, an Ethernet card or adapter or a Wireless Local Area Network (WLAN) card or adapter. The network interface206may include address, control, and/or data connections to enable appropriate communications on the network. A data store208may be used to store data. 
The data store208may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store208may incorporate electronic, magnetic, optical, and/or other types of storage media. In one example, the data store208may be located internal to the server200, such as, for example, an internal hard drive connected to the local interface212in the server200. Additionally, in another embodiment, the data store208may be located external to the server200such as, for example, an external hard drive connected to the I/O interfaces204(e.g., SCSI or USB connection). In a further embodiment, the data store208may be connected to the server200through a network, such as, for example, a network-attached file server. The memory210may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.), and combinations thereof. Moreover, the memory210may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory210may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor202. The software in memory210may include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. The software in the memory210includes a suitable Operating System (O/S)214and one or more programs216. The operating system214essentially controls the execution of other computer programs, such as the one or more programs216, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. 
The one or more programs216may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein. Example User Device Architecture FIG.2Bis a block diagram of a user device250, which may be used with the cloud-based system100or the like. Specifically, the user device250can form a device used by one of the users102, and this may include common devices such as laptops, smartphones, tablets, netbooks, personal digital assistants, MP3 players, cell phones, e-book readers, IoT devices, servers, desktops, printers, televisions, streaming media devices, and the like. The user device250can be a digital device that, in terms of hardware architecture, generally includes a processor252, I/O interfaces254, a network interface256, a data store258, and memory260. It should be appreciated by those of ordinary skill in the art thatFIG.2Bdepicts the user device250in an oversimplified manner, and a practical embodiment may include additional components and suitably configured processing logic to support known or conventional operating features that are not described in detail herein. The components (252,254,256,258, and260) are communicatively coupled via a local interface262. The local interface262can be, for example, but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface262can have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, among many others, to enable communications. Further, the local interface262may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor252is a hardware device for executing software instructions.
The processor252can be any custom made or commercially available processor, a CPU, an auxiliary processor among several processors associated with the user device250, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the user device250is in operation, the processor252is configured to execute software stored within the memory260, to communicate data to and from the memory260, and to generally control operations of the user device250pursuant to the software instructions. In an embodiment, the processor252may include a mobile optimized processor such as one optimized for power consumption and mobile applications. The I/O interfaces254can be used to receive user input from and/or for providing system output. User input can be provided via, for example, a keypad, a touch screen, a scroll ball, a scroll bar, buttons, a barcode scanner, and the like. System output can be provided via a display device such as a Liquid Crystal Display (LCD), touch screen, and the like. The network interface256enables wireless communication to an external access device or network. Any number of suitable wireless data communication protocols, techniques, or methodologies can be supported by the network interface256, including any protocols for wireless communication. The data store258may be used to store data. The data store258may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store258may incorporate electronic, magnetic, optical, and/or other types of storage media. The memory260may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, etc.), and combinations thereof.
Moreover, the memory260may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory260may have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor252. The software in memory260can include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. In the example ofFIG.2B, the software in the memory260includes a suitable operating system264and programs266. The operating system264essentially controls the execution of other computer programs and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The programs266may include various applications, add-ons, etc. configured to provide end user functionality with the user device250. For example, example programs266may include, but are not limited to, a web browser, social networking applications, streaming media applications, games, mapping and location applications, electronic mail applications, financial applications, and the like. In a typical example, the end user uses one or more of the programs266along with a network such as the cloud-based system100. Machine Learning in Network Security Machine learning can be used in various applications, including malware detection, intrusion detection, threat classification, user or content risk, detecting malicious clients or bots, etc. In a particular use case, machine learning can be used on a content item, e.g., a file, to determine if further processing is required during inline processing in the cloud-based system100. For example, machine learning can be used in conjunction with a sandbox to identify malicious files. A sandbox, as the name implies, is a safe environment where a file can be executed, opened, etc. for test purposes to determine whether the file is malicious or benign.
It can take a sandbox around 10 minutes before it is fully determined whether the file is malicious or benign. Machine learning can determine a verdict in advance before a file is sent to the sandbox. If a file is predicted as benign, it does not need to be sent to the sandbox. Otherwise, it is sent to the sandbox for further analysis/processing. Advantageously, utilizing machine learning to pre-filter a file significantly improves user experience by reducing the overall quarantine time as well as reducing workload in the sandbox. Of course, machine learning cannot replace the sandbox since malicious information from a static file is limited, while the sandbox can get a more accurate picture with dynamic behavior analysis. Further, it follows that the machine learning predictions require high precision due to the impact of a false prediction, i.e., finding a malicious file to be benign. In the context of inline processing, sandboxing does a great job in detecting malicious files, but there is a cost in latency, which affects user experience. Machine learning can alleviate this issue by giving an earlier verdict on the static files. However, it requires ML to have extremely high precision, since the costs of a false positive and a false negative are very high. For example, a benign hospital file, if mistakenly blocked due to an ML model's wrong verdict, could cause a life-threatening disaster. Similarly, undetected ransomware could cause problems for an enterprise. Therefore, there is a need for a high-precision approach for both benign and malicious files. The conventional approach to improving precision is to raise the probability threshold. A p-value (probability value) is a statistical assessment for measuring the reliability of a prediction, but this does not identify the unreliability of predictions with high probabilities.
A description utilizing machine learning in the context of malware detection is described in commonly-assigned U.S. patent application Ser. No. 15/946,546, filed Apr. 5, 2018, and entitled "System and method for malware detection on a per packet basis," the content of which is incorporated by reference herein. As described therein, the typical machine learning training process collects millions of malware samples, extracts a set of features from these samples, and feeds the features into a machine learning model to determine patterns in the data. The output of this training process is a machine learning model that can predict whether a file that has not been seen before is malicious or not. Decision Tree In an embodiment, a generated machine learning model is a decision tree. A trained model may include a plurality of decision trees. Each of the plurality of decision trees may include one or more nodes, one or more branches, and one or more termini. Each node in the trained decision tree represents a feature and a decision boundary for that feature. Each of the one or more termini is, in turn, associated with an output probability. Generally, each of the one or more nodes leads to another node via a branch until a terminus is reached, and an output score is assigned. FIG.3is a diagram of a trained machine learning model300. The machine learning model300includes one or more features310and multiple trees320a,320n. A feature is an individual measurable property or characteristic of a phenomenon being observed. The trees320a,320ncan be decision trees associated with a random forest or a gradient boosting decision trees machine learning model. In various embodiments, the trees320a,320nare constructed during training. While the machine learning model300is only depicted as having trees320a,320n, in other embodiments, the machine learning model300includes a plurality of additional trees.
The features310, in the context of malicious file detection, relate to various properties or characteristics of the file. The trees320a,320ninclude nodes330a,330band termini340a,340b,340c,340d. That is, the node330ais connected to termini340a,340band the node330bis connected to termini340c,340d, via one or more branches. In other embodiments, the trees320a,320ninclude one or more additional nodes, one or more additional branches, and one or more additional termini. The nodes330each represent a feature and a decision boundary for that feature. The termini340can each be associated with a probability of maliciousness, in the example of malicious file detection. Generally, each of the one or more nodes leads to another node via a branch until a terminus is reached, and a probability of maliciousness is assigned. The output of the trained machine learning model300is a weighted average of a probability of maliciousness predicted by each of the trees320aand the tree320n. Ensemble Models Multiple different machine learning models can be used as an ensemble model that obtains better predictive performance than could be obtained from any of the constituent machine learning models alone. The individual models in an ensemble model could be tree-based (e.g., the decision trees used by gradient boosting decision trees and random forest) or neural networks or any other machine learning model where the prediction follows a decision path or activation path. For illustration purposes, the foregoing examples relate to decision trees. The machine learning model300is an example of a decision tree. A decision tree is a tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements, i.e., if . . . then . . . else.
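For illustration, the tree-walking and weighted-averaging behavior described above can be sketched as follows. The tree encoding, feature names, and weights are hypothetical toy values, not the machine learning model300itself.

```python
# Each internal node tests one feature against a decision boundary;
# each terminus carries a probability of maliciousness.
def predict_tree(node, features):
    """Walk one decision tree until a terminus (a float) is reached."""
    while not isinstance(node, float):
        feature, boundary, left, right = node
        node = left if features[feature] <= boundary else right
    return node

def predict_ensemble(trees, weights, features):
    """Weighted average of the per-tree probabilities of maliciousness."""
    total = sum(weights)
    return sum(w * predict_tree(t, features) for t, w in zip(trees, weights)) / total

# Two toy trees over hypothetical features "entropy" and "size":
# internal nodes are (feature, boundary, left_subtree, right_subtree);
# floats are termini (probabilities of maliciousness).
tree_a = ("entropy", 0.5, 0.1, ("size", 100.0, 0.6, 0.9))
tree_n = ("size", 50.0, 0.2, 0.8)

sample = {"entropy": 0.7, "size": 120.0}
p_malicious = predict_ensemble([tree_a, tree_n], [1.0, 1.0], sample)  # 0.85
```

With equal weights, the sample reaches the 0.9 terminus in the first tree and the 0.8 terminus in the second, so the ensemble output is their average.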
Random forests or random decision forests are an ensemble model for classification, regression, and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set. Of note, each of the decision trees is independent of the others in the case of Random Forest. Gradient Boosting Decision Trees are dependent on one another. Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. Blind Spots Instance-based machine learning approaches make predictions based on an example's neighbors, that is, the examples similar to it. On the other hand, if no similar examples surround the example under prediction, there is insufficient support for the prediction; thus, the prediction is untrustworthy. An instance-based approach needs a similarity threshold to decide whether there are similar examples. However, similarity is relative, not absolute, and is also feature dependent. Again, blind spots in a machine learning model are regions in a feature space defined by ensemble trees where there is insufficient or conflicting evidence from previously seen data (e.g., training data). Blind spots are the target of adversarial attacks, where the models are fooled with malicious input. Machine learning models are unable to make accurate predictions at blind spots. For an example of a blind spot: is broccoli more similar to cauliflower or kale? From the shape perspective, broccoli is clearly closer to cauliflower, while if green color is the dominant feature, then broccoli becomes closer to kale.
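The aggregation rules just described, the mode of the class votes for classification and the mean prediction for regression, reduce to a few lines. The snippet below is a sketch over precomputed per-tree outputs, not a full forest implementation:

```python
from statistics import mean, mode

def forest_classify(per_tree_classes):
    # Random forest classification: majority vote (mode) across the trees.
    return mode(per_tree_classes)

def forest_regress(per_tree_values):
    # Random forest regression: mean of the individual tree predictions.
    return mean(per_tree_values)

# Hypothetical outputs from three independent trees.
label = forest_classify(["malicious", "benign", "malicious"])  # "malicious"
value = forest_regress([0.2, 0.4, 0.9])                        # 0.5
```

The same aggregation step is what distinguishes Random Forest (independent trees, combined by vote or mean) from gradient boosting, where each tree is fit to the residual errors of the trees before it.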
Thus, this model requires additional features, as shape and color alone are not sufficient for distinguishing examples. Prudent Ensemble Models The present disclosure includes measuring the reliability of a prediction to provide confidence in that prediction. These reliability measures can also be double-checked and tracked to improve the measurement of reliability further. For example, in the malicious content item detection use case, the unreliable predictions could be double-checked by a sandbox. Filtering out unreliable predictions increases precision: when a prediction is made, it has very high precision, while the uncertain cases can be analyzed further to identify malware concept drift or discrepancies in the data distribution. Prudent Ensemble Model Process FIG.4is a flowchart of a prudent ensemble model process520. The process520includes training an ensemble model (step522). This step proceeds as is known in the machine learning art. As described herein, the ensemble model could be tree-based (e.g., the decision trees used by gradient boosting decision trees and random forest) or neural networks or any other machine learning model where the prediction follows a decision path or activation path. The process520includes determining blind spots in the trained ensemble model (step524). Again, a blind spot is a location where the trained ensemble model has not seen any examples with the combination of the features at the location or has examples with conflicting labels. The determined blind spots are marked or otherwise noted (step526). The trained ensemble model is utilized in production to make predictions, but any predictions that are in marked blind spots are filtered out (ignored) as being unreliable (step528).
Again, by filtering out unreliable predictions, that is, the predictions that fall into blind spots, the process520counters adversarial attacks, including not just attacks on the decision boundary, but also those far away from the decision boundary. For example, a malicious file can be configured to fool the model by having characteristics similar to a benign content item, but still being malicious. The process520advantageously protects against such attacks, as the malicious file that tries to fool the model will end up in a blind spot because such a file would not have existing examples. Accordingly, this file would be rejected due to the blind spot. The process520further achieves skyscraper-high precision, and the process520increases the visibility of the trained ensemble model by explicitly exposing the vulnerable part of the model. The vulnerable part of the model can be improved through further training. The process520leverages the idea from instance-based approaches (e.g., k-nearest neighbor) and integrates it into ensemble models to enhance their predictions. The trained ensemble model uses learned models to define what are similar examples. Ensemble models non-linearly segment the feature space into small regions. Each region is the result of superimposing the decision paths from all sub-models. Examples within the same region are deemed similar. If the prediction paths for an example fall into a region where no examples have been seen previously, or only examples with conflicting labels, that means it is a region without sufficient support from examples, thus named a blind spot. The blind spots defined in this way can be anywhere in the feature space and do not have to be near the decision boundary. By filtering out predictions that fall into blind spots, the process520can counter adversarial attacks in various regions in feature space (not just those close to the decision boundary). This is complementary to existing solutions for adversarial attacks.
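One way to realize this region construction, presented here only as an illustrative sketch with toy trees and hypothetical names, is to key each region by the tuple of leaves an example reaches across all sub-models, then mark as blind spots any regions that were never reached in training or were reached only with conflicting labels:

```python
def leaf_id(node, features):
    # Walk one toy tree: internal nodes are (feature, boundary, left, right)
    # tuples; leaves are string identifiers.
    while not isinstance(node, str):
        feature, boundary, left, right = node
        node = left if features[feature] <= boundary else right
    return node

def region(trees, features):
    # Superimpose the decision paths of all sub-models: the region an
    # example falls into is the tuple of leaves it reaches.
    return tuple(leaf_id(t, features) for t in trees)

def map_regions(trees, training_set):
    """Map each region seen in training to the set of labels observed there."""
    seen = {}
    for features, label in training_set:
        seen.setdefault(region(trees, features), set()).add(label)
    return seen

def is_blind_spot(seen, trees, features):
    # A blind spot: a region never seen in training, or one with conflicting labels.
    labels = seen.get(region(trees, features))
    return labels is None or len(labels) > 1

# Two toy trees over hypothetical features "entropy" and "size".
trees = [("entropy", 0.5, "a_lo", "a_hi"), ("size", 100.0, "b_lo", "b_hi")]
training = [
    ({"entropy": 0.2, "size": 10.0}, "benign"),
    ({"entropy": 0.9, "size": 500.0}, "malicious"),
    ({"entropy": 0.8, "size": 400.0}, "malicious"),
]
seen = map_regions(trees, training)

# A region covered consistently in training: the prediction is supported.
reliable = not is_blind_spot(seen, trees, {"entropy": 0.1, "size": 50.0})
# A region never seen in training (high entropy, small size): a blind spot.
unreliable = is_blind_spot(seen, trees, {"entropy": 0.9, "size": 5.0})
```

Note that the unsupported region here is not adjacent to a decision boundary of either tree, matching the observation that blind spots can be anywhere in the feature space.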
Because the adversarial examples generated using the existing data are limited, there are still blind spots remaining after the hardening of a model trained with adversarial examples, especially those not close to the decision boundary. Content Processing Process by an Inline Security System FIG.5is a flowchart of a content processing process540, implemented by the cloud-based security system100. The process540can include obtaining a trained machine learning ensemble model to identify malicious content items (step542). The trained machine learning ensemble model can be from the process520. The process540includes receiving a content item between a user device and a location on the Internet or an enterprise network (step544), utilizing the trained machine learning ensemble model to determine whether the content item is malicious (step546), responsive to the trained machine learning ensemble model determining the content item is malicious or determining the content item is benign but such determining is in a blind spot of the trained ensemble model, performing further processing on the content item (step548), and, responsive to the trained machine learning ensemble model determining the content item is benign with such determination not in a blind spot of the trained machine learning ensemble model, allowing the content item (step550). As mentioned, the blind spot is a location where the trained machine learning ensemble model has not seen any examples with a combination of features at the location or has examples with conflicting labels. The process540can further include training the trained machine learning ensemble model to identify malicious content items and identifying and marking blind spots in the trained machine learning ensemble model. The process540can further include, subsequent to the further processing, one of allowing the content item and blocking the content item based on the further processing.
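The routing in process540can be sketched as a simple dispatch on the two determinations, the maliciousness verdict and the blind-spot check. This is an illustrative sketch only; the two inputs are assumed to come from a trained ensemble model as described above.

```python
def dispatch(is_malicious, in_blind_spot):
    """Process-540-style routing: allow only a benign verdict that does not
    fall in a blind spot; everything else goes to further processing
    (e.g., a sandbox)."""
    if is_malicious or in_blind_spot:
        return "further-processing"
    return "allow"

decisions = [
    dispatch(is_malicious=True, in_blind_spot=False),   # malicious verdict
    dispatch(is_malicious=False, in_blind_spot=True),   # benign but unreliable
    dispatch(is_malicious=False, in_blind_spot=False),  # benign and reliable
]
```

Only the last case is allowed immediately; the first two are escalated, and the final allow/block decision then follows from the further processing.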
Further processing can include performing a dynamic analysis of the content item in a sandbox. For example, this can include the analysis described in U.S. Pat. No. 9,152,789, issued on Oct. 6, 2015, and entitled "Systems and methods for dynamic cloud-based malware behavior analysis," the contents of which are incorporated by reference herein. In an embodiment, the further processing includes blocking the content item in the cloud-based security system based on a classification by the trained machine learning ensemble model. Here, the trained machine learning ensemble model can be viewed as the final decision without requiring a sandbox or the like. In an embodiment, the content item is malicious and configured to fool the trained machine learning ensemble model via an adversarial attack where the content item is configured to mimic benign features, and wherein the content item lands on a blind spot in the trained machine learning ensemble model, thereby preventing the adversarial attack. The content item can be one of an executable file, a Portable Document File (PDF) file, a Microsoft Office file, and a JavaScript file. The cloud-based security system can be located inline between the user device and the location. Smart Quarantine Approach The present disclosure includes a smart quarantine approach where machine learning is utilized as a front end to a scanning system to decide whether or not to scan a particular file. Again, the goal in such an approach is to minimize waiting time, risk, and cost. Of note, the smart quarantine approach is described herein with reference to the cloud-based system100, offering a cloud security service. Those skilled in the art will recognize the smart quarantine approach contemplates use in other architectures, including in a stand-alone software program executed on the user device250, in a security appliance, in a router, in a Secure Web Gateway (SWG), in a Web proxy, etc.
Conventional Quarantine Process FIG.6is a flow diagram of a conventional quarantine process600for quarantining, scanning, blocking, and allowing a file602. The file602can be a document (e.g., a Microsoft Office document or the like), a Portable Document Format (PDF) file, or an executable file (e.g., a Portable Executable (PE) file in 32- or 64-bit format). The file602is obtained, and then policy604determines how the file602is processed, namely either quarantined (step604-1), allowed and scanned (step604-2), or allowed and not scanned (step604-3). In the cloud-based system100, the actions of quarantine and scanning may be separate. For example, a file may be blocked to the end user102in the cloud-based system100if it is held, i.e., quarantined (step606). The file may be allowed to the end user102and simultaneously scanned by a sandbox (steps604-2,608). The result of the sandbox608is a score, and it can be used to determine whether the file602is malicious or benign (step610). Again, the sandbox608is configured to run the file602in a controlled environment (i.e., a "sandbox") and perform observation and analysis to determine behavior. For example, there can be a scoring threshold, X, and a score above it means the file602is determined to be malicious (step612), and a score below means the file602is determined to be benign (step614). The step604-3immediately allows the file602to the end user102. The step604-1holds the file602(step606), and the step604-2immediately allows the file602to the end user102, but still performs scanning in the sandbox608. For example, if the file602is malicious (step612), but allowed at the step604-2, the file602can be blocked the next time. If the file602is held (step606) and the file602is malicious (step612), the file602can be blocked, such as in the cloud-based system100. If the file602is held (step606) and found to be benign (step614), the file602can be allowed to the end user102.
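For illustration, the conventional flow above reduces to a policy dispatch followed, where scanning applies, by a threshold comparison on the sandbox score. The threshold value and action names below are hypothetical placeholders:

```python
SCORE_THRESHOLD_X = 70  # placeholder for threshold X; a score above X means malicious

def sandbox_verdict(score):
    return "malicious" if score > SCORE_THRESHOLD_X else "benign"

def conventional_quarantine(policy_action, sandbox_score=None):
    """policy_action: 'quarantine', 'allow_and_scan', or 'allow'.
    Returns (action taken for the held/released file, sandbox verdict)."""
    if policy_action == "allow":
        return ("allow", None)                   # allowed immediately, no scan
    verdict = sandbox_verdict(sandbox_score)     # the sandbox scores the file
    if policy_action == "quarantine":
        # Held file: blocked if malicious, released if benign.
        return ("block" if verdict == "malicious" else "allow", verdict)
    # allow_and_scan: the file was already released; a malicious verdict
    # means it can be blocked the next time it is seen.
    return ("allow", verdict)

held_malicious = conventional_quarantine("quarantine", sandbox_score=95)
held_benign = conventional_quarantine("quarantine", sandbox_score=10)
scanned = conventional_quarantine("allow_and_scan", sandbox_score=95)
```

The allow-and-scan branch makes the baseline risk explicit: the malicious verdict arrives only after the file has already reached the end user.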
Machine Learning Smart Quarantining Process FIG.7is a flow diagram of a smart quarantine process700A for quarantining, scanning, blocking, and allowing a file602, where machine learning702is used to front end whether or not to hold the file602.FIG.8is a flow diagram of a smart quarantine process700B for quarantining, scanning, blocking, and allowing a file602, where machine learning702is used to front end whether or not to hold or scan the file602.FIG.9is a flow diagram of a smart quarantine process700C for quarantining, scanning, blocking, and allowing a file602, where machine learning702is used to front end whether or not to hold, scan, or allow the file602. InFIG.7, the smart quarantine process700A includes obtaining the file602and then policy704determines how the file602is processed, either quarantined (step704-1), processed by the machine learning702to determine whether to quarantine (step704-2) or to allow and scan (step704-3), or allowed without a scan (step704-4). That is, in the smart quarantine process700A, the machine learning702is used to front end the allow and scan step. Thus, in the smart quarantine process700A, the allow and scan is now augmented to become quarantine if malicious from the machine learning702(step704-2) or allow and scan if not malicious from the machine learning702(step704-3). Similar to the quarantine process600, the smart quarantine process700A includes holding the file602after the steps704-1,704-2(step706), and scanning the file602with a sandbox708after the steps704-1,704-2,704-3. The sandbox708scores the file602(step710), and the smart quarantine process700A determines if the file602is malicious (step712) or benign (step714) based thereon. If the file was held (step706), the smart quarantine process700A can block the file602if malicious. Further, the smart quarantine process700A also includes the step704-4of allowing without a scan based on the policy704.
InFIG.8, the smart quarantine process700B includes combining the machine learning702with the policy704. Here, the machine learning702front ends both the decision to quarantine and to allow and scan, not just the decision to allow and scan. Here, the smart quarantine process700B can include three outputs of the combined machine learning702and policy704, namely quarantine if the machine learning702determines the file602is malicious (step720-1), allow and scan if the machine learning702determines the file602is not malicious (step720-2), and allow without a scan if the policy704dictates for the file602(step720-3). The remainder of the steps in the smart quarantine process700B are the same as in the smart quarantine process700A. InFIG.9, the smart quarantine process700C also includes combining the machine learning702with the policy704, but here the machine learning702output is used in all three decisions. The machine learning702front ends all the decisions, namely, quarantine if the machine learning702determines the file602is malicious (step730-1), allow and scan if the machine learning702determines the file602is not malicious (step730-2), and allow without a scan if the policy704dictates for the file602and if the machine learning702determines the file602is benign (step730-3). The remainder of the steps in the smart quarantine process700C are the same as in the smart quarantine processes700A,700B. The machine learning702can include any of the techniques described herein. The policy704can be determined by a tenant associated with the user102. For example, the policy704can be based on a type of the file602, e.g., quarantine all executables, allow and scan all documents and PDFs, etc. The policy704can also be based on other factors such as user location, the type of the user device250, network access technique, etc.
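The three variants differ only in how far the output of the machine learning702is pushed into the policy decision. This can be sketched as follows; the variant labels follow the figures, the action names are placeholders, and only the front-end decision (before any sandbox result) is shown:

```python
def smart_quarantine(variant, policy_action, ml_is_malicious):
    """Front-end routing for the 700A/700B/700C variants.
    policy_action: 'quarantine', 'allow_and_scan', or 'allow_no_scan'."""
    if variant == "700A":
        # ML augments only the allow-and-scan branch (steps 704-2/704-3).
        if policy_action == "allow_and_scan" and ml_is_malicious:
            return "quarantine"
        return policy_action
    if variant == "700B":
        # ML front-ends both quarantine and allow-and-scan (steps 720-1/720-2);
        # allow-without-scan still follows policy alone (step 720-3).
        if policy_action in ("quarantine", "allow_and_scan"):
            return "quarantine" if ml_is_malicious else "allow_and_scan"
        return policy_action
    if variant == "700C":
        # ML front-ends all three decisions (steps 730-1/730-2/730-3);
        # allow-without-scan additionally requires a benign ML verdict.
        if policy_action == "allow_no_scan" and not ml_is_malicious:
            return "allow_no_scan"
        return "quarantine" if ml_is_malicious else "allow_and_scan"
    raise ValueError(variant)

a = smart_quarantine("700A", "allow_and_scan", ml_is_malicious=True)   # held
b = smart_quarantine("700B", "quarantine", ml_is_malicious=False)      # released to scan
c = smart_quarantine("700C", "allow_no_scan", ml_is_malicious=True)    # held anyway
```

The sketch makes the progression visible: each variant routes more of the policy table through the machine-learning verdict, which is why waiting time, cost, and risk improve from 700A to 700C.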
The smart quarantine processes700A,700B,700C utilize machine learning as a front end to decide whether or not to hold the file602(FIG.7), whether or not to hold or scan the file602(FIG.8), and whether or not to hold, scan, or allow the file602(FIG.9). The smart quarantine processes700A,700B,700C address the three metrics described herein: waiting time, cost, and risk. Specifically, the conventional quarantine process600has a baseline for each of these metrics. The smart quarantine processes700A,700B,700C improve all of these metrics relative to the conventional quarantine process600. The smart quarantine process700A reduces risk relative to the conventional quarantine process600by utilizing the machine learning702to augment and improve the allow and scan step. Allow and scan is required for some files as the users102simply do not want every file602held for the sandbox708. Thus, allow and scan poses some risk. The machine learning702can reduce this risk such that some of the files602that would be allowed and scanned are now held based on the determination of the machine learning702. The smart quarantine process700B reduces both the risk and the waiting time relative to the conventional quarantine process600by utilizing the machine learning702to augment and improve the allow and scan step and the quarantine step. Here, the smart quarantine process700B provides the same benefits as the smart quarantine process700A for the allow and scan step. Additionally, the smart quarantine process700B only holds the files602if the output of the machine learning702determines the file602is malicious, thereby reducing the number of files602that are held. Finally, the smart quarantine process700C reduces the waiting time, the cost, and the risk relative to the conventional quarantine process600by utilizing the machine learning702to augment and improve the allow and scan step, the quarantine step, and the allow without scan step.
Again, the smart quarantine process700C has the same benefits as the smart quarantine processes700A,700B. Additionally, the smart quarantine process700C further augments the allow without scan to apply only where the machine learning702determines the file602is benign. Also, the sandbox is minimized as it only handles files determined to be suspicious by the machine learning702. Thus, the smart quarantine process700C reduces processing costs, as the sandbox only has to address suspicious files602.
Experimental Results
The following table illustrates a set of data from actual monitoring in the cloud-based system100, using the conventional quarantine process600. Here, the files602include documents (MS document), PDF files, and PE 32/64 files. In the quarantine step604-1, a total of 217,961 files were held, but only 1900 were malicious. Also, for the allow and scan step604-2, a total of 2538 files were allowed, but eventually determined to be malicious after the scanning.

policy          File type     malicious    benign    subtotal
quarantined     MS document           9     59208       59217
                PDF                   0    104905      104905
                PE 32/64           1891     51948       53839
                All                1900    216061      217961
Allow and scan  MS document          83
                PDF                   0
                PE 32/64           2455
                All                2538

The following table illustrates the same set of data with the introduction of the machine learning702. As can be seen in the above table, many files602are held and scanned that may not have been necessary. In the below table, the machine learning702provides True Positives (TP) and False Positives (FP). Here, the machine learning702determines that only 3806 files should be held in quarantine, not 217,961.

policy          File type     ML TP    ML FP    subtotal
All             MS document      84      216         300
                PDF               0       22          22
                PE 32/64       2973      511        3484
                All            3057      749        3806
Allow and scan  MS document      60      182         242
                PDF               0       17          17
                PE 32/64       1366      253        1619
                All            1426      452        1878

Model Evolution
To describe the evolution of machine learning models, some example machine learning models include Random Forest, XGBoost, and LightGBM. Of course, other types of models are also contemplated herein.
The Random Forest model includes a software implementation from around 2017 with a large model size, e.g., on the order of 600 Mb. XGBoost is open source software providing a gradient boosting framework; a software implementation from around 2018 has a model size on the order of 100 Mb, but is limited to on the order of 15 million training samples. Finally, LightGBM also provides a gradient boosting framework having a similar detection and False Positive (FP) rate as XGBoost, but with about a third of the training time, and a software implementation from 2019 has about half the XGBoost model size. Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.
Model Serving
The present disclosure provides machine learning model abstraction for runtime. There are a couple of drivers for the machine learning model abstraction. First, machine learning models can use different algorithms, such as in the example above, Random Forest→XGBoost→LightGBM. Each algorithm can include its own runtime library. The machine learning model abstraction layer can provide a single runtime library for multiple different algorithms. Second, each of these algorithms, in their software implementations, includes training information that is not relevant to runtime. Examples of training information can include hyperparameters, which are used to control the training process. Third, the machine learning model abstraction layer can abstract the algorithm into a tree structure that is very fast for model serving. The machine learning model abstraction layer is algorithm and programming language agnostic and can be in a portable format for any computer architecture.
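The stage-wise boosting just described can be illustrated with a deliberately tiny sketch: each round fits a one-feature threshold stump to the residuals (the negative gradient of squared loss) and adds it to the ensemble with a learning rate. All names and data here are illustrative assumptions; production frameworks such as LightGBM grow full trees with histogram-based splitting and many other optimizations.

```python
# Minimal stage-wise gradient boosting for squared loss, using one-feature
# threshold stumps as the weak learners.

def fit_stump(x, residuals):
    """Best single split on x minimizing squared error of the residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi: lmean if xi <= t else rmean

def boost(x, y, rounds=50, lr=0.3):
    """Stage-wise ensemble: start from the mean, repeatedly fit residuals."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]  # negative gradient
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + lr * sum(s(xi) for s in stumps)

x = [1, 2, 3, 4, 5, 6]
y = [1, 1, 1, 9, 9, 9]
model = boost(x, y)
print(round(model(2)), round(model(5)))
```

After enough rounds the ensemble recovers the step function in the toy data, which is the stage-wise error-correction behavior the text describes.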
The machine learning model abstraction layer decouples the training process from the runtime process. In an embodiment, the present disclosure utilizes the LightGBM model. LightGBM is a gradient boosting framework that uses tree-based learning algorithms. The LightGBM model includes metadata that is extra overhead.FIG.10is a screenshot of an example of the metadata that is extra overhead. For example, the extra overhead includes feature names, feature info, tree sizes, etc. The LightGBM model also includes information that is not useful at runtime.FIG.11is a screenshot of an example of information that is not useful at runtime. For example, split_gain, leaf_count, internal_value, and internal_count are not useful at runtime, and the decision_type is duplicated. Finally, the LightGBM model includes parameters that are useful for understanding how the model was trained, but serve no purpose at runtime (model serving).FIG.12is a screenshot of parameters that are useful for understanding how the model was trained, but serve no purpose at runtime. In a further embodiment, the machine learning model abstraction layer can remove features from the machine learning model, namely features that are not used. This removes any “holes” in the model that waste space and eliminates the need to regenerate the feature set. In another embodiment, the machine learning model abstraction layer can include normalizing model probabilities into a machine learning score, such that scores are comparable across model changes. Of note, each model trained is unique and has its own unique thresholds. The machine learning model abstraction layer can pick the best thresholds for a given model and normalize probabilities. For example, a benign score can be below 40, a suspicious score can be between 40 and 70, and a malicious score can be above 70. Of course, other embodiments are also contemplated.
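The probability-to-score normalization described above might be implemented as a piecewise-linear map, sketched below. The per-model calibration cut points (benign_cut, malicious_cut) are assumptions standing in for the "best thresholds" the abstraction layer would pick for a given model; only the 40/70 score bands come from the example above.

```python
# Illustrative normalization of a model's raw probability into a 0-100
# machine learning score. benign_cut and malicious_cut are assumed per-model
# calibration points chosen at training time; 40 and 70 are the example bands.

def normalize_score(prob, benign_cut=0.2, malicious_cut=0.8):
    """Piecewise-linear map: prob <= benign_cut -> below 40,
    prob >= malicious_cut -> 70 and above."""
    if prob <= benign_cut:
        return 40 * prob / benign_cut
    if prob >= malicious_cut:
        return 70 + 30 * (prob - malicious_cut) / (1 - malicious_cut)
    return 40 + 30 * (prob - benign_cut) / (malicious_cut - benign_cut)

def verdict(score):
    """Apply the example bands: benign < 40, suspicious 40-70, malicious > 70."""
    if score < 40:
        return "benign"
    if score <= 70:
        return "suspicious"
    return "malicious"

print(verdict(normalize_score(0.05)))
print(verdict(normalize_score(0.5)))
print(verdict(normalize_score(0.95)))
```

Because the raw probabilities are remapped per model, two differently calibrated models can share the same benign/suspicious/malicious bands at runtime.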
Model Abstraction Process
FIG.13is a flowchart of a machine learning abstraction process800. The machine learning abstraction process800can be a computer-implemented method, embodied as instructions stored in a non-transitory computer-readable medium, and implemented via an apparatus such as the server200. The machine learning abstraction process800includes training a machine learning model with data for identifying features in monitored traffic in a network (step802); analyzing the trained machine learning model to identify information overhead therein, wherein the information overhead is utilized in part for the training (step804); removing the information overhead in the machine learning model (step806); and providing the machine learning model for runtime use for identifying the features in the monitored traffic, with the information overhead removed from the machine learning model (step808). The machine learning abstraction process800can further include identifying features that are not used in the trained machine learning model; and removing the identified features prior to the providing. The machine learning abstraction process800can further include determining thresholds for the identifying features in the trained machine learning model; and normalizing the thresholds to a scoring system. The information overhead can include hyperparameters. The information overhead can include metadata that is extra overhead in the trained machine learning model. The information overhead can include information from the training that is not useful at runtime in the trained machine learning model. The information overhead can include parameters that are used to understand the training. The machine learning model can include a gradient boosting framework that uses tree-based learning algorithms. The providing step808can be to a cloud-based system that utilizes the machine learning model for inline monitoring of the monitored traffic.
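Steps804-806, identifying and removing the training-only overhead, can be sketched as a recursive prune over a serialized tree model. The model structure below is hypothetical; only the field names being dropped (hyperparameters, split_gain, leaf_count, internal_value, internal_count) are drawn from the LightGBM examples above.

```python
# Training-only fields to strip before model serving; the names mirror the
# LightGBM examples above, plus an assumed "hyperparameters" block.
TRAINING_ONLY = {"hyperparameters", "split_gain", "leaf_count",
                 "internal_value", "internal_count"}

def strip_overhead(node):
    """Recursively drop training-only keys, keeping only what serving needs."""
    if isinstance(node, list):
        return [strip_overhead(v) for v in node]
    if isinstance(node, dict):
        return {k: strip_overhead(v) for k, v in node.items()
                if k not in TRAINING_ONLY}
    return node

# Hypothetical serialized model: one stump with training metadata attached.
trained = {
    "hyperparameters": {"learning_rate": 0.1, "num_leaves": 31},
    "trees": [{
        "split_feature": 3, "threshold": 0.5, "split_gain": 12.7,
        "left": {"leaf_value": -1.2, "leaf_count": 480},
        "right": {"leaf_value": 0.9, "leaf_count": 220},
    }],
}
runtime = strip_overhead(trained)
print(runtime)  # only split features, thresholds, and leaf values remain
```

The pruned structure retains exactly what a runtime tree walk needs, which is the portable, serving-only representation the process800produces.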
It will be appreciated that some embodiments described herein may include one or more generic or specialized processors (“one or more processors”) such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field Programmable Gate Arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more Application-Specific Integrated Circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device in hardware and optionally with software, firmware, and a combination thereof can be referred to as “circuitry configured or adapted to,” “logic configured or adapted to,” etc., to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. on digital and/or analog signals as described herein for the various embodiments. Moreover, some embodiments may include a non-transitory computer-readable storage medium having computer-readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc., each of which may include a processor to perform functions as described and claimed herein.
Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments. Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims.
11861473
For simplicity and clarity of illustration, the drawing figures illustrate the general manner of construction, and descriptions and details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the present disclosure. Additionally, elements in the drawing figures are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present disclosure. The same reference numerals in different figures denote the same elements. The terms “first,” “second,” “third,” “fourth,” and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms “include” and “have,” and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, device, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, system, article, device, or apparatus. The terms “left,” “right,” “front,” “back,” “top,” “bottom,” “over,” “under,” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions.
It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the apparatus, methods, and/or articles of manufacture described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein. The terms “couple,” “coupled,” “couples,” “coupling,” and the like should be broadly understood and refer to connecting two or more elements mechanically and/or otherwise. Two or more electrical elements may be electrically coupled together, but not be mechanically or otherwise coupled together. Coupling may be for any length of time, e.g., permanent or semi-permanent or only for an instant. “Electrical coupling” and the like should be broadly understood and include electrical coupling of all types. The absence of the word “removably,” “removable,” and the like near the word “coupled,” and the like does not mean that the coupling, etc. in question is or is not removable. As defined herein, two or more elements are “integral” if they are comprised of the same piece of material. As defined herein, two or more elements are “non-integral” if each is comprised of a different piece of material. As defined herein, “real-time” can, in some embodiments, be defined with respect to operations carried out as soon as practically possible upon occurrence of a triggering event. A triggering event can include receipt of data necessary to execute a task or to otherwise process information. Because of delays inherent in transmission and/or in computing speeds, the term “real time” encompasses operations that occur in “near” real time or somewhat delayed from a triggering event. In a number of embodiments, “real time” can mean real time less a time delay for processing (e.g., determining) and/or transmitting data. 
The particular time delay can vary depending on the type and/or amount of the data, the processing speeds of the hardware, the transmission capability of the communication hardware, the transmission distance, etc. However, in many embodiments, the time delay can be less than approximately one second, two seconds, five seconds, or ten seconds. As defined herein, “approximately” can, in some embodiments, mean within plus or minus ten percent of the stated value. In other embodiments, “approximately” can mean within plus or minus five percent of the stated value. In further embodiments, “approximately” can mean within plus or minus three percent of the stated value. In yet other embodiments, “approximately” can mean within plus or minus one percent of the stated value.
DESCRIPTION OF EXAMPLES OF EMBODIMENTS
A number of embodiments can include a system. The system can include one or more processors and one or more non-transitory storage devices storing computing instructions configured to run on the one or more processors. The computing instructions can be configured to run on the one or more processors and perform acts of collecting historical data of a user; converting the historical data of the user into at least one feature vector; calculating a first user propensity score for the user using the at least one feature vector; calculating a second user propensity score for the user using the at least one feature vector, the second user propensity score representing a different user propensity than the first user propensity score; normalizing the first user propensity score; normalizing the second user propensity score; using the first user propensity score, as normalized, to place the user into a first segment; using the second user propensity score, as normalized, to place the user into a second segment different than the first segment; and facilitating delivery of a message to the user based on the first segment and the second segment.
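The claimed flow, from historical data to feature vector, two propensity scores, normalization, segmentation, and a segment-driven message, might be sketched as follows. Every concrete detail here (the features, the logistic scoring, the weights, and the segment boundaries) is an illustrative assumption rather than the patent's actual model.

```python
import math

def to_feature_vector(history):
    """Convert a user's historical data into a numeric feature vector."""
    return [history["visits_last_30d"], history["purchases_last_30d"],
            history["days_since_last_order"]]

def propensity(weights, features):
    """Logistic propensity score in [0, 1] for one action type."""
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def normalize(score):
    """Map a raw probability onto a 0-100 scale so scores are comparable."""
    return round(100 * score)

def segment(norm_score, low=33, high=66):
    """Place a normalized score into a low/medium/high segment."""
    return "low" if norm_score < low else "medium" if norm_score <= high else "high"

def choose_message(first_seg, second_seg):
    """Pick a message based on both segments (names are illustrative)."""
    return f"campaign-{first_seg}-buy-{second_seg}-churn"

history = {"visits_last_30d": 12, "purchases_last_30d": 2,
           "days_since_last_order": 5}
fv = to_feature_vector(history)
buy_seg = segment(normalize(propensity([0.2, 0.8, -0.1], fv)))      # first score
churn_seg = segment(normalize(propensity([-0.3, -0.5, 0.15], fv)))  # second score
print(choose_message(buy_seg, churn_seg))
```

The two scores come from the same feature vector but represent different propensities, and the message is chosen from both resulting segments, mirroring the claimed acts.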
Various embodiments include a method. The method can include collecting historical data of a user; converting the historical data of the user into at least one feature vector; calculating a first user propensity score for the user using the at least one feature vector; calculating a second user propensity score for the user using the at least one feature vector, the second user propensity score representing a different user propensity than the first user propensity score; normalizing the first user propensity score; normalizing the second user propensity score; using the first user propensity score, as normalized, to place the user into a first segment; using the second user propensity score, as normalized, to place the user into a second segment different than the first segment; and facilitating delivery of a message to the user based on the first segment and the second segment. Several embodiments include a system. A system can include one or more processors and one or more non-transitory computer-readable media storing computing instructions that, when executed on the one or more processors, cause the one or more processors to perform certain acts. The acts can include calculating a first user propensity score and a second user propensity score for a user based on at least one feature vector of historical data of the user. The first user propensity score can represent first propensities for the user to take first actions. The second user propensity score can represent second propensities for the user to take second actions. The acts also can include using the first user propensity score to place the user into a first segment. The acts further can include using the second user propensity score to place the user into a second segment different than the first segment. The acts additionally can include facilitating delivery of a message to an electronic device of the user based on the first segment and the second segment. A number of embodiments include a method. 
A method can include being implemented via execution of computing instructions configured to run at one or more processors and configured to be stored at non-transitory computer-readable media. The method can include calculating a first user propensity score and a second user propensity score for a user based on at least one feature vector of historical data of the user. The first user propensity score can represent first propensities for the user to take first actions. The second user propensity score can represent second propensities for the user to take second actions. The method also can include using the first user propensity score to place the user into a first segment. The method further can include using the second user propensity score to place the user into a second segment different than the first segment. The method additionally can include facilitating delivery of a message to an electronic device of the user based on the first segment and the second segment. Turning to the drawings,FIG.1illustrates an exemplary embodiment of a computer system100, all of which or a portion of which can be suitable for (i) implementing part or all of one or more embodiments of the techniques, methods, and systems and/or (ii) implementing and/or operating part or all of one or more embodiments of the memory storage modules described herein. As an example, a different or separate one of a chassis102(and its internal components) can be suitable for implementing part or all of one or more embodiments of the techniques, methods, and/or systems described herein. Furthermore, one or more elements of computer system100(e.g., a monitor106, a keyboard104, and/or a mouse110, etc.) also can be appropriate for implementing part or all of one or more embodiments of the techniques, methods, and/or systems described herein. 
Computer system100can comprise chassis102containing one or more circuit boards (not shown), a Universal Serial Bus (USB) port112, a Compact Disc Read-Only Memory (CD-ROM) and/or Digital Video Disc (DVD) drive116, and a hard drive114. A representative block diagram of the elements included on the circuit boards inside chassis102is shown inFIG.2. A central processing unit (CPU)210inFIG.2is coupled to a system bus214inFIG.2. In various embodiments, the architecture of CPU210can be compliant with any of a variety of commercially distributed architecture families. Continuing withFIG.2, system bus214also is coupled to a memory storage unit208, where memory storage unit208can comprise (i) non-volatile memory, such as, for example, read only memory (ROM) and/or (ii) volatile memory, such as, for example, random access memory (RAM). The non-volatile memory can be removable and/or non-removable non-volatile memory. Meanwhile, RAM can include dynamic RAM (DRAM), static RAM (SRAM), etc. Further, ROM can include mask-programmed ROM, programmable ROM (PROM), one-time programmable ROM (OTP), erasable programmable read-only memory (EPROM), electrically erasable programmable ROM (EEPROM) (e.g., electrically alterable ROM (EAROM) and/or flash memory), etc. In these or other embodiments, memory storage unit208can comprise (i) non-transitory memory and/or (ii) transitory memory. In various examples, portions of the memory storage module(s) of the various embodiments disclosed herein (e.g., portions of the non-volatile memory storage module(s)) can be encoded with a boot code sequence suitable for restoring computer system100(FIG.1) to a functional state after a system reset. In addition, portions of the memory storage module(s) of the various embodiments disclosed herein (e.g., portions of the non-volatile memory storage module(s)) can comprise microcode such as a Basic Input-Output System (BIOS) operable with computer system100(FIG.1). 
In the same or different examples, portions of the memory storage module(s) of the various embodiments disclosed herein (e.g., portions of the non-volatile memory storage module(s)) can comprise an operating system, which can be a software program that manages the hardware and software resources of a computer and/or a computer network. The BIOS can initialize and test components of computer system100(FIG.1) and load the operating system. Meanwhile, the operating system can perform basic tasks such as, for example, controlling and allocating memory, prioritizing the processing of instructions, controlling input and output devices, facilitating networking, and managing files. Exemplary operating systems can comprise one of the following: (i) Microsoft® Windows® operating system (OS) by Microsoft Corp. of Redmond, Washington, United States of America, (ii) Mac® OS X by Apple Inc. of Cupertino, California, United States of America, (iii) UNIX® OS, and (iv) Linux® OS. Further exemplary operating systems can comprise one of the following: (i) the iOS® operating system by Apple Inc. of Cupertino, California, United States of America, (ii) the Blackberry® operating system by Research In Motion (RIM) of Waterloo, Ontario, Canada, (iii) the WebOS operating system by LG Electronics of Seoul, South Korea, (iv) the Android™ operating system developed by Google, of Mountain View, California, United States of America, (v) the Windows Mobile™ operating system by Microsoft Corp. of Redmond, Washington, United States of America, or (vi) the Symbian™ operating system by Accenture PLC of Dublin, Ireland. 
As used herein, “processor” and/or “processing module” means any type of computational circuit, such as but not limited to a microprocessor, a microcontroller, a controller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor, or any other type of processor or processing circuit capable of performing the desired functions. In some examples, the one or more processing modules of the various embodiments disclosed herein can comprise CPU210. Alternatively, or in addition to, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. For example, one or more of the programs and/or executable program components described herein can be implemented in one or more ASICs. In many embodiments, an application specific integrated circuit (ASIC) can comprise one or more processors or microprocessors and/or memory blocks or memory storage. In the depicted embodiment ofFIG.2, various I/O devices such as a disk controller204, a graphics adapter224, a video controller202, a keyboard adapter226, a mouse adapter206, a network adapter220, and other I/O devices222can be coupled to system bus214. Keyboard adapter226and mouse adapter206are coupled to keyboard104(FIGS.1-2) and mouse110(FIGS.1-2), respectively, of computer system100(FIG.1). While graphics adapter224and video controller202are indicated as distinct units inFIG.2, video controller202can be integrated into graphics adapter224, or vice versa in other embodiments. Video controller202is suitable for monitor106(FIGS.1-2) to display images on a screen108(FIG.1) of computer system100(FIG.1). 
Disk controller204can control hard drive114(FIGS.1-2), USB port112(FIGS.1-2), and CD-ROM drive116(FIGS.1-2). In other embodiments, distinct units can be used to control each of these devices separately. Network adapter220can be suitable to connect computer system100(FIG.1) to a computer network by wired communication (e.g., a wired network adapter) and/or wireless communication (e.g., a wireless network adapter). In some embodiments, network adapter220can be plugged or coupled to an expansion port (not shown) in computer system100(FIG.1). In other embodiments, network adapter220can be built into computer system100(FIG.1). For example, network adapter220can be built into computer system100(FIG.1) by being integrated into the motherboard chipset (not shown), or implemented via one or more dedicated communication chips (not shown), connected through a PCI (peripheral component interconnector) or a PCI express bus of computer system100(FIG.1) or USB port112(FIG.1). Returning now toFIG.1, although many other components of computer system100are not shown, such components and their interconnection are well known to those of ordinary skill in the art. Accordingly, further details concerning the construction and composition of computer system100and the circuit boards inside chassis102are not discussed herein. Meanwhile, when computer system100is running, program instructions (e.g., computer instructions) stored on one or more of the memory storage module(s) of the various embodiments disclosed herein can be executed by CPU210(FIG.2). At least a portion of the program instructions, stored on these devices, can be suitable for carrying out at least part of the techniques and methods described herein. Further, although computer system100is illustrated as a desktop computer inFIG.1, there can be examples where computer system100may take a different form factor while still having functional elements similar to those described for computer system100. 
In some embodiments, computer system100may comprise a single computer, a single server, or a cluster or collection of computers or servers, or a cloud of computers or servers. Typically, a cluster or collection of servers can be used when the demand on computer system100exceeds the reasonable capability of a single server or computer. In certain embodiments, computer system100may comprise a portable computer, such as a laptop computer. In certain other embodiments, computer system100may comprise a mobile electronic device, such as a smartphone. In certain additional embodiments, computer system100may comprise an embedded system. Turning ahead in the drawings,FIG.3illustrates a block diagram of a system300that can be employed for sending messages based on user behavior, as described in greater detail below. System300is merely exemplary and embodiments of the system are not limited to the embodiments presented herein. System300can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, certain elements or modules of system300can perform various procedures, processes, and/or activities. In these or other embodiments, the procedures, processes, and/or activities can be performed by other suitable elements or modules of system300. Generally, therefore, system300can be implemented with hardware and/or software, as described herein. In some embodiments, part or all of the hardware and/or software can be conventional, while in these or other embodiments, part or all of the hardware and/or software can be customized (e.g., optimized) for implementing part or all of the functionality of system300described herein. In some embodiments, system300can include graphical user interface (“GUI”)310, web server320, internet330, user computers340,341, and/or GUI360,361.
GUI310, web server320, internet330, user computers340,341, and/or GUI360,361can each be a computer system, such as computer system100(FIG.1), as described above, and can each be a single computer, a single server, or a cluster or collection of computers or servers, or a cloud of computers or servers. In another embodiment, a single computer system can host each of two or more of GUI310, web server320, internet330, user computers340,341, and/or GUI360,361. Additional details regarding GUI310, web server320, internet330, user computers340,341, and/or GUI360,361are described herein. In many embodiments, user computers340,341can comprise any of the elements described in relation to computer system100. In some embodiments, user computers340,341can be mobile devices. A mobile electronic device can refer to a portable electronic device (e.g., an electronic device easily conveyable by hand by a person of average size) with the capability to present audio and/or visual data (e.g., text, images, videos, music, etc.). For example, a mobile electronic device can comprise at least one of a digital media player, a cellular telephone (e.g., a smartphone), a personal digital assistant, a handheld digital computer device (e.g., a tablet personal computer device), a laptop computer device (e.g., a notebook computer device, a netbook computer device), a wearable user computer device, or another portable computer device with the capability to present audio and/or visual data (e.g., images, videos, music, etc.). Thus, in many examples, a mobile electronic device can comprise a volume and/or weight sufficiently small as to permit the mobile electronic device to be easily conveyable by hand. For example, in some embodiments, a mobile electronic device can occupy a volume of less than or equal to approximately 1790 cubic centimeters, 2434 cubic centimeters, 2876 cubic centimeters, 4056 cubic centimeters, and/or 5752 cubic centimeters.
Further, in these embodiments, a mobile electronic device can weigh less than or equal to 15.6 Newtons, 17.8 Newtons, 22.3 Newtons, 31.2 Newtons, and/or 44.5 Newtons. Exemplary mobile electronic devices can comprise (i) an iPod®, iPhone®, iTouch®, iPad®, MacBook® or similar product by Apple Inc. of Cupertino, California, United States of America, (ii) a Blackberry® or similar product by Research in Motion (RIM) of Waterloo, Ontario, Canada, (iii) a Lumia® or similar product by the Nokia Corporation of Keilaniemi, Espoo, Finland, and/or (iv) a Galaxy™ or similar product by the Samsung Group of Samsung Town, Seoul, South Korea. Further, in the same or different embodiments, a mobile electronic device can comprise an electronic device configured to implement one or more of (i) the iPhone® operating system by Apple Inc. of Cupertino, California, United States of America, (ii) the Blackberry® operating system by Research In Motion (RIM) of Waterloo, Ontario, Canada, (iii) the Palm® operating system by Palm, Inc. of Sunnyvale, California, United States, (iv) the Android™ operating system developed by the Open Handset Alliance, (v) the Windows Mobile™ operating system by Microsoft Corp. of Redmond, Washington, United States of America, or (vi) the Symbian™ operating system by Nokia Corp. of Keilaniemi, Espoo, Finland. Further still, the term “wearable user computer device” as used herein can refer to an electronic device with the capability to present audio and/or visual data (e.g., text, images, videos, music, etc.) that is configured to be worn by a user and/or mountable (e.g., fixed) on the user of the wearable user computer device (e.g., sometimes under or over clothing; and/or sometimes integrated with and/or as clothing and/or another accessory, such as, for example, a hat, eyeglasses, a wrist watch, shoes, etc.). In many examples, a wearable user computer device can comprise a mobile electronic device, and vice versa. 
However, a wearable user computer device does not necessarily comprise a mobile electronic device, and vice versa. In specific examples, a wearable user computer device can comprise a head mountable wearable user computer device (e.g., one or more head mountable displays, one or more eyeglasses, one or more contact lenses, one or more retinal displays, etc.) or a limb mountable wearable user computer device (e.g., a smart watch). In these examples, a head mountable wearable user computer device can be mountable in close proximity to one or both eyes of a user of the head mountable wearable user computer device and/or vectored in alignment with a field of view of the user. In more specific examples, a head mountable wearable user computer device can comprise (i) Google Glass™ product or a similar product by Google Inc. of Menlo Park, California, United States of America; (ii) the Eye Tap™ product, the Laser Eye Tap™ product, or a similar product by ePI Lab of Toronto, Ontario, Canada, and/or (iii) the Raptyr™ product, the STAR 1200™ product, the Vuzix Smart Glasses M100™ product, or a similar product by Vuzix Corporation of Rochester, New York, United States of America. In other specific examples, a head mountable wearable user computer device can comprise the Virtual Retinal Display™ product, or similar product by the University of Washington of Seattle, Washington, United States of America. Meanwhile, in further specific examples, a limb mountable wearable user computer device can comprise the iWatch™ product, or similar product by Apple Inc. of Cupertino, California, United States of America, the Galaxy Gear or similar product of Samsung Group of Samsung Town, Seoul, South Korea, the Moto 360 product or similar product of Motorola of Schaumburg, Illinois, United States of America, and/or the Zip™ product, One™ product, Flex™ product, Charge™ product, Surge™ product, or similar product by Fitbit Inc. of San Francisco, California, United States of America. 
In some embodiments, web server320can be in data communication through Internet330with user computers (e.g.,340,341). In certain embodiments, user computers340-341can be desktop computers, laptop computers, smart phones, tablet devices, and/or other endpoint devices. Web server320can host one or more websites. For example, web server320can host an eCommerce website that allows users to browse and/or search for products, to add products to an electronic shopping cart, and/or to purchase products, in addition to other suitable activities. In many embodiments, GUI310, web server320, internet330, user computers340,341, and/or GUI360,361can each comprise one or more input devices (e.g., one or more keyboards, one or more keypads, one or more pointing devices such as a computer mouse or computer mice, one or more touchscreen displays, a microphone, etc.), and/or can each comprise one or more display devices (e.g., one or more monitors, one or more touch screen displays, projectors, etc.). In these or other embodiments, one or more of the input device(s) can be similar or identical to keyboard104(FIG.1) and/or a mouse110(FIG.1). Further, one or more of the display device(s) can be similar or identical to monitor106(FIG.1) and/or screen108(FIG.1). The input device(s) and the display device(s) can be coupled to the processing module(s) and/or the memory storage module(s) of GUI310, web server320, internet330, user computers340,341, and/or GUI360,361in a wired manner and/or a wireless manner, and the coupling can be direct and/or indirect, as well as locally and/or remotely. As an example of an indirect manner (which may or may not also be a remote manner), a keyboard-video-mouse (KVM) switch can be used to couple the input device(s) and the display device(s) to the processing module(s) and/or the memory storage module(s). In some embodiments, the KVM switch also can be part of GUI310, web server320, internet330, user computers340,341, and/or GUI360,361.
In a similar manner, the processing module(s) and the memory storage module(s) can be local and/or remote to each other. In many embodiments, GUI310, web server320, and/or internet330can be configured to communicate with one or more user computers340and341. In some embodiments, user computers340and341also can be referred to as customer computers. In some embodiments, GUI310and/or web server320can communicate or interface (e.g., interact) with one or more customer computers (such as user computers340and341) through a network or internet330. Internet330can be a public or private network. For example, internet330can be an intranet that is not open to the public. Accordingly, in many embodiments, GUI310and/or web server320(and/or the software used by such systems) can refer to a back end of system300operated by an operator and/or administrator of system300, and user computers340,341(and/or the software used by such systems) can refer to a front end of system300used by one or more users350and351, respectively. In some embodiments, users350and351also can be referred to as customers, in which case, user computers340and341can be referred to as customer computers. In these or other embodiments, the operator and/or administrator of system300can manage system300, the processing module(s) of system300, and/or the memory storage module(s) of system300using the input device(s) and/or display device(s) of system300. In many embodiments, GUI310,360,361can be part of and/or displayed by web server320and/or user computers340,341, which also can be part of system300. In some embodiments, GUI310,360,361can comprise text and/or graphics (image) based user interfaces. In the same or different embodiments, GUI310,360,361can comprise a heads up display (“HUD”).
When GUI310,360,361comprises a HUD, GUI310,360,361can be projected onto glass or plastic, displayed in midair as a hologram, or displayed on monitor106(FIG.1). In various embodiments, GUI310,360,361can be color or black and white. In many embodiments, GUI310,360,361can comprise an application running on a computer system, such as computer system100, user computers340,341, and/or web server320. In the same or different embodiments, GUI310,360,361can comprise a website accessed through internet330. In some embodiments, GUI310,360,361can comprise an eCommerce website. In the same or different embodiments, GUI310,360,361can be displayed as or on a virtual reality (VR) and/or augmented reality (AR) system or display. In many embodiments, GUI310can be the same or different than GUI360,361. Meanwhile, in many embodiments, GUI310, web server320, internet330, user computers340,341, and/or GUI360,361also can be configured to communicate with one or more databases. The one or more databases can comprise a product database that contains information about products, items, or SKUs (stock keeping units) sold by a retailer. The one or more databases can be stored on one or more memory storage modules (e.g., non-transitory memory storage module(s)), which can be similar or identical to the one or more memory storage module(s) (e.g., non-transitory memory storage module(s)) described above with respect to computer system100(FIG.1).
Also, in some embodiments, for any particular database of the one or more databases, that particular database can be stored on a single memory storage module of the memory storage module(s) and/or non-transitory memory storage module(s), and/or the contents of that particular database can be spread across multiple ones of the memory storage module(s) and/or non-transitory memory storage module(s), depending on the size of the particular database and/or the storage capacity of the memory storage module(s) and/or non-transitory memory storage module(s). The one or more databases can each comprise a structured (e.g., indexed) collection of data and can be managed by any suitable database management systems configured to define, create, query, organize, update, and manage database(s). Exemplary database management systems can include MySQL (Structured Query Language) Database, PostgreSQL Database, Microsoft SQL Server Database, Oracle Database, SAP (Systems, Applications, & Products) Database, and IBM DB2 Database. Meanwhile, communication between graphical user interface (“GUI”)310, web server320, internet330, user computers340,341, GUI360,361, and/or the one or more databases can be implemented using any suitable manner of wired and/or wireless communication. Accordingly, system300can comprise any software and/or hardware components configured to implement the wired and/or wireless communication. Further, the wired and/or wireless communication can be implemented using any one or any combination of wired and/or wireless communication network topologies (e.g., ring, line, tree, bus, mesh, star, daisy chain, hybrid, etc.) and/or protocols (e.g., personal area network (PAN) protocol(s), local area network (LAN) protocol(s), wide area network (WAN) protocol(s), cellular network protocol(s), powerline network protocol(s), etc.).
Exemplary PAN protocol(s) can comprise Bluetooth, Zigbee, Wireless Universal Serial Bus (USB), Z-Wave, etc.; exemplary LAN and/or WAN protocol(s) can comprise Institute of Electrical and Electronic Engineers (IEEE) 802.3 (also known as Ethernet), IEEE 802.11 (also known as WiFi), etc.; and exemplary wireless cellular network protocol(s) can comprise Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/Time Division Multiple Access (TDMA)), Integrated Digital Enhanced Network (iDEN), Evolved High-Speed Packet Access (HSPA+), Long-Term Evolution (LTE), WiMAX, etc. The specific communication software and/or hardware implemented can depend on the network topologies and/or protocols implemented, and vice versa. In many embodiments, exemplary communication hardware can comprise wired communication hardware including, for example, one or more data buses, such as, for example, universal serial bus(es), one or more networking cables, such as, for example, coaxial cable(s), optical fiber cable(s), and/or twisted pair cable(s), any other suitable data cable, etc. Further exemplary communication hardware can comprise wireless communication hardware including, for example, one or more radio transceivers, one or more infrared transceivers, etc. Additional exemplary communication hardware can comprise one or more networking components (e.g., modulator-demodulator components, gateway components, etc.). In many embodiments, the techniques described herein can provide a practical application and several technological improvements. 
In some embodiments, the techniques described herein can provide for automatic delivery of targeted messages using specific input data and a machine learning model to provide estimates in the face of uncertain outcomes. These techniques described herein can provide a significant improvement over conventional approaches of simply increasing a number of messages sent to increase interaction rates. Further, the techniques described herein can beneficially make determinations based on dynamic information that describes current conditions and/or conditions that have occurred during the same day of a user interaction with a message. In many embodiments, the techniques described herein can be used continuously at a scale that cannot be handled using manual techniques. For example, a number of daily interactions with messages can exceed a few thousand. In a number of embodiments, the techniques described herein can solve a technical problem that arises only within the realm of computer networks, as electronic messages do not exist outside the realm of computer networks. Moreover, the techniques described herein can solve a technical problem that cannot be solved outside the context of computer networks. Specifically, the techniques described herein cannot be used outside the context of computer networks, in view of a lack of sufficient data, and because the machine learning model cannot be performed without a computer. Turning ahead in the drawings,FIG.4illustrates a flow chart for a method400, according to an embodiment. Method400is merely exemplary and is not limited to the embodiments presented herein. Method400can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, the activities of method400can be performed in the order presented. In other embodiments, the activities of method400can be performed in any suitable order. 
In still other embodiments, one or more of the activities of method400can be combined or skipped. In many embodiments, system300(FIG.3) can be suitable to perform method400and/or one or more of the activities of method400. In these or other embodiments, one or more of the activities of method400can be implemented as one or more computer instructions configured to run at one or more processing modules and configured to be stored at one or more non-transitory memory storage modules. Such non-transitory memory storage modules can be part of a computer system such as graphical user interface (“GUI”)310, web server320, internet330, user computers340,341, and/or GUI360,361(FIG.3). The processing module(s) can be similar or identical to the processing module(s) described above with respect to computer system100(FIG.1). In many embodiments, method400can comprise an activity401of collecting historical data. In various embodiments, historical data can comprise interactions of a user with a GUI, interactions of a user with a past message, a past geographical location of a user, demographics of a user, a time since a user has opted-into automatic messages, a number of in-store transactions of a user, whether a user owns a home, household size of a user, a device type of a user (e.g., tablet, phone, computer, Apple, Android, etc.), a time of day a user is active on a GUI, and/or a browser type of a user (e.g., Safari, Chrome, Firefox, Edge, etc.). In the same or different embodiments, interactions of a user with a GUI can comprise views of an item of a category of items, cart adds of an item of a category of items, registry adds of an item of a category of items, transactions involving an item of the category of items, searches for the item of the category of items, mouse movements of a user, touch pad movements of a user, touchscreen interactions of a user, and/or eye movements of a user.
In various embodiments, interactions of a user with a past message can comprise opening the past message, ignoring the past message, viewing a subject of the past message, viewing a portion of the past message, clicking on a selectable element within the past message (e.g., clicking on a link within an email, entering information into a push notification, etc.), responding to the past message, mouse movements of a user, touch pad movements of a user, touchscreen interactions of a user, and/or eye movements of a user. In many embodiments, historical data can be collected over a specific period of time. In some embodiments, a specific period of time can comprise 1 day, 2 days, 3 days, 4 days, 5 days, 1 month, 2 months, 3 months, 4 months, 5 months, 1 year, 2 years, 3 years, 4 years, 5 years, etc. In some embodiments, activity401and other activities in method400can comprise using a distributed network comprising distributed memory architecture to gather historical data and to store or save the historical data in memory within the computer system. This distributed architecture can reduce the impact on the network and system resources to reduce congestion in bottlenecks while still allowing data to be accessible from a central location. Using distributed architecture can be especially applicable for gathering historic data because gathering large datasets can reduce processing speeds and increase processing burdens on single processor computer systems as well as increase storage burdens on non-distributed systems. Further, in many embodiments, historical datasets can be so large that a human cannot reasonably remember them or record them in their entirety. In many embodiments, method400can comprise an activity402of converting historical data into at least one feature vector. In various embodiments, a feature vector can be configured to be used in a machine learning algorithm, as described in activities404-407. 
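As a sketch of the conversion in activity402(the event format, function names, and 30-day period are hypothetical illustrations, not prescribed by this description), raw interaction events can be aggregated into count-based feature vectors over one discrete time period, with each taxonomy path also incrementing the counts of its broader levels as in the camera example discussed in this section:

```python
from collections import Counter

def taxonomy_prefixes(path):
    """Expand one taxonomy path into every prefix level, so an interaction
    with "Electronics/Camera/SLRcameras/Canon" also counts toward
    "Electronics", "Electronics/Camera", and "Electronics/Camera/SLRcameras"."""
    parts = path.split("/")
    return ["/".join(parts[:i + 1]) for i in range(len(parts))]

def events_to_feature_vector(events, period_days=30):
    """Aggregate (taxonomy_path, day) interaction events that fall inside
    one discrete time period into a count-based feature vector."""
    counts = Counter()
    for path, day in events:
        if day < period_days:  # keep only events inside this period
            for key in taxonomy_prefixes(path):
                counts[key] += 1
    return dict(counts)

events = [
    ("Electronics/Camera", 3),
    ("Electronics/Camera", 12),
    ("Grocery/Milk", 40),  # outside the 30-day period, ignored
]
vector = events_to_feature_vector(events)
# → {"Electronics": 2, "Electronics/Camera": 2}
```

Counting every prefix of the path lets later models query a user's activity at any level of the item taxonomy without re-scanning the raw events.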
In the same or different embodiments, historical data can be stored in the computer system as feature vectors over discrete time periods. In many embodiments, a discrete time period can comprise 1 day, 2 days, 3 days, 4 days, 5 days, 1 month, 2 months, 3 months, 4 months, 5 months, 1 year, 2 years, 3 years, 4 years, 5 years, etc. In the same or different embodiments, a feature vector can comprise a count. In many embodiments, a count can describe interactions of a user with a first GUI, a past geographical location of a user, and/or demographics of a user. In some embodiments, a count can describe interactions of a user with a past message, such as opening the past message, ignoring the past message, viewing a subject of the past message, viewing a portion of the past message, clicking on a selectable element within the past message (e.g., clicking on a link within an email, entering information into a push notification, etc.), responding to the past message, mouse movements of a user, touch pad movements of a user, touchscreen interactions of a user, and/or eye movements of a user. In various embodiments, when interactions of a user with a GUI occur, a count can be added to a feature vector for that interaction. For example, when a user interacts with a website for an item comprising a taxonomy of “Electronics/Camera/SLRcameras/Canon,” counts will be added to feature vectors for: “Electronics,” “Electronics/Camera,” “Electronics/Camera/SLRcameras,” and “Electronics/Camera/SLRcameras/Canon.” In many embodiments, a feature vector can comprise information about a static attribute of a user. For example, a static attribute can comprise demographic information (e.g., gender, race, age, etc.). In embodiments where a feature vector comprises information about a static attribute of a user, a count can be assigned to a specific value of the static attribute.
For example, when a gender of a user comprises male, a count of 20 can be applied to a feature vector for gender, and, when a gender of a user comprises female, a count of 25 can be applied to a feature vector for gender. In embodiments where a feature vector comprises information about interactions of a user with a past message, a count can be added to a feature vector for that interaction. For example, when a user opens a past message and interacts with a selectable element within the past message, a count can be added to viewing a subject of the past message, opening the past message, viewing a portion of the past message, and/or clicking on a selectable element within the past message. In many embodiments, a feature vector can be stored in a database as described above. In some embodiments, activity402and other activities in method400can comprise using a distributed network comprising distributed memory architecture to convert historical data and store the historical data in memory within the computer system. This distributed architecture can reduce the impact on the network and system resources to reduce congestion in bottlenecks while still allowing data to be accessible from a central location. Using distributed architecture can be especially applicable for converting historic data into at least one feature vector, as storing and/or converting large datasets can reduce storage capacity thereby slowing down non-distributed systems. Further, in many embodiments, converting historical datasets into at least one feature vector can be so time consuming that a human cannot reasonably perform activity402. In many embodiments, after activity402, method400can continue with or comprise optional activity403of converting at least one feature vector into a sparse representation of the at least one feature vector. Storage efficiency can be improved by encapsulating feature vectors into coarser, conceptual feature vectors by utilizing a technique known as sparse representation.
In various embodiments, activity403can comprise combining one or more similar feature vectors. As an example, instead of having a plurality of feature vectors representing a number of orders in different departments for a user, these feature vectors can be grouped into one conceptual feature vector that represents orders in the different departments. In some embodiments, a sparse representation of a feature vector can store only non-zero counts for features in the feature vector. Therefore, continuing with the above referenced example, when a user makes purchases only in a small number of departments rather than a large number of departments, many counts in a conceptual feature can be zero, and therefore not stored in the sparse representation of the feature vector. This technique, then, can reduce required storage space, and can consequently make subsequent reading and/or processing of the sparse representation of the feature vector faster than reading and/or processing of one or more feature vectors that are zero. In many embodiments, a sparse representation of a feature vector can be stored in a database as described above. In many embodiments, method400can comprise activity404of calculating a first user propensity score. In some embodiments, activity404occurs after activity403, and in other embodiments, activity404occurs after activity402without performing activity403. In some embodiments, a first user propensity score of activity404can comprise a likelihood of a user interacting with a message. For example, a first user propensity score can comprise a likelihood of a user opening a message, ignoring a message, viewing a subject of a message, viewing a portion of a message, clicking on a selectable element within a message (e.g., clicking on a link within an email, entering information into a push notification, etc.), responding to a message, etc. 
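The sparse representation of activity403described above can be sketched as follows (the dictionary layout is an assumption for illustration; this description does not prescribe a storage format). Only non-zero counts are stored, and downstream reads touch only the stored entries:

```python
def to_sparse(dense):
    """Store only the non-zero counts; zero entries stay implicit."""
    return {i: v for i, v in enumerate(dense) if v != 0}

def sparse_dot(sparse, weights):
    """A dot product over a sparse vector skips every implicit zero,
    which is what makes subsequent reading and processing faster."""
    return sum(v * weights[i] for i, v in sparse.items())

# A user with purchases in only two of six departments:
dense = [0, 0, 3, 0, 1, 0]
sparse = to_sparse(dense)              # {2: 3, 4: 1}
weights = [0.5, 0.1, 2.0, 0.3, 1.0, 0.7]
score = sparse_dot(sparse, weights)    # 3*2.0 + 1*1.0 = 7.0
```

The savings grow with the number of departments: a conceptual feature vector covering thousands of taxonomy nodes still stores only the handful a given user has touched.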
In the same or different embodiments, a first user propensity score can comprise a likelihood of a user completing a specific action on a GUI. For example, a first user propensity score can comprise a likelihood of a user viewing an item of a category of items, adding an item of a category of items to an electronic shopping cart, adding an item of a category of items to a registry, purchasing an item of a category of items, searching for an item of the category of items, navigating to a specific webpage, and/or selecting (e.g., clicking) an element of a GUI. In many embodiments, a category of items can correlate with a level in an item taxonomy. For example, a category of items can comprise electronics, home improvement, pets, grocery, etc. In various embodiments, an item taxonomy database can store an item taxonomy for a catalogue of items. In many embodiments, an item taxonomy can be configured to classify a catalogue of items based on properties of each item of the catalogue of items. In the same or different embodiments, properties of an item can comprise a title, a description, a price, a brand, a manufacturer, a color, a quantity, a volume, and/or a weight. In some embodiments, an item taxonomy can comprise distinct levels of item classification. In further embodiments, distinct levels of item classification can narrow as they go deeper into an item taxonomy. In various embodiments, distinct levels of item classification can comprise a super department, a department, a category, and/or a sub-category. In many embodiments, a department can be deeper in an item taxonomy than a super department. In the same or different embodiments, a category can be deeper in an item taxonomy than a department. In some embodiments, a sub-category can be deeper in an item taxonomy than a category. 
For example, an item taxonomy for Shamrock Farms whole milk can comprise a super department of “Eggs and Dairy,” a department of “Milk,” a category of “Dairy Milk,” and a sub-category of “Whole Milk.” In many embodiments, activity404can further comprise calculating a first user propensity score using at least one feature vector. In further embodiments, a feature vector in activity404can comprise a sparse representation of a feature vector as described in activity403. In the same or different embodiments, at least one feature vector can be used in a machine learning algorithm. In various embodiments, a machine learning algorithm can comprise an algorithm that iteratively determines equations for calculating probabilities of a user as described above. In some embodiments, a machine learning algorithm can comprise a logistic regression model. In the same or different embodiments, a logistic regression model can comprise an equation comprising: P(x)=1/(1+e^(−(β0+β1·x))), wherein P(x) comprises a first user propensity score, x comprises a feature vector, β0comprises an intercept of the logistic regression model, and/or β1comprises a coefficient vector of the same size as x. In many embodiments, activity404can comprise training a logistic regression model. In some embodiments, training a logistic regression model can comprise estimating internal parameters of a model configured to determine a first propensity score of a user. In various embodiments, a logistic regression model can be trained using labeled training data otherwise known as a training dataset. In many embodiments, a training dataset can comprise all or a part of historical data, as described in activities401-402, that has been labeled with its outcome (e.g., an interaction with a GUI and/or an interaction with a message) and/or any number of metalabels.
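The logistic regression equation of activity404can be computed directly as written; the toy feature vector and coefficients below are hypothetical values chosen for illustration:

```python
import math

def propensity_score(x, beta0, beta1):
    """P(x) = 1 / (1 + e^(-(beta0 + beta1·x))), with beta1 a coefficient
    vector of the same size as the feature vector x."""
    z = beta0 + sum(b * v for b, v in zip(beta1, x))
    return 1.0 / (1.0 + math.exp(-z))

x = [1.0, 0.0, 2.0]       # a (dense) feature vector
beta0 = -1.0              # intercept of the logistic regression model
beta1 = [0.4, 0.2, 0.3]   # coefficient vector, same size as x
p = propensity_score(x, beta0, beta1)
# z = -1.0 + 0.4 + 0.6 = 0.0, so p = 0.5
```

Because the sigmoid maps any real-valued score z into (0, 1), the output can be read as a likelihood of the user interacting with a message or completing a specific action on a GUI.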
In the same or different embodiments, training a logistic regression model can comprise maximizing an equation comprising: ℓ(β0,β1)=∏i:yi=1P(xi)·∏i′:yi′=0(1−P(xi′)), wherein ℓ comprises a likelihood function of β0and/or β1, β0comprises an intercept of the logistic regression model, β1comprises a coefficient vector of the same size as x, i comprises an index of training instances with a label of 1, i′ comprises an index of training instances with a label of 0, y comprises a true label of a training instance of xi, P(xi) comprises a predicted label of xiwith a label of 1, and/or P(xi′) comprises a predicted label of xi′ with a label of 0. In many embodiments, activity404can further comprise calculating a first user propensity score with a normal model or a strict model. In some embodiments, a normal model can calculate a first propensity score for a broader and/or larger segment of customers than a strict model. For example, a normal model can predict a probability of a user making a purchase in a home division level in an item taxonomy using feature vectors in the home division as well as feature vectors from an entertainment division, a fashion division, a services division, an enthusiast division, a professional division, and/or an everyday living division. On the other hand, a strict model can predict a probability of a user making a purchase in a home division level in an item taxonomy using feature vectors in only the home division. In various embodiments, a normal model can be trained on labeled training data (as described above). In the same or different embodiments, a normal model can be trained on labeled training data comprising a plurality of metalabels. In various embodiments, labeled training data comprising a plurality of metalabels can comprise one or more feature vectors tagged with one or more metalabels.
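Maximizing the likelihood ℓ(β0,β1) described above is equivalent to gradient ascent on its logarithm. The toy implementation below (the learning rate, epoch count, and dataset are assumptions for illustration; a production system would use an optimized library) estimates β0 and β1 from labeled training instances:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit beta0 and beta1 by stochastic gradient ascent on the
    log-likelihood: for each instance, (yi - P(xi)) is the gradient
    contribution for both the intercept and the coefficients."""
    n_features = len(X[0])
    beta0, beta1 = 0.0, [0.0] * n_features
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(beta0 + sum(b * v for b, v in zip(beta1, xi)))
            err = yi - p
            beta0 += lr * err
            beta1 = [b + lr * err * v for b, v in zip(beta1, xi)]
    return beta0, beta1

# Tiny separable dataset: label 1 when the single feature is large.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
beta0, beta1 = train_logistic(X, y)
p_low = sigmoid(beta0 + beta1[0] * 0.0)   # below 0.5 after training
p_high = sigmoid(beta0 + beta1[0] * 3.0)  # above 0.5 after training
```

The same routine serves both the normal and strict models; they differ only in which feature vectors (all divisions versus a single division) are passed in as X.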
In the same or different embodiments, labeled training data comprising a plurality of metalabels can comprise a plurality of feature vectors, where each feature vector of the plurality of feature vectors is tagged with a different metalabel. In many embodiments, a strict model can be trained on labeled training data (as described above). In the same or different embodiments, a strict model can be trained on labeled training data comprising only one metalabel. In various embodiments, labeled training data comprising one metalabel can comprise one or more feature vectors tagged with the same metalabel. In many embodiments, a metalabel can comprise a portion of an item taxonomy, as described above. For example, a metalabel can correspond with a department section of an eCommerce retailer (e.g., electronics, home improvement, pharmacy, pets, grocery, etc.). In many embodiments, after activity404, method400can comprise activity405of calculating a second user propensity score. In some embodiments, a second user propensity score can comprise a likelihood of a user interacting with a message. For example, a second user propensity score can comprise a likelihood of a user opening a message, ignoring a message, viewing a subject of a message, viewing a portion of a message, clicking on a selectable element within a message (e.g., clicking on a link within an email, entering information into a push notification, etc.), responding to a message, etc. In the same or different embodiments, a second user propensity score can comprise a likelihood of a user completing a specific action on a GUI.
For example, a second user propensity score can comprise a likelihood of a user viewing an item of a category of items, adding an item of a category of items to an electronic shopping cart, adding an item of a category of items to a registry, purchasing an item of a category of items, searching for an item of the category of items, navigating to a specific webpage, and/or selecting (e.g., clicking) an element of a GUI. In many embodiments, a category of items can correlate with a level in an item taxonomy. For example, a category of items can comprise electronics, home improvement, pets, grocery, etc. In various embodiments, a second user propensity score can be different than a first user propensity score. For example, in embodiments where a first user propensity score comprises a likelihood of a user interacting with a message, a second user propensity score can comprise a likelihood of a user completing a specific action on a GUI (or vice versa). In many embodiments, activity405can further comprise calculating a second user propensity score using at least one feature vector. In further embodiments, a feature vector in activity405can comprise a sparse representation of a feature vector, as described in activity403. In the same or different embodiments, at least one feature vector can be used in a machine learning algorithm. In various embodiments, a machine learning algorithm can comprise an algorithm that iteratively determines equations for calculating probabilities of a user as described above. In some embodiments, a machine learning algorithm can comprise a logistic regression model. In the same or different embodiments, a logistic regression model can comprise an equation comprising: P(x)=1/(1+e^(−(β0+β1·x))), wherein P(x) comprises a second user propensity score, x comprises a feature vector, β0comprises an intercept of the logistic regression model, and/or β1comprises a coefficient vector of the same size as x.
In many embodiments, activity405can comprise training a logistic regression model. In some embodiments, training a logistic regression model can comprise estimating internal parameters of a model configured to determine a second propensity score of a user. In various embodiments, a logistic regression model can be trained using labeled training data, otherwise known as a training dataset. In many embodiments, a training dataset can comprise all or a part of historical data, as described in activities401-402, that has been labeled with its outcome (e.g., an interaction with a GUI and/or an interaction with a message) and/or any number of metalabels. In the same or different embodiments, training a logistic regression model can comprise maximizing an equation comprising: ℓ(β0, β1) = ∏i:yi=1 P(xi) · ∏i′:yi′=0 (1 − P(xi′)), wherein ℓ(β0, β1) comprises a likelihood function of β0 and/or β1, β0 comprises an intercept of the logistic regression model, β1 comprises a coefficient vector of a same size as x, i comprises an index of training instances with a label of 1, i′ comprises an index of training instances with a label of 0, yi comprises a true label of a training instance xi, P(xi) comprises a predicted label of xi with a label of 1, and/or P(xi′) comprises a predicted label of xi′ with a label of 0. In many embodiments, activity405can further comprise calculating a second user propensity score with a normal model or a strict model. In some embodiments, a normal model can calculate a second propensity score for a broader and/or larger segment of customers than a strict model. In various embodiments, a normal model can be trained on labeled training data (as described above). In the same or different embodiments, a normal model can be trained on labeled training data comprising a plurality of metalabels. In various embodiments, labeled training data comprising a plurality of metalabels can comprise one or more feature vectors tagged with one or more metalabels.
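Maximizing the likelihood ℓ(β0, β1) is conventionally done by maximizing its logarithm. The following is a minimal sketch on a toy one-feature dataset, using crude gradient ascent; a production system would use an off-the-shelf solver, and the data values here are illustrative assumptions:

```python
import math

def log_likelihood(data, beta0, beta1):
    # log l(beta0, beta1): sum over positives of log P(x) plus
    # sum over negatives of log(1 - P(x)).
    ll = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

# Toy labeled instances: (feature, label).
data = [(0.2, 0), (0.4, 0), (1.5, 1), (2.0, 1)]

# Gradient ascent on (beta0, beta1); the gradient of the log-likelihood
# is sum(y - p) for the intercept and sum((y - p) * x) for the slope.
b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(500):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
    b0 += lr * g0
    b1 += lr * g1
```

After training, the fitted parameters assign higher P(x) to the positive instances than the untrained parameters do, i.e., the likelihood has increased.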
In the same or different embodiments, labeled training data comprising a plurality of metalabels can comprise a plurality of feature vectors, where each feature vector of the plurality of feature vectors is tagged with a different metalabel. In many embodiments, a strict model can be trained on labeled training data (as described above). In the same or different embodiments, a strict model can be trained on labeled training data comprising only one metalabel. In various embodiments, labeled training data comprising one metalabel can comprise one or more feature vectors tagged with the same metalabel. In many embodiments, a metalabel can comprise a portion of an item taxonomy, as described above. For example, a metalabel can correspond with a department section of an eCommerce retailer (e.g., electronics, home improvement, pharmacy, pets, grocery, etc.). In many embodiments, method400can comprise an activity406of normalizing a first user propensity score. In some embodiments, when a number of training instances for a first label is lower than a number of training instances for a second label, normalizing a first user propensity score can comprise downsampling instances where a label comprises 0 (e.g., there is no label). In various embodiments, normalizing a first user propensity score can comprise retaining instances where a label comprises 1 (e.g., there is a label). In many embodiments, when a number of training instances for a first label is lower than a number of training instances for a second label, normalizing a first user propensity score can comprise using a prior correction technique. In various embodiments, a prior correction technique can be configured to alter propensity scores to better reflect an actual probability, while also making propensity scores more comparable across different levels of an item taxonomy.
In many embodiments, using a prior correction technique can comprise using an equation comprising: β̃0 = β0 − ln[((1 − τ)/τ)(ȳ/(1 − ȳ))], wherein β̃0 comprises a corrected intercept of a logistic regression model, β0 comprises an intercept of the logistic regression model, τ comprises a fraction of ones in a population, and ȳ comprises a fraction of ones in a sample. In many embodiments, method400can comprise an activity407of normalizing a second user propensity score. In the same or different embodiments, when a number of training instances for a first label is lower than a number of training instances for a second label, normalizing a second user propensity score can comprise downsampling instances where a label comprises 0 (e.g., there is no label). In various embodiments, when a number of training instances for a first label is lower than a number of training instances for a second label, normalizing a second user propensity score can comprise retaining instances where a label comprises 1 (e.g., there is a label). In some embodiments, normalizing a second user propensity score can comprise using a prior correction technique. In various embodiments, a prior correction technique can be configured to alter propensity scores to better reflect an actual probability, while also making propensity scores more comparable across different levels of an item taxonomy. In many embodiments, using a prior correction technique can comprise using an equation comprising: β̃0 = β0 − ln[((1 − τ)/τ)(ȳ/(1 − ȳ))], wherein β̃0 comprises a corrected intercept of a logistic regression model, β0 comprises an intercept of the logistic regression model, τ comprises a fraction of ones in a population, and ȳ comprises a fraction of ones in a sample. Activity407occurs after activity405, and similarly, activity406occurs after activity404. Also, activity407can occur before or after activity404and/or406, and similarly, activity406can occur before or after activity405and/or407. Continuing with method400, in many embodiments, method400can comprise an activity408of placing a user into a first segment.
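Assuming the standard prior-correction form reconstructed above (intercept minus the log-odds ratio between sample and population class balance), the adjustment is a one-line computation; the numeric values below are illustrative:

```python
import math

def prior_corrected_intercept(beta0, tau, ybar):
    # Corrects a logistic intercept fitted on a downsampled training
    # sample so that scores reflect the population probability.
    #   tau:  fraction of ones in the population
    #   ybar: fraction of ones in the sample
    return beta0 - math.log(((1.0 - tau) / tau) * (ybar / (1.0 - ybar)))

# E.g., 2% positives in the population but a balanced 50/50 sample:
# the intercept is shifted down by ln(49) to deflate the scores.
corrected = prior_corrected_intercept(beta0=0.3, tau=0.02, ybar=0.5)
```

When the sample over-represents positives (ȳ > τ), the correction lowers the intercept, so the normalized propensity scores are smaller than the raw sample-based scores, as expected.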
In various embodiments, placing a user into a first segment can be done based upon a first user propensity score. In the same or different embodiments, placing a user into a first segment can be done based upon a normalized first user propensity score. For example, in one embodiment,FIG.6displays a distribution of users segmented by a first propensity score labeled from very low to very high. A number of users in each segment is displayed within each segment, and approximate first propensity scores or normalized first propensity scores for each segment are displayed in parentheses below the label. Returning toFIG.4, in many embodiments, method400can comprise an activity409of placing a user into a second segment. In various embodiments, placing a user into a second segment can be done based upon a second user propensity score. In some embodiments, a second segment can be different than a first segment much like a first propensity score can be different than a second propensity score, as described in activity405. In the same or different embodiments, placing a user into a second segment can be done based upon a normalized second user propensity score. For example, in one embodiment,FIG.7displays a distribution of users segmented by a second propensity score labeled from very low to very high. A number of users in each segment is displayed within each segment, and approximate second propensity scores or normalized second propensity scores for each segment are displayed in parentheses below the label. In many embodiments, a first segment and a second segment can be displayed in a same table. For example,FIG.9displays an embodiment where an x axis comprises a first user propensity score, as described above, and a y axis comprises a second user propensity score as described above. In this way, an administrator of a system and/or method can further stratify users into segments and/or combinations of segments. 
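The very-low-through-very-high segmentation described above can be sketched as a simple bucketing of normalized propensity scores; the thresholds and user identifiers here are illustrative assumptions, not the values shown in the figures:

```python
# Hypothetical segment boundaries over normalized scores in [0, 1].
SEGMENTS = [
    (0.2, "very low"),
    (0.4, "low"),
    (0.6, "medium"),
    (0.8, "high"),
    (1.01, "very high"),  # upper bound slightly above 1.0 to include 1.0
]

def segment_for(score):
    # Return the first segment whose upper bound exceeds the score.
    for upper, name in SEGMENTS:
        if score < upper:
            return name
    return "very high"

users = {"u1": 0.05, "u2": 0.55, "u3": 0.93}
first_segments = {u: segment_for(s) for u, s in users.items()}
```

Applying the same bucketing to a second (normalized) propensity score yields the second segmentation, and the two segment assignments together give the two-axis stratification described for FIG.9.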
Returning, again, toFIG.4, activity409occurs after activities405and407, and similarly, activity408occurs after activities404and406. Also, activity409can occur before or after activities404,406, and408, and similarly, activity408can occur before or after activities405,407, and409. In many embodiments, activities404-409can occur in parallel (e.g., at the same time) with each other. Continuing with method400, in many embodiments, method400can comprise an activity410of facilitating a display of a first segment on a GUI. In many embodiments, a first segment can be displayed on a GUI as one or more selectable elements (e.g., buttons and/or checkboxes) displayed on the GUI. In the same or different embodiments, a first segment can be displayed as a part of a drop down menu as shown inFIG.8. In some embodiments, a first segment can be displayed as a search result after a search is completed by an administrator of method400, as shown inFIG.8. In many embodiments, method400can comprise an activity411of facilitating a display of a second segment on a GUI. In many embodiments, a second segment can be displayed on a GUI as one or more selectable elements (e.g., buttons and/or checkboxes) displayed on the GUI. In the same or different embodiments, a second segment can be displayed as a part of a drop down menu as shown inFIG.8. In some embodiments, a second segment can be displayed as a search result after a search is completed by an administrator of method400, as shown inFIG.8. Returning toFIG.4, in many embodiments, method400can comprise an activity412of facilitating a display of a normal model on a GUI. In many embodiments, a normal model can be displayed on a GUI as one or more selectable elements (e.g., buttons and/or checkboxes) displayed on the GUI. In the same or different embodiments, a normal model can be displayed as a part of a drop down menu as shown inFIG.8. 
In some embodiments, a normal model can be displayed as a search result after a search is completed by an administrator of method400(FIG.4), as shown inFIG.8. Returning, again, toFIG.4, in many embodiments, method400can comprise an activity413of facilitating a display of a strict model on a GUI. In various embodiments, activity413can occur at the same time or in conjunction with activity412as described above. In many embodiments, a strict model can be displayed on a GUI as one or more selectable elements (e.g., buttons and/or checkboxes) displayed on the GUI. In the same or different embodiments, a strict model can be displayed as a part of a drop down menu as shown inFIG.8. In some embodiments, a strict model can be displayed as a search result after a search is completed by an administrator of method400(FIG.4), as shown inFIG.8. Activity413can occur after activity411, and similarly, activity412can occur after activity410. Also, activities411and413, if performed, occur after activities405,407, and409, and can occur before or after activities404,406,408,410, and/or412. Similarly, activities410and412, if performed, occur after activities404,406, and408, and can occur before or after activities405,407,409,411, and/or413. In many embodiments, after activity409, method400can comprise an activity414of receiving a selection on a GUI. Activity414also can occur after activity410,411,412, and/or413, if one or more of activities410,411,412, and413are performed. For example, in some embodiments, only activities410and411are performed, and activities412and413are not performed. In further embodiments, activity414can comprise receiving a plurality of selections on a GUI. In the same or different embodiments, a selection on a GUI can comprise a selection of a selectable element, a drop down menu, and/or a search result. 
In various embodiments, a selection on a GUI can comprise a selection of a first segment, a selection of a second segment, a selection of a normal model, and/or a selection of a strict model. In many embodiments, activity414can further comprise filtering a set of users based upon a selection received from a GUI. For example, when a selection on a GUI comprises a first segment, users not in a first segment can be removed from a set to create a subset. As another example, when a selection on a GUI comprises a strict model, users not identified as having a propensity score can be removed from a set to create a subset. As a further example, when a selection on a GUI comprises a first segment and a strict model, users not in the first segment or not identified as having a propensity score can be removed, thereby creating a subset of users in the first segment and identified as having a propensity score by the strict model. In many embodiments, method400can comprise an activity415of facilitating delivery of a message. In some embodiments, a message can comprise text, images, and/or audio transmitted to an electronic device. For example, a message can comprise an email, a text message (e.g., SMS or MMS), a direct message, a push notification, a voicemail, a voice memo, etc. In various embodiments, a message can be delivered to only a subset of users, as described in activity414. In the same or different embodiments, a message can comprise information about metalabels used to train a normal model and/or a strict model depending on which is selected in activities412-413. In many embodiments, a message can be delivered to only users in a first segment and/or a second segment depending on which is selected in activities410-411. Activity415occurs after activity408and/or409, and can occur after activity414, if performed. In many embodiments, after activity415, method400can comprise an activity416of determining when a user interacted with a message.
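The filtering of activity414 amounts to intersecting the GUI selections against the user set; the field names and the rule that a strict-model selection drops unscored users are illustrative assumptions:

```python
# Hypothetical user records: each has a first-segment assignment and a
# strict-model score (None when the strict model produced no score).
users = [
    {"id": 1, "segment": "high", "strict_score": 0.7},
    {"id": 2, "segment": "low",  "strict_score": 0.9},
    {"id": 3, "segment": "high", "strict_score": None},
]

# GUI selection: a first segment plus the strict model.
selection = {"segment": "high", "model": "strict"}

subset = [
    u for u in users
    if u["segment"] == selection["segment"]
    and (selection["model"] != "strict" or u["strict_score"] is not None)
]
```

Here only user 1 survives both filters: user 2 is outside the selected segment, and user 3 has no strict-model score; the message of activity415 would then be delivered only to this subset.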
In the same or different embodiments, a user can interact with a message when opening the message, ignoring the message, viewing a subject of the message, viewing a portion of the message, clicking on a selectable element within the message (e.g., clicking on a link within an email, entering information into a push notification, etc.), responding to the message, moving a mouse over the message, moving a touch pad pointer over a message, tapping/touching or hovering over the message on a touchscreen device, and/or looking at a message on a device with gaze tracking technology. In many embodiments, activity416can further comprise adding data (comprising when a user has interacted with or ignored a message) to historical data as described in activity401. In this way, machine learning algorithms used to calculate user propensity scores can be further refined and made more accurate. Turning ahead in the drawings,FIG.5illustrates a block diagram of a system500that can be employed for behavior based messaging. System500is merely exemplary and embodiments of the system are not limited to the embodiments presented herein. System500can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, certain elements or modules of system500can perform various procedures, processes, and/or activities. In these or other embodiments, the procedures, processes, and/or activities can be performed by other suitable elements or modules of system500. Generally, therefore, system500can be implemented with hardware and/or software, as described herein. In some embodiments, part or all of the hardware and/or software can be conventional, while in these or other embodiments, part or all of the hardware and/or software can be customized (e.g., optimized) for implementing part or all of the functionality of system500described herein. In many embodiments, system500can comprise non-transitory memory storage module501.
Memory storage module501can be referred to as historical data collecting module501. In many embodiments, historical data collecting module501can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity401(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module502. Memory storage module502can be referred to as data converting module502. In many embodiments, data converting module502can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity402(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module503. Memory storage module503can be referred to as feature vector converting module503. In many embodiments, feature vector converting module503can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity403(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module504. Memory storage module504can be referred to as first propensity score calculating module504. In many embodiments, first propensity score calculating module504can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity404(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module505. Memory storage module505can be referred to as second propensity score calculating module505. In many embodiments, second propensity score calculating module505can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity405(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module506. 
Memory storage module506can be referred to as first propensity score normalizing module506. In many embodiments, first propensity score normalizing module506can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity406(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module507. Memory storage module507can be referred to as second propensity score normalizing module507. In many embodiments, second propensity score normalizing module507can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity407(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module508. Memory storage module508can be referred to as first segment placing module508. In many embodiments, first segment placing module508can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity408(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module509. Memory storage module509can be referred to as second segment placing module509. In many embodiments, second segment placing module509can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity409(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module510. Memory storage module510can be referred to as first segment display module510. In many embodiments, first segment display module510can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity410(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module511. 
Memory storage module511can be referred to as second segment display module511. In many embodiments, second segment display module511can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity411(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module512. Memory storage module512can be referred to as normal model display module512. In many embodiments, normal model display module512can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity412(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module513. Memory storage module513can be referred to as strict model display module513. In many embodiments, strict model display module513can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity413(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module514. Memory storage module514can be referred to as selection receiving module514. In many embodiments, selection receiving module514can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity414(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module515. Memory storage module515can be referred to as message delivering module515. In many embodiments, message delivering module515can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity415(FIG.4)). In many embodiments, system500can comprise non-transitory memory storage module516. Memory storage module516can be referred to as user interaction determining module516.
In many embodiments, user interaction determining module516can store computing instructions configured to run on one or more processing modules and perform one or more acts of method400(FIG.4) (e.g., activity416(FIG.4)). Although systems and methods for behavior based messaging have been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes may be made without departing from the spirit or scope of the disclosure. Accordingly, the disclosure of embodiments is intended to be illustrative of the scope of the disclosure and is not intended to be limiting. It is intended that the scope of the disclosure shall be limited only to the extent required by the appended claims. For example, to one of ordinary skill in the art, it will be readily apparent that any element ofFIGS.1-8may be modified, and that the foregoing discussion of certain of these embodiments does not necessarily represent a complete description of all possible embodiments. For example, one or more of the procedures, processes, or activities ofFIG.4may include different procedures, processes, and/or activities and be performed by many different modules, in many different orders. All elements claimed in any particular claim are essential to the embodiment claimed in that particular claim. Consequently, replacement of one or more claimed elements constitutes reconstruction and not repair. Additionally, benefits, other advantages, and solutions to problems have been described with regard to specific embodiments. The benefits, advantages, solutions to problems, and any element or elements that may cause any benefit, advantage, or solution to occur or become more pronounced, however, are not to be construed as critical, required, or essential features or elements of any or all of the claims, unless such benefits, advantages, solutions, or elements are stated in such claim. 
Moreover, embodiments and limitations disclosed herein are not dedicated to the public under the doctrine of dedication if the embodiments and/or limitations: (1) are not expressly claimed in the claims; and (2) are or are potentially equivalents of express elements and/or limitations in the claims under the doctrine of equivalents.
11861474 Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION The specification generally describes an operation placement assignment system that generates dynamic placement assignments that assign computationally-intensive operations represented in a computational graph across various devices in a computational environment. The operation placement assignment system improves execution speed of the computational graph and optimizes resource usage when the devices execute the operations of the computational graph. Devices in a computational environment often run applications that frequently run computationally expensive machine learning models using deep neural networks. These machine learning tasks can be computationally intensive, requiring significant resources from the devices executing them. For example, some machine learning models may have many parameters, e.g., millions of parameters, and are therefore difficult to deploy on a computing device with limited computational resources, e.g., on a mobile device. Each machine learning task can be in the form of a computational graph that includes nodes connected by directed edges. Each node in a computational graph represents an operation. An incoming edge to a node represents a flow of an input into the node, i.e., an input to the operation represented by the node. An outgoing edge from a node represents a flow of an output of the operation represented by the node to be used as an input to an operation represented by another node. Thus, a directed edge connecting a first node in the graph to a second node in the graph indicates that an output generated by the operation represented by the first node is used as an input to the operation represented by the second node. For example, a computational graph can represent the operations performed by a machine learning model to determine an output for a received input.
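The node-and-edge structure described above can be sketched with a minimal data representation; the operation names and types are illustrative assumptions:

```python
# A computational graph: nodes map operation names to operation types,
# and each directed edge (src, dst) means the output of src flows into dst.
graph = {
    "nodes": {
        "read_input": "io",
        "matmul": "matrix_multiply",
        "softmax": "activation",
    },
    "edges": [("read_input", "matmul"), ("matmul", "softmax")],
}

def inputs_of(node):
    # Operations whose outputs feed this node (incoming edges).
    return [src for src, dst in graph["edges"] if dst == node]
```

In this representation, the edge ("matmul", "softmax") encodes the data dependency: the matrix multiply must execute before the softmax that consumes its output.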
Thus, for example, the directed edges may represent dependencies of a neural network. Activations can flow in the direction of the edges. As another example, the computational graph can represent the operations performed to train a machine learning model on training data. Thus the operations may comprise determining modified values for the parameters. An example operation placement assignment system assigns computational graph operations (e.g., machine learning operations or other types of computationally-intensive operations) across devices communicating with one another in a computational environment so that the tasks can be performed quickly and efficiently. FIG.1illustrates an example operation placement assignment system100. The operation placement assignment system100is an example of a system implemented as computer programs on one or more computers in one or more locations, in which the systems, components, and techniques described below can be implemented. The operation placement assignment system100determines optimal placement of computational graph operations for an application105running on a local device102in a computational environment101. Since these operations can be resource-intensive, e.g., when the operations are machine learning operations, requiring large amounts of processing power and energy, running the operations locally on a device executing the application may make the device, as well as the operations, slow and inefficient, may undesirably shorten the battery life of the device, or otherwise undesirably impact the performance of the device. Therefore, the operation placement assignment system100determines operation placement assignments across multiple devices, in a way that provides optimal execution of the operations. Ideal placement of graph operations depends on many variables including computing capabilities and constraints of computing devices in addition to optimization goals. 
For example, the operation placement assignment system100may determine that running operations locally makes a device102slow and inefficient. However, the operation placement system100needs to balance the speed and efficiency of the local device102with the capabilities of remote devices103a-cand the network connectivity and network speed of a data communication network104connecting the local device102and the remote devices103a-cwhen determining placement and subsequent execution of the operations. Remote devices may be other devices on a local network or may be physical or virtual devices in a cloud-computing environment (e.g., device103d). It can be difficult for an application developer to account for all the variables and determine the optimal execution location for each particular computational operation when developing the application. As shown inFIG.1, the operation placement assignment system100uses a machine learning model132to determine the optimal execution placement for any computational graph given context information from a computational environment and data characterizing the computational graph. In some implementations, the machine learning model132is a neural network, e.g., a deep neural network. Neural networks are machine learning models that employ one or more layers of neurons to generate an output, e.g., one or more classifications, for a received input. Deep neural networks include one or more hidden layers in addition to an output layer. The output of each hidden layer is used as input to the next layer in the network, i.e., the next hidden layer or the output layer. Each layer of the neural network generates an output from a received input in accordance with current values of a respective set of parameters for the layer. 
As illustrated inFIG.1, the machine learning model is trained by a model training system160located in the cloud, i.e., located remotely from the local device102, and then deployed on the local device102, e.g., over the network104. In other examples, however, the model training system160, the placement assignment system100, and, therefore, the machine learning model132are both implemented in the cloud, e.g., on one or more server computers that are remote from the local device102. In these examples, the local device102can send the data necessary for the system100to generate a placement assignment to the system100over the network104and receive the generated placement assignment from the system100over the network104. Neural networks can be trained using reinforcement learning to generate predicted outputs. Generally, in a reinforcement learning training technique, a reward is received and is used to adjust the values of the parameters of the neural network. The training process for the machine learning model132is described in more detail below. The operation placement assignment system100generates a model input142including context information140i.e., data characterizing the current state of the computational environment101, and computational graph data144, i.e., data characterizing the computational graph that is to be executed, and provides the model input142as input to the machine learning model132. The machine learning model132processes the model input142to determine placement assignments148of the computational graph operations across remote computing devices103a-cand the local computing device102in the computational environment101. Each placement assignment is an assignment of a specific computational operation in the computational graph to a computing device in the computational environment. 
For example, if the machine learning model determines from the context information that the local device102does not have enough battery power to perform a particular machine learning operation or group of operations to completion, the model can assign the operation or group of operations to a remote device103a-c. As another example, the machine learning model132can learn to assign execution of compute-heavy parts of a computational graph to remote devices with better processing capabilities than the local device, especially when little data is required to be sent over a network connection or the local device has access to good data connection. The machine learning model132can additionally or alternatively learn to dynamically change placement assignments based on the compute resources available on the local device. For example, if a local device has a fast GPU, the machine learning model takes the processing speed into consideration when determining whether to send computational data across a network to a remote device for evaluation and execution of an operation. The operation placement assignment system100provides the determined placement assignments148to the application105or process that provided the computational graph data. The application105or process then uses the placement assignments148to assign the operations of the computational graph to be executed by the devices of the computational environment101corresponding to the placement assignments. FIG.2illustrates an example flow diagram of an example process200for determining optimal placement of computational graph operations given a specific computational environment. For convenience, the process200will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification. 
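The kinds of decisions described here are learned by the model rather than hand-coded, but the effect can be illustrated with a simple heuristic sketch; the thresholds and context fields are assumptions for illustration only, not the trained policy:

```python
# Illustrative placement decision: route a compute-heavy operation
# off-device when local resources are scarce. A real system would learn
# this policy; this sketch only mimics the described behavior.
def place_operation(op_cost, context):
    if context["battery"] < 0.2 and not context["charging"]:
        return "remote"  # not enough battery to run locally
    if op_cost > context["local_flops_budget"]:
        return "remote"  # exceeds local compute capability
    return "local"

placement = place_operation(
    op_cost=5e9,
    context={"battery": 0.15, "charging": False, "local_flops_budget": 1e10},
)
```

With a low, non-charging battery the operation is sent to a remote device, whereas a well-resourced local device (e.g., one with a fast GPU and ample battery) keeps it local.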
For example, an operation placement system, e.g., the operation placement system 100 of FIG. 1, appropriately programmed, can perform the process 200. As illustrated in FIG. 2, to determine placement assignments for a computational graph, the system obtains data characterizing the computational graph (210). The computational graph includes a plurality of nodes representing operations and directed edges representing data dependencies. The data characterizing the computational graph can be data from an application 105 needing to perform computationally-intensive tasks, e.g., machine learning tasks, represented by the computational graph. The data can include a description of the computational task to be carried out, the data required to carry out the computational task including data dependencies, the operations of the computational task, and the structure of the computational task including information about the operations to execute and any metadata associated with the operations. The computational graph data may also include information about the location of the data required to carry out the computational task, e.g., the device on which the data is stored, the kind of memory in which the data is stored, or the network connection/speed required to access the data. In some implementations, the system embeds the operations of the computational graph. That is, the system generates or receives a respective embedding for each operation in the computational graph that is to be placed on one of the devices in the computational environment. An embedding is an ordered collection of numeric values, e.g., a vector of floating point or quantized floating point values, that represents an operation in an embedding space. For each input graph, the system collects the types of the graph's operations. An operation's type describes the underlying computation (e.g., matrix multiply or conversion to two dimensions) of the operation. For each type, the system stores a tunable embedding vector.
The system generates an embedding by recording the size of each operation's list of output tensors and concatenating them into a fixed-size zero-padded list, referred to as the output shape. The system also identifies the one-hot encoding vector that represents the operations that are direct inputs and outputs to each operation, i.e., that are connected to the node representing the operation by an edge. The embedding of each operation is the concatenation of the operation's type embedding, its output shape, and its one-hot encoded adjacency information. Other data that may be included are: the depth of an operation within the network, a name of the operation, or the computational cost of an operation. The system also receives context information for a computational environment in which to perform the operations of the computational graph (220). The context information for the computational environment can include data identifying the available computing devices in the computational environment, the processing and/or storage capabilities of the available computing devices, available memory on the available computing devices, data about a network connecting the devices in the computational environment, e.g., one or more of the network bandwidth, a latency of communication on the network, or the network speed, battery life of the available computing devices, and other information about the computational environment 148 needed to make a decision about the appropriate device on which to execute a particular operation. Context information may also include the current battery level or whether a device is charging. The context information may be represented in a manner which facilitates combining this information with the computational graph data for the model input, for example by generating an embedding which, for each device, concatenates the device input(s) and output(s) and properties as described above.
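The per-operation embedding just described (a tunable type embedding, a zero-padded list of output-tensor sizes, and one-hot adjacency information, concatenated) can be sketched as follows; the embedding dimensions, type vocabulary, and helper names are illustrative assumptions.

```python
# Hypothetical sketch of the per-operation embedding: tunable type embedding +
# zero-padded output shape + one-hot encoding of the adjacent operation.
TYPE_EMBEDDINGS = {            # tunable embedding vector stored per op type
    "matmul": [0.1, 0.9],
    "reshape": [0.7, 0.2],
}
MAX_OUTPUTS = 3                # fixed size for the zero-padded output-shape list

def output_shape(sizes):
    # Record the output tensor sizes, zero-padded to a fixed length.
    padded = list(sizes)[:MAX_OUTPUTS]
    return padded + [0] * (MAX_OUTPUTS - len(padded))

def one_hot(index, num_ops):
    # One-hot vector identifying an operation adjacent to this one.
    return [1 if i == index else 0 for i in range(num_ops)]

def embed_op(op_type, out_sizes, neighbor_idx, num_ops):
    # Concatenate type embedding, output shape, and adjacency information.
    return TYPE_EMBEDDINGS[op_type] + output_shape(out_sizes) + one_hot(neighbor_idx, num_ops)

emb = embed_op("matmul", [128, 64], neighbor_idx=1, num_ops=3)
```

A full implementation would one-hot encode all direct inputs and outputs of each operation rather than a single neighbor, and could append the optional fields mentioned above (depth, name, computational cost).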
The computational environment may include any number of computing devices that can be connected together (e.g., by a wireless network, a wired network, Bluetooth, near field communication, RFID, or another communication technology). For example, the computing devices may include devices in datacenters, personal computing devices, mobile devices, virtual devices, smart devices (e.g., a cloud-based voice service, a smart personal/home assistant, a smart thermostat, a digital media player, a smart appliance, or a smart plug), tablets, cloud computing devices, or any other computing devices with processing capabilities and/or data storage. Any computing device can be the local computing device running the application that needs to execute the computationally-intensive task. For example, a smart device may need to perform voice recognition. The smart device can run the machine learning model to determine placement of the voice recognition operations, or the smart device can have a remote device, such as devices in the cloud, run the machine learning model to determine placement of the voice recognition operations. The voice recognition operations can then be executed on the determined devices in the computational environment and the results can be provided to the smart device for further processing. In some implementations, the system can additionally, optionally, receive, e.g., from the application, a constraint input that identifies which optimization goals should be emphasized during the processing of the graph. For example, the application may specify respective weights for one or more optimization goals, e.g., latency, battery, energy impact, bandwidth, and computational time. The constraints can be in the form of a parameterized vector that assigns a respective weight to each of the optimization goals.
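One way such a parameterized constraint vector might be constructed is sketched below; the goal names and the sum-to-one normalization are assumptions, not requirements of the specification.

```python
# Hypothetical parameterized constraint vector: one weight per optimization goal.
GOALS = ["latency", "battery", "energy", "bandwidth", "compute_time"]

def make_constraint(**weights):
    # Unspecified goals default to zero weight; weights are normalized so the
    # vector sums to 1, expressing relative emphasis among the goals.
    raw = [float(weights.get(g, 0.0)) for g in GOALS]
    total = sum(raw) or 1.0
    return [w / total for w in raw]

# An application that cares most about latency, and somewhat about battery
# and bandwidth, might pass:
constraint = make_constraint(latency=2.0, battery=1.0, bandwidth=1.0)
```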
The system combines the computational graph data, the context information, and optionally the set of optimization constraints to generate a model input for the system (230). Generating the model input includes transforming the computational graph data and context information into an input of the type that the machine learning model is configured to receive. For example, the system can create a feature vector of one or more dimensions in which each attribute of the context information or graph operations data occupies one or more dimensions in the vector. Since the features of the computational graph and the context information may vary, the feature vector also varies in length depending on the features provided. The process for generating input for the model depends on the architecture of the model. In some implementations, the system generates a sequence of the received data with different kinds of data in predetermined positions in the sequence. The system then processes the model input using a machine learning model to generate an output defining placement assignments of the operations of the computational graph to computing devices in the computational environment (240). That is, as described above, the machine learning model has been trained to generate placement assignments for the operations of the computational graph that satisfy one or more optimization goals. In cases where the model input includes weights for the optimization goals, the machine learning model has been trained to generate placement assignments for the operations of the computational graph that satisfy the weights for the one or more optimization goals in the model input. In cases where the model input does not include weights for the optimization goals, the machine learning model has been trained to generate placement assignments for the operations of the computational graph that satisfy pre-determined weights for the one or more optimization goals.
In some implementations, the output can be a sequence, e.g., a sequence of operator→placement instructions. A variety of machine learning models, including convolutional networks and recurrent networks, can be used to generate these outputs. For example, in some implementations, the model can be an autoregressive neural network conditioned on the model input. At each time step, the model receives already-generated assignments as input and generates the next assignment depending on these already-generated assignments, conditioned on the model input. The neural network may be a convolutional network as described by A. van den Oord et al. in “WaveNet: A Generative Model for Raw Audio,” https://arxiv.org/abs/1609.03499. As another example, the neural network may be a neural network as described by A. van den Oord et al. in “Conditional Image Generation with PixelCNN Decoders,” https://arxiv.org/abs/1606.05328. In other implementations, the model can be a recurrent neural network that receives the model input as a sequence. Once the input sequence has been processed, at each time step, the model then predicts an assignment for an operation corresponding to the time step based on the assignment of the operation corresponding to the previous time step. The recurrent neural network may be a recurrent neural network as described by Sepp Hochreiter and Jürgen Schmidhuber in “Long Short-Term Memory,” Neural Computation 9(8): 1735-1780 (1997). The model can be run so that it always predicts over all possible placement devices, regardless of the capabilities of the device on which the model is running. If, for example, there is no GPU resource available on the device, the prediction to use a GPU resource can be ignored and the next highest scoring prediction can be used instead. The context information may be sufficiently rich to allow a large number of different computing environments to be represented. The model output may be relatively more constrained.
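The fallback behavior described above, where a prediction for an unavailable resource is ignored in favor of the next highest scoring prediction, reduces to a masked selection over device scores; this sketch uses hypothetical names.

```python
def select_device(scores, available):
    # scores: device -> model score, predicted over ALL possible placement
    # devices. Predictions for unavailable devices (e.g., a GPU that is
    # absent on this hardware) are skipped and the next highest scoring
    # available device is used instead.
    ranked = sorted(scores, key=scores.get, reverse=True)
    for device in ranked:
        if device in available:
            return device
    raise ValueError("no available device")

# The model's top choice is "gpu", but only a CPU and a remote device exist,
# so the next highest scoring prediction is used.
scores = {"gpu": 0.9, "remote": 0.6, "cpu": 0.4}
choice = select_device(scores, available={"cpu", "remote"})
```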
In a case where a computing environment has more detail than is represented by the model output, the environment may be simplified, for example by disregarding details or sub-sets of the environment, to map the output onto the computing environment. In such a case the system may be used recursively, to map to a complex computing environment. After processing the model input, the system assigns operations of the computational graph to computing devices in the computational environment according to the determined placement assignments (250). FIG. 3 illustrates example assignments of computational graph operations 201a-f. As shown, devices 103d-f are part of a computational environment and are in communication with one another, e.g., by a network. The system runs the machine learning model 132 on one device in the computational environment, e.g., 103d, using the computational graph data 201a-f and context information from the computational environment. The machine learning model may determine, as shown, that to ensure optimal execution of the computational graph operations and satisfy a reward function on which the machine learning model has been trained, operation 201a should be assigned to device 103d, operations 201b and 201c should be assigned to device 103e, and operations 201d, 201e, and 201f should be assigned to device 103f. A placement policy may include a definition of placement assignments for the operations of a computational graph or sub-graph thereof. Re-computing a policy for every time step or every time a feature of the computational environment changes may be prohibitively expensive for the system. Therefore, in some implementations, after determining assignments for a computational graph task, the system recognizes a repeated computational graph task or operation and assigns the repeated operations to the assignments previously determined for the task or operation. For example, the computational graph task may be a task of speech recognition.
The system can determine on which devices to run the operations of the task during one execution of the machine learning model and then repeat the assignments for subsequent speech recognition tasks. The assignments may be valid for a predetermined duration and may be recomputed after the duration is finished. The inferences from the placement model can be reused as long as nothing significantly changes with respect to the context. The system can assign a threshold amount by which each of the context attributes is allowed to change. For example, a network bandwidth change may be acceptable, e.g., not significant, if it is +/−1 Mbps, or a battery level change may be insignificant if it is +/−3%. In some implementations, the determined assignments of computational graph operations can be stored in a cache or data store as a placement policy. The placement policy defines a mapping from the input context information and the specific computational task represented by the graph to computing devices in the computational environment. The system can then use the placement policy for a subsequent computational task if there is a placement policy defined for the computational task. The placement policy may be valid for a specific number of time steps or a predetermined duration. The number of time steps or predetermined duration may be calculated based on the expense, e.g., the cost in terms of time, resources, and energy, associated with evaluating a policy. The placement policy may also be valid as long as a certain number of inputs do not change or the inputs change within a threshold amount as described above. In order to provide optimal placement assignments, the system trains the machine learning model to predict placement assignments based on given input. Referring to FIG. 1, the model training system 160 can train the machine learning model 132 in the cloud, i.e., on one or more computers that are remote from the local device.
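The reuse of cached placements within per-attribute change thresholds described above (the text gives +/−1 Mbps for bandwidth and +/−3% for battery as examples of insignificant changes) might be organized as a small cache; the class and attribute names here are illustrative.

```python
# Sketch of reusing a cached placement policy while the context stays within
# the per-attribute change thresholds described in the text.
THRESHOLDS = {"bandwidth_mbps": 1.0, "battery_pct": 3.0}

def context_unchanged(old, new):
    # True when every tracked attribute changed by no more than its threshold.
    return all(abs(new[k] - old[k]) <= t for k, t in THRESHOLDS.items())

class PlacementCache:
    def __init__(self):
        self._store = {}   # task id -> (context snapshot, assignments)

    def lookup(self, task, context):
        entry = self._store.get(task)
        if entry and context_unchanged(entry[0], context):
            return entry[1]        # reuse the cached placement policy
        return None                # significant change: re-run the model

    def store(self, task, context, assignments):
        self._store[task] = (dict(context), assignments)

cache = PlacementCache()
cache.store("speech", {"bandwidth_mbps": 10.0, "battery_pct": 80.0}, {"op": "local"})
hit = cache.lookup("speech", {"bandwidth_mbps": 10.5, "battery_pct": 78.0})
miss = cache.lookup("speech", {"bandwidth_mbps": 6.0, "battery_pct": 78.0})
```

A validity window (a number of time steps or a duration, as described above) could be added by storing a timestamp alongside each entry and treating expired entries as misses.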
In particular, the model can be trained in simulation or on a real population of devices to predict placements that achieve optimization goals (e.g., an energy vs. speed trade-off). FIG. 4 illustrates an example flow diagram of an example process 400 for training a machine learning model to determine optimal placements of computational graph operations given a specific computational environment. For convenience, the process 400 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification. For example, a model training system, e.g., the model training system 160 of FIG. 1, appropriately programmed, can perform the process 400. To train the machine learning model, the system initializes the values of a set of parameters of the model, e.g., to randomly assigned or pre-determined values. The system may determine current environment conditions (410), including the context information from the computational environment. In simulation, the system may generate the conditions for the simulated environment. The system also identifies computational graph data to assign to devices (415). In some implementations, the system generates weights for one or more optimization goals when the model expects weights as input. The system generates a model input from the current environment conditions and the computational graph data. The environmental conditions and the graph to be processed should ideally be from real usage, e.g., a model running in a camera application under a set of conditions on real devices. These conditions can be logged anonymously. Then the system can take the model/graph and conditions and run simulations to train the operation placement model. The system generates an assignment for the computational graph data by processing the model input using the model to predict assignments in accordance with the current values of the model parameters (425).
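The training iteration just outlined (determine conditions, build a model input, predict an assignment, then score and update) could be organized as a loop like this sketch; every name is hypothetical, and the parameter update itself is elided to a callback.

```python
def training_step(sample_conditions, sample_graph, predict, execute, update):
    # One illustrative iteration: conditions (410) and graph data (415) are
    # combined into a model input, the model predicts an assignment (425),
    # real or simulated execution yields a reward, and the parameters are
    # updated via the supplied callback.
    conditions = sample_conditions()
    graph = sample_graph()
    model_input = {"context": conditions, "graph": graph}
    assignment = predict(model_input)
    reward = execute(assignment)
    update(reward)
    return assignment, reward

# Toy stand-ins so the loop is runnable end to end.
log = []
assignment, reward = training_step(
    sample_conditions=lambda: {"bandwidth_mbps": 8.0},
    sample_graph=lambda: ["op_a"],
    predict=lambda mi: {"op_a": "local"},
    execute=lambda a: 1.0,
    update=log.append,
)
```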
The system then determines a reward based on the results of the real or simulated execution according to the assignment (430). The reward reflects how well the assignments satisfy the constraints of an optimization goal or set of optimization goals. In particular, the reward function includes a respective term for each of the optimization goals. Each optimization goal is associated with a measurable metric (e.g., time spent for execution, amount of data transmitted, and battery usage for the local device) and the term corresponding to the optimization goal in the reward function is a function of the measurable metric. Thus, the system measures the metrics associated with each optimization goal and then computes the reward. More specifically, the reward may be a weighted sum of, for each goal, a function of the measured metric for the goal. When the model is not configured to receive the weights as an input, the weights are fixed or, in some cases, annealed during the training. In cases where the model is configured to receive weights as input, the weights from the input are used when determining the reward. The system then updates the current values of the model parameters based on the reward using a reinforcement learning algorithm (440). That is, the system uses the reinforcement learning algorithm to update the current values of the model parameters so that the model generates placements that result in an increased reward being generated. For example, the reinforcement learning algorithm can be a conventional actor-critic algorithm, such as one described by Sutton, R. and Barto, A. in “Reinforcement Learning: An Introduction” (MIT Press, 1998). In some implementations, the algorithm may be one as disclosed by Lillicrap et al. in “Continuous Control with Deep Reinforcement Learning,” https://arxiv.org/abs/1509.02971. In other implementations, the algorithm may be the algorithm disclosed by Mnih et al.
in “Human-Level Control Through Deep Reinforcement Learning,” https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf. In still other implementations, the algorithm may be the algorithm disclosed by Foerster et al. in “Counterfactual Multi-Agent Policy Gradients,” https://arxiv.org/abs/1705.08926. The algorithm may also be the algorithm disclosed by Mnih et al. in “Asynchronous Methods for Deep Reinforcement Learning,” https://arxiv.org/abs/1602.01783. In some implementations, in order to ensure that the space of possible assignments is sufficiently explored during the training of the model, the system incorporates an exploration policy into the training that ensures that assignments other than those that the model currently predicts would be the best assignment can be selected. For example, in certain iterations of the training process 400, the system may randomly select an assignment rather than selecting the assignment generated by the model. As another example, the system may include a term in the reward function that increases the reward when a new or rarely seen assignment is selected. The system repeats the training process 400 many times for different environment conditions and computational graphs to train the model to effectively account for numerous computational graph tasks being executed in a variety of computational environments. In some implementations, the system trains multiple different models having different architectures and then selects the best-performing model as the final model. The trained model can then predict placement assignments for any computational graph task in any given computational environment. For example, the model can determine the optimal placement of a computationally-intensive task from a computer game running on a user's mobile device given low battery power of the user's mobile device.
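Two training components described above, the reward computed as a weighted sum of per-goal metric functions and an exploration policy that occasionally selects a random assignment, can be sketched together; the goal names, metric signs, and epsilon value are assumptions, not part of the specification.

```python
import random

def compute_reward(metrics, weights):
    # Weighted sum over goals of a function of each measured metric; here
    # lower time/data/battery usage is better, so each term is negated.
    return sum(weights[g] * -metrics[g] for g in weights)

def choose_assignment(model_assignment, all_devices, epsilon, rng):
    # Exploration policy: with probability epsilon, pick a random device
    # instead of the model's current best prediction, so assignments other
    # than the predicted best can be selected during training.
    if rng.random() < epsilon:
        return rng.choice(all_devices)
    return model_assignment

reward = compute_reward(
    metrics={"exec_time_s": 2.0, "data_sent_mb": 10.0, "battery_used_pct": 1.0},
    weights={"exec_time_s": 1.0, "data_sent_mb": 0.1, "battery_used_pct": 0.5},
)
rng = random.Random(0)
picks = [choose_assignment("local", ["local", "remote"], epsilon=0.0, rng=rng)
         for _ in range(5)]
```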
The model can take in context information of the computational environment such as connectivity of the mobile device to the Internet or mobile network (e.g., 4G, 5G, or LTE) and availability and capabilities of remote devices to perform the computationally-intensive task. The model may weigh having to send data from the user's device to a remote device against the battery savings of performing the task remotely. The model can then predict the best operational assignments based on battery savings and overall processing time. Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. 
The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A computer program, which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network. The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers. 
Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. 
Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser. Also, a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone, running a messaging application, and receiving responsive messages from the user in return. Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. 
In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client. Data generated at the user device, e.g., a result of the user interaction, can be received at the server from the device. In addition to the embodiments of the attached claims and the embodiments described above, the following numbered embodiments are also innovative: Embodiment 1 is a method comprising: obtaining data characterizing a computational graph comprising a plurality of nodes representing operations and directed edges representing data dependencies; receiving context information for a computational environment in which to perform the operations of the computational graph, the context information including data representing a network connecting a plurality of computing devices in the computational environment; generating a model input comprising at least the context information and the data characterizing the computational graph; processing the model input using a machine learning model to generate an output defining placement assignments of the operations of the computational graph to the plurality of computing devices; and assigning operations of the computational graph to the plurality of computing devices according to the defined placement assignments. Embodiment 2 is the method of embodiment 1, wherein the machine learning model has been trained to generate placement assignments for the operations of the computational graph that satisfy at least one pre-determined weight for one or more optimization goals. 
Embodiment 3 is the method of any one of embodiments 1 or 2, further comprising prior to processing the model input using the machine learning model: receiving a constraint that identifies at least one optimization goal for graph processing; and generating the model input using the constraint in addition to the context information and the data characterizing the computational graph. Embodiment 4 is the method of embodiment 3, wherein the constraint is in the form of a vector that assigns a respective weight to one or more optimization goals. Embodiment 5 is the method of any one of embodiments 2 through 4, wherein the one or more optimization goals includes one or more of: latency, battery, energy impact, bandwidth, and computational time. Embodiment 6 is the method of any one of embodiments 1 through 5, wherein the context information further comprises information defining at least one computational capability of the plurality of computing devices in the computational environment including available battery life, available processing capability, available storage capacity, available memory, or network speed. Embodiment 7 is the method of any of embodiments 1 through 6, wherein the data representing a network connecting the plurality of computing devices includes data representing one or more of: measured or expected latency of the network, network speed, and available computing devices on the network. Embodiment 8 is the method of any of embodiments 1 through 7, wherein the computational graph comprises a plurality of repeated operations and further comprising: after determining a placement assignment for one of the repeated operations, assigning subsequent repeated operations to a same placement assignment for a predetermined number of computational time steps. Embodiment 9 is the method of embodiment 8, further comprising after the predetermined number of computational time steps, reevaluating the placement assignment of the repeated operations.
Embodiment 10 is the method of any one of embodiments 1 through 9, wherein the computational graph or a sub-graph thereof represents a particular task and further comprising: after determining placement assignments for the operations of the computational graph or sub-graph thereof, creating a policy that defines placement assignments of the operations of the particular task from the determination of placement assignments for the operations; receiving data characterizing a second computational graph or sub-graph representing the same particular task as the computational graph comprising a plurality of nodes representing operations and directed edges representing data dependencies or the sub-graph thereof; and determining placement assignments of the operations of the second computational graph or sub-graph from the created policy. Embodiment 11 is the method of embodiment 10, further comprising: reevaluating the created policy after a predetermined number of computational time steps. Embodiment 12 is the method of embodiment 11, wherein the predetermined number of computational time steps is determined based on a cost associated with re-computing the policy. Embodiment 13 is a system comprising: one or more computers; and one or more storage devices storing instructions that are operable, when executed on one or more computers, to cause the one or more computers to perform any one of embodiments 1 through 12. Embodiment 14 is one or more non-transitory computer-readable storage mediums comprising instructions stored thereon that are executable by a processing device and upon such execution cause the processing device to perform any one of claims 1 through 12. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions.
Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
11861475

DETAILED DESCRIPTION

Described here are systems and methods for generating and implementing a hybrid machine learning model and mechanistic model to produce biological feature maps (e.g., cell density maps) or other measurements, predictions, or estimates of biological features (e.g., cell density) based on an input of data that are spatially varying, temporally varying, or both. As one example, the input data may include multiparametric magnetic resonance images.

In one configuration of the present disclosure, the hybrid model includes a combination of a machine learning (ML) model and a proliferation-invasion (PI) mechanistic model that takes as input multiparametric MRI data and the mechanistic dynamics of tumor cell proliferation and invasion to generate tumor cell density predictions under a graph-based semi-supervised learning (SSL) framework. The hybrid machine learning and mechanistic models described in the present disclosure are capable of learning patient-specific relationships between imaging features and cell density, and have greater prediction accuracy than machine learning or proliferation-invasion alone, especially when applied to a GBM patient cohort. Additionally, the hybrid machine learning and mechanistic models described in the present disclosure provide a more balanced prediction in T2-weighted (T2W) regions-of-interest (ROIs) when compared to proliferation-invasion alone. For instance, PI alone can underestimate cell density, indicating that the hybrid machine learning and mechanistic model is more capable of capturing high density regions in the brain around tumor (BAT). Contributions of each individual feature can be determined using a Relief algorithm that is configured specifically for the hybrid machine learning and mechanistic models described in the present disclosure.
It was found in example studies that PI contributed significantly to the prediction, followed by all or a subset of the MRI sequences T1+C (e.g., T1-weighted imaging with a contrast agent), fractional anisotropy (FA), T2 (e.g., T2-weighted imaging), and relative cerebral blood volume (rCBV). This highlighted the importance of incorporating mechanistic models to help improve prediction of the biological output (e.g., tumor cell density).

Machine learning models can be trained to link localized imaging features of multiparametric MRI, or other imaging, at each biopsy location with pathologist-quantified tumor cell density. This results in a predictive tumor cell density machine learning model map that can be applied over the entire tumor. Because machine learning models are trained on the data provided by image-localized biopsies from different regions of previous patients with tumors, which may be scant, they are vulnerable to any biases or imbalance in the data feeding the model. Based on the breadth and depth of these training data, the resultant trained machine learning model can be used to predict the cell density of any location, including locations that are not biopsied.

Mechanistic models are built on a first-principles understanding of biological and/or physiological processes that constrains interpretation as to how multiparametric MRI, or other imaging, might provide insights into these biological or physiological processes (e.g., tumor cell density across the brain). One mechanistic model is the Proliferation-Invasion model mentioned above. The PI model is based on the principle that tumors are proliferative and invasive, and thus simulations of the PI model are based on patient-specific estimates of the tumor cell net proliferation and invasion rates. These proliferation and invasion rates can be estimated for each patient using contrast-enhanced T1-weighted and T2-weighted MRIs, or other imaging features.
Based on the premise underlying the PI model, the PI model can produce a tumor cell density map for anywhere in a patient's brain given outlines of imaging abnormalities on pretreatment images along with gray/white matter segmentation of the patient's brain. The PI model aims to capture the most basic understanding of what cancer is: cells that grow uncontrollably and invade surrounding tissue. The invasion term is particularly relevant for glioblastomas, which are known to be diffusely invasive with the potential to migrate long distances in the human brain. Mathematically, the PI model can be written as follows:

$$\underbrace{\frac{\partial c}{\partial t}}_{\text{rate of change of cell density}} = \underbrace{\nabla \cdot \left( D(x)\,\nabla c \right)}_{\text{invasion of cells into nearby tissues}} + \underbrace{\rho\, c \left( 1 - \frac{c}{K} \right)}_{\text{proliferation of cells}}; \quad (1)$$

where c is the tumor cell density; D(x) is the net rate of diffusion, which is taken to be piecewise constant with different values in gray and white matter; ρ is the net rate of proliferation; and K is the cell carrying capacity. This model may be used to predict prognosis, radiation sensitivity, benefit from resection, and mutation status, such as IDH1 mutation status in the case of glioblastoma (GBM). Additionally, this model may be used to create untreated virtual controls for use in defining response metrics that are more prognostically significant. The PI model can also be used to model other biological systems, such as diseases like Alzheimer's Disease (AD), in which c may indicate the density of a toxic protein and D(x) may indicate the diffusion of that protein, with the reaction term capturing its conversion from a normal to a toxic form. It will be appreciated that mechanistic models other than the proliferation-invasion model described above can also be used to model other biological feature data. As noted, one example biological feature that can be mapped or otherwise measured using the systems and methods described in the present disclosure is cell density.
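The dynamics of Eqn. (1) can be illustrated with a short numerical sketch. The following is a minimal one-dimensional forward-Euler discretization of the PI model; it is an illustrative simplification, and the grid, rates, and no-flux boundary handling below are assumptions rather than choices prescribed by the disclosure.

```python
import numpy as np

def simulate_pi_1d(c0, D, rho, K, dx, dt, n_steps):
    """Forward-Euler sketch of the 1-D PI model of Eqn. (1):
    dc/dt = d/dx(D(x) dc/dx) + rho*c*(1 - c/K),
    with no-flux boundaries. D holds a per-interface diffusion rate
    (piecewise constant, e.g. different in gray vs. white matter)."""
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(n_steps):
        flux = D * np.diff(c) / dx          # flux at the N-1 cell interfaces
        div = np.empty_like(c)              # divergence of the flux
        div[0] = flux[0] / dx
        div[1:-1] = np.diff(flux) / dx
        div[-1] = -flux[-1] / dx
        c += dt * (div + rho * c * (1.0 - c / K))
    return c
```

With dt < dx²/(2·max D) the explicit scheme is stable; the density stays bounded by the carrying capacity K while the simulated tumor both proliferates and invades outward from its seed location.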
More generally, the biological features can include data that can be mapped, measured, predicted or otherwise estimated using a mechanistic model of one or more biological or physiological processes. These biological feature data may include properties, characteristics, or other features of cells, proteins, macromolecules, biomolecules, or other chemical compounds. The type of biological feature data that are mapped, measured, predicted, or otherwise estimated may therefore be determined based in part on the mechanistic model used to augment the machine learning model. As one non-limiting example, a mechanistic model such as a Proliferation-Invasion-Hypoxia-Necrosis-Angiogenesis (PIHNA) model can be used. In these instances, the biological feature data may include data indicative of hypoxic cell density. Other mechanistic models can include mathematical models of biochemical reactions, including those that involve metabolism, signal transduction, gene expression, or combinations thereof. As noted, mechanistic models can also include mathematical models of other biological or physiological processes or systems, including mechanistic models related to disease systems, epidemiology, tumor grading, and so on. Other biological feature data may, therefore, include other histological properties or characteristics (e.g., cell shape, cell size, cell area, genomic characterization, molecular status). Examples of other mechanistic models may include models of complex disease systems that are modeled in terms of spatial heterogeneity of molecular characteristics and temporal dynamics, which can help elucidate the biological underpinning of disease formation, progression, and treatment resistance. Such models can find applicability in neurological diseases and cancer, and more generally in biology, ecology, and epidemiology. 
As one non-limiting example, the mechanistic model may be a mechanistic model of the gut microbiome, which can be characterized by spatial heterogeneity and temporal dynamics. Global imaging of the gut combined with localized biopsies can be used to give insight into imaging features driven by particular bacteria colonies. By understanding how specific individual clonal populations interact and grow, the creation of spatial-temporal biological feature maps using the systems and methods described in the present disclosure may allow for better prediction of how large-scale shifts in sub-populations may alter coexistence or lead to a dominant clone. Machine learning is a data-driven approach, which has the strength of utilizing the available data, but a model built on a particular dataset may not generalize well to other datasets. For instance, machine learning models for tumor cell density can make predictions that are counter to biological intuition and experience including suggesting unrealistic fluctuations in cell density over small distances or predicting unlikely significant regions of high tumor cell density distant from the imageable component of the tumor. The PI model has generalizability because it is a mechanistic model based on the underlying principles of tumor growth, but assumes cell density monotonically decreases from around the center of the tumor mass (i.e., enhancing core on a T1+C image) to the surrounding non-enhancing parenchyma (i.e., the so-called brain around the tumor (BAT)), not allowing significant local fluctuations. While it is generally true that higher cell densities are in the center of the imaging abnormality and the lower cell densities are on the outskirts, the monotonic nature limits the high resolution accuracy of the PI model estimates in BAT. In one configuration of the present disclosure, a hybrid machine learning and mechanistic model, called ML-PI, is disclosed. 
This hybrid model integrates machine learning and PI models to increase accuracy in predicting intratumoral cell density distribution for a given patient. In some implementations, the hybrid machine learning and mechanistic model adopts a semi-supervised learning (SSL) framework, which utilizes both biopsy samples (called labeled data) and biopsy-free sub-regions of the tumor (called unlabeled data). An SSL framework may be used in applications in which labeled data are scarce but unlabeled data are readily available and in a large quantity. In general, available biopsy samples are limited for each patient and there are abundant sub-regions of the tumor that are not biopsied, but with image features readily available. There are many types of SSL algorithms, including generative, self-training, co-training, low-density separation, and graph-based models. In one configuration of the present disclosure, a graph-based SSL method is used to integrate PI with ML. Graph-based SSL has relatively high accuracy and efficiency. The basic idea is to construct a graph with vertices being labeled and unlabeled samples in a training set and edges weighted by vertex proximity in the feature space. There are two types of graph-based SSL: transductive and inductive learning models. The former aims to formulate a method to propagate label information from labeled samples to unlabeled samples in a specific dataset. In this way, the unlabeled samples in the dataset are classified/predicted. The latter aims to train a model using labeled and unlabeled samples, which is not only used to predict the unlabeled samples in training but also new samples. Under a graph-based SSL framework, the hybrid machine learning and mechanistic models described in the present disclosure can incorporate biological feature data estimated by a mechanistic model (e.g., cell density data estimated with a PI model) to regularize a multiparametric MRI-based SSL model. 
The hybrid machine learning and mechanistic model is then able to learn patient-specific predictive relationships between imaging features and cell density in a manner superior to either modeling method alone. The resultant machine learning and mechanistic model improves the ability to capture substantial intra-patient and inter-patient heterogeneity.

As mentioned above, in some configurations of the present disclosure, a Relief-ML-PI algorithm can be implemented to quantify the contribution from each feature (e.g., each MRI sequence and PI) to the final cell density prediction. This algorithm can be used to examine feature contributions of the model post-training, as opposed to being used for feature selection before model training. Finding the respective contributions of the features to the prediction of tumor cell density aids knowledge discovery about GBM. Also, knowing the contribution from PI relative to the imaging features reveals the importance of incorporating mechanistic models into data-driven machine learning.

Imbalance of labeled samples has been shown to significantly bias SSL models in general. In one configuration, the data are naturally imbalanced, with more samples concentrated toward the high end of the cell density spectrum than the low end, due to the ease of acquiring high-density samples in surgical biopsy. A data augmentation strategy may identify proper intratumoral regions from which to take "virtual" biopsy samples guided by PI. The augmented dataset contains balanced samples with density ranging from high to low, thus warranting good performance of the hybrid machine learning and mechanistic models described in the present disclosure.

As mentioned above, in some configurations, the hybrid ML-PI models described in the present disclosure can incorporate biological feature data estimated with a mechanistic model (e.g., PI-estimated regional cell density) into a graph-based SSL model.
The SSL framework is an extension of a classical supervised learning (SL) model, which may take the following form:

$$f^* = \arg\min_{f \in \mathcal{H}_K} \frac{1}{L} \sum_{l=1}^{L} \left( y_l - f(z_l) \right)^2 + \gamma_A \| f \|_K^2; \quad (2)$$

where L is the number of biopsy samples in a training dataset; y_l is the pathologically measured tumor cell density for the l-th sample; z_l contains features computed from a localized region of multiparametric MRI corresponding to the biopsy location; f(z_l) is a predictive function for cell density; (y_l − f(z_l))² is a loss function that measures the discrepancy between the pathological and predicted density of each biopsy sample; f is a function on the reproducing kernel Hilbert space (RKHS), H_K, with a Mercer kernel, K; ‖f‖_K² is a norm on H_K, which encourages stability and generalizability of the solution; and γ_A is a tuning parameter. In some configurations, the localized region may have a size of 8×8 voxels, which is roughly the size of biopsy samples in the image space.

Eqn. (2) is a supervised learning model because it uses the biopsy samples as labeled data. To incorporate unlabeled data and PI-estimated density into the model, a graph may be built on SSL with all labeled and unlabeled samples. For instance, one graph, G = (V, W), may be built for each patient, where V is the set of vertices and W contains the weight of the edge between each pair of vertices. Let n = L + U be the number of vertices of the graph, where L is the number of all biopsy samples and U is the number of voxels on a pre-segmented tumoral ROI for the target patient (e.g., where the localized region for each voxel has a size of 8×8 voxels). The edge weight between vertices v_i and v_j, for i, j = 1, . . . , n, can be computed using a product of two Gaussian functions as

$$w_{ij} = w_{ij,z} \times w_{ij,PI} = \exp\left( -\frac{\| z_i - z_j \|^2}{2 \psi_z^2} \right) \times \exp\left( -\frac{(PI_i - PI_j)^2}{2 \psi_{PI}^2} \right); \quad (3)$$

where PI_i is the PI-estimated cell density averaged over all the voxels in the localized region, and ψ_z and ψ_PI are parameters to adjust the contributions to the weight from image features and PI, respectively.
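As a concrete illustration of Eqn. (3), the edge weight between two samples can be computed as a product of the two Gaussian factors. This is a sketch; the feature vectors, density values, and bandwidths used in the example are arbitrary illustrative choices.

```python
import numpy as np

def edge_weight(z_i, z_j, pi_i, pi_j, psi_z, psi_pi):
    """Eqn. (3): w_ij = w_ij,z * w_ij,PI, a product of a Gaussian kernel
    on the localized image-feature vectors z and a Gaussian kernel on
    the PI-estimated cell densities averaged over each region."""
    w_z = np.exp(-np.sum((np.asarray(z_i) - np.asarray(z_j)) ** 2)
                 / (2.0 * psi_z ** 2))
    w_pi = np.exp(-(pi_i - pi_j) ** 2 / (2.0 * psi_pi ** 2))
    return w_z * w_pi
```

Identical samples receive weight 1, and the weight decays as the samples separate in either the image-feature space or in PI-estimated density; ψ_z and ψ_PI trade off the two contributions.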
In other instances, the PI-estimated cell density in Eqn. (3) can be replaced with other biological feature data values depending on the mechanistic model used and the biological feature data to be estimated. In essence, w_ij reflects the closeness between two samples/vertices in terms of their respective image features, w_ij,z, and PI estimations, w_ij,PI.

In addition to tuning the values of ψ_z and ψ_PI, graph sparsification may be used to improve the prediction accuracy and computational efficiency of the hybrid machine learning and mechanistic model. Sparsification of a graph, G(V, W), may include two steps. First, an edge between vertices v_i and v_j is kept if the edge weight is greater than a selected value, such as w_ij > ε. The edge between these vertices is otherwise removed. The remaining edges are then reweighted using Eqn. (3). Sufficient connectedness of the labeled biopsy instances with the unlabeled instances in the graph may ensure proper label diffusion (i.e., having non-zero w_ij values, where, without loss of generality, i is a labeled instance and j is an unlabeled instance). It is contemplated that choosing a value of ε such that 5-15% of the labeled instances are connected may produce high accuracy results.

The resultant sparsified graph, G_s = (V, W_s), can then be encoded into a Laplacian matrix, which may be defined as Ω = D − W, where D is the vertex degree matrix, which can be a diagonal matrix with diagonal elements being the total sum of edge weights associated with each vertex, and W is the matrix of all the edge weights. Then, the model in Eqn. (2) can be augmented by incorporating the graph Laplacian matrix, which gives the proposed hybrid machine learning and mechanistic model as

$$f^* = \arg\min_{f \in \mathcal{H}_K} \frac{1}{L} \sum_{l=1}^{L} \left( y_l - f(x_l) \right)^2 + \gamma_A \| f \|_K^2 + \frac{\gamma_I}{\sum_{i,j} w_{ij}} \mathbf{f}^T \Omega \mathbf{f}; \quad (4)$$

where x_l = (z_l, PI_l); f contains the predictive density (or other biological feature data) for each labeled and unlabeled sample, i.e., f = (f(x_1), . . . , f(x_L), f(x_{L+1}), . . . , f(x_{L+U}))^T; Σ_{i,j} w_ij is the sum of all the edge weights in the graph; and γ_I is another tuning parameter. Because of patient heterogeneity, the graph of each patient may have a wide range of sparsity levels, which may cause difficulty in choosing a common search range for the tuning parameter, γ_I. Dividing by the sum, Σ_{i,j} w_ij, addresses this problem by normalizing each patient-specific graph to allow γ_I to be tuned within a common range. Through some algebra, the last term in Eqn. (4) can be written as

$$\mathbf{f}^T \Omega \mathbf{f} = \sum_{i,j=1}^{L+U} \left( f(x_i) - f(x_j) \right)^2 w_{ij,z} \times w_{ij,PI}. \quad (5)$$

With this change, it can be seen that the minimization in Eqn. (4) pushes samples that are closer in image features (i.e., with a larger w_ij,z) and in PI estimations (i.e., with a larger w_ij,PI) to have more similar predictions. This is traded off with the loss on the labeled data (i.e., the first term in Eqn. (4)) and the smoothness of the predictive function in the RKHS (i.e., the second term in Eqn. (4)). In the extreme case when w_ij,z = w_ij,PI = 0 for all the edges, Eqn. (4) becomes the supervised learning model in Eqn. (2). In essence, the role of PI in the proposed model is to regularize the learning of the predictive function in order to make the spatial proximity of predicted densities conform with that of PI densities to some extent. This implicitly takes into account the bio-mechanism of tumor growth, which is the foundation of the PI model.

The Representer Theorem can be used to show that an analytical solution for Eqn. (4) exists in H_K. The solution of the optimization in Eqn. (4) can be written as the following expansion in terms of both labeled and unlabeled samples:

$$f^*(x) = \sum_{i=1}^{L+U} \alpha_i K(x_i, x); \quad (6)$$

where x is any sample for which the cell density is to be predicted, which can be an unlabeled sample included in the hybrid machine learning and mechanistic model in Eqn. (4) or not (e.g., a sample outside the ROI or on a different slice of the tumor), and α_i are coefficients. With the form of the solution to Eqn.
(4), the coefficients, α_i, need to be estimated. To achieve this, Eqn. (6) can be inserted into Eqn. (4) in order to obtain the following convex, differentiable objective function of α = [α_1 . . . α_{L+U}]^T:

$$\alpha^* = \arg\min_{\alpha} \frac{1}{L} (y - JK\alpha)^T (y - JK\alpha) + \gamma_A \alpha^T K \alpha + \frac{\gamma_I}{\sum_{i,j} w_{ij}} \alpha^T K \Omega K \alpha; \quad (7)$$

where J is an (L+U)×(L+U) diagonal matrix in which the first L diagonal entries are 1s and the rest are 0s; K is an (L+U)×(L+U) Gram matrix over labeled and unlabeled samples; and y is an (L+U)×1 vector defined by y = [y_1, . . . , y_L, 0, . . . , 0]^T. Furthermore, taking the derivative with respect to α, the following expression is obtained:

$$\frac{1}{L} (y - JK\alpha)^T (-JK) + \left( \gamma_A K + \frac{\gamma_I}{\sum_{i,j} w_{ij}} K \Omega K \right) \alpha = 0. \quad (8)$$

Solving for α yields the solution

$$\alpha^* = \left( JK + \gamma_A L I + \frac{\gamma_I L}{\sum_{i,j} w_{ij}} \Omega K \right)^{-1} y; \quad (9)$$

where I is an (L+U)×(L+U) identity matrix. Inserting the coefficients, α_i, obtained above into Eqn. (6), the predictive function, f*(x), is obtained. This predictive function can be used to generate a predicted cell density for every voxel within the ROI and thus can be used to form an intratumoral cell density map. The tuning parameters of Eqn. (4), namely γ_A, γ_I, and η (the width of the radial basis function kernel, K(x_i, x_j) = exp(−‖x_i − x_j‖²/(2η²))), can then be adjusted to find the value of f*(x) that maximizes the accuracy of the hybrid machine learning and mechanistic model.

FIG. 1 illustrates an example flowchart of a hybrid machine learning and mechanistic model (e.g., an ML-PI model). First, the machine learning and mechanistic models take as input image-localized biopsies and multiparametric MRI to make predictions of tumor cell density. In FIG. 1, the cell density maps show predictions of low density (blue) to high density (red). The hybrid machine learning and mechanistic model then encodes similarities between voxel intensities of the mechanistic model into a Laplacian matrix, Ω, which is used to help regularize prediction in the training of the machine learning model. In one configuration, a transfer learning approach can also be implemented.
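The closed-form solve of Eqn. (9) followed by the expansion of Eqn. (6) can be sketched in a few lines of linear algebra. This is a minimal illustration: the toy features, labels, edge weights, and tuning-parameter values in the example are assumptions, and no sparsification or parameter tuning is performed.

```python
import numpy as np

def fit_ml_pi(X, y_labeled, W_edges, gamma_A, gamma_I, eta):
    """Solve Eqn. (9) for alpha and return predictions f = K alpha
    (Eqn. (6)) at all L+U training samples. X stacks the L labeled
    rows first; W_edges is the (L+U)x(L+U) edge-weight matrix."""
    n, L = X.shape[0], y_labeled.shape[0]
    # RBF Gram matrix K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 eta^2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * eta ** 2))
    Omega = np.diag(W_edges.sum(axis=1)) - W_edges     # graph Laplacian
    J = np.diag(np.r_[np.ones(L), np.zeros(n - L)])
    y = np.r_[y_labeled, np.zeros(n - L)]
    A = J @ K + gamma_A * L * np.eye(n) \
        + (gamma_I * L / W_edges.sum()) * Omega @ K
    alpha = np.linalg.solve(A, y)                      # Eqn. (9)
    return K @ alpha                                   # Eqn. (6) at training x
```

In the usage below, an unlabeled sample lying near the high-density biopsy inherits a high prediction and one lying near the low-density biopsy inherits a low one, illustrating the label diffusion that the graph term enforces.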
Transfer learning (TL) blends the two approaches mentioned above by first building a Bayesian framework from predominant trends in group analysis and then transferring these prior distributions, through a process called domain selection, to construct individual models for each patient that are tuned according to that patient's specific MRI and histologic data. A transfer learning routine may be selected from various forms, such as:

$$\hat{W}_I = \arg\min_{W} \left\{ \sum_{k=1}^{K} \| y_k - X_k w_k \|_2^2 + \lambda_1 \| W \|_1 + \lambda_2 \left( Q \log |\Omega| + \operatorname{tr}\left( L W \Omega^{-1} W^T \right) \right) \right\}; \quad (10)$$

where W = (w_1, . . . , w_K), and Ŵ is a Bayesian MAP estimate for W; y_k and X_k denote the data for the response and predictors of the k-th domain, k = 1, . . . , K; ‖·‖₂² and ‖·‖₁ denote the squared L2-norm and the L1-norm, respectively; λ₁ = 2σ²/b and λ₂ = σ²; λ₁ ≥ 0 and λ₂ ≥ 0 serve as regularization parameters to control the sparsity of Ŵ and the amount of prior knowledge used for estimating W; and the hyper-parameter Ω is a matrix of potentially high dimensionality that encodes the prior knowledge about the correlation structure between domains. For each domain, k, there is a model that links X to Y through the coefficients w_k. Another structure for a transfer learning routine may include:

$$\hat{W}_{II} = \arg\min_{w_K} \left\{ \| y_K - X_K w_K \|_2^2 + \lambda_1 \| w_K \|_1 + \lambda_2 \left( Q \log\left( \varsigma_K - \bar{\omega}_K^T \tilde{\Omega}^{-1} \bar{\omega}_K \right) + \frac{ (w_K - \mu_K)^T L (w_K - \mu_K) }{ \varsigma_K - \bar{\omega}_K^T \tilde{\Omega}^{-1} \bar{\omega}_K } \right) \right\}, \quad \text{where } \Omega = \begin{pmatrix} \tilde{\Omega} & \bar{\omega}_K \\ \bar{\omega}_K^T & \varsigma_K \end{pmatrix}. \quad (11)$$

Still another structure for a transfer learning routine may include:

$$(\hat{W}_{III}, \hat{\Omega}_{III}) = \arg\min_{W, \Omega} \left\{ \sum_{k=1}^{K} \| y_k - X_k w_k \|_2^2 + \lambda_1 \| W \|_1 + \lambda_2 \left( Q \log |\Omega| + \operatorname{tr}\left( L W \Omega^{-1} W^T \right) \right) \right\}, \quad \text{where } \hat{\Omega} = \frac{W^T L W}{Q}. \quad (12)$$

In one non-limiting example, one model is built for each patient to account for potential patient differences while coupling the estimation processes of the patient-specific models to allow for knowledge transfer between the models. Specifically, suppose there are N patients in the training dataset. A linear model may be established between imaging features and cell density for patient k, such as

$$y_k = X_k w_k + \varepsilon_k \quad \text{for } k = 1, \ldots, N; \quad (13)$$

where y_k are the cell density measurements for n_k biopsy samples, X_k are the MRI features for the biopsy samples, w_k are the model coefficients yet to be estimated, and ε_k are random errors following a Gaussian distribution. The original cell density measurement, which is between 0 and 1, may be transformed using a suitable function, such as a sigmoid function. Furthermore, to couple the models from different patients, a Bayesian framework may be adopted. It can also be assumed that the patient-specific model coefficients, W = (w_1, . . . , w_N), share the same prior distribution, i.e.,

$$p(W \mid \Omega, \Phi, b) \propto \prod_{k=1}^{N} \operatorname{Laplace}(w_k; b) \times MN(W; 0, \Omega, I); \quad (14)$$

where Laplace(w_k; b) is a Laplace distribution to facilitate sparsity in model estimation (i.e., to produce a parsimonious model for better interpretability), and MN(W; 0, Ω, I) is a zero-mean matrix-variate normal distribution. Specifically, the covariance matrix, Ω, encodes the correlation between different patients. Furthermore, given the prior distribution in Eqn. (14) and the likelihood based on the training data, p(y_k | X_k, w_k) ~ N(y_k; X_k w_k, σ²I), the posterior distribution of W can be obtained as

$$p\left( W \mid \{ y_k, X_k \}_{k=1}^{N}, \Omega, \Phi, b \right) \propto p(W \mid \Omega, \Phi, b) \prod_{k=1}^{N} p(y_k \mid X_k, w_k). \quad (15)$$

Then, the maximum a posteriori (MAP) estimator for W can be obtained by solving the following optimization problem:

$$\hat{W} = \arg\min_{W, \Omega} \left\{ \sum_{k=1}^{N} \| y_k - X_k w_k \|_2^2 + \lambda_1 \| W \|_1 + \lambda_2 \left( Q \log |\Omega| + \operatorname{tr}\left( W \Omega^{-1} W^T \right) \right) \right\}; \quad (16)$$

where ‖·‖₂² and ‖·‖₁ denote the squared L2-norm and the L1-norm, respectively, and λ₁ ≥ 0 and λ₂ ≥ 0 are two regularization parameters that control the sparsity and the amount of knowledge transferred between the models of different patients, respectively. The parameters λ₁ and λ₂ can be selected to maximize the leave-one-out cross-validation (LOOCV) accuracy. LOOCV may be used to reduce overfitting. Alternatively, other approaches for reducing overfitting can be used, such as dropout or other regularizations. Eqn.
(16) is a transfer learning model in the sense that it allows a joint estimation of the patient-specific model coefficients, w_k for k = 1, . . . , N. An advantage of the transfer model in Eqn. (16) is that it does not require a pre-specification of the correlation between patients, Ω, but can estimate it in a data-driven manner. To solve the optimization problem in Eqn. (16) (i.e., to estimate W and Ω), an efficient alternating algorithm that estimates W and Ω can be implemented. That is, given Ω, the optimization problem with respect to W is convex and may be solved using the accelerated gradient algorithm. Given W, Ω can be solved analytically.

Referring now to FIG. 2, a flowchart is illustrated as setting forth a non-limiting example of generating and implementing a hybrid machine learning and mechanistic model to produce biological feature maps (e.g., cell density maps), or otherwise measure or predict one or more biological features (e.g., cell density), based on input multiparametric magnetic resonance images. Three types of input may be used to train a hybrid machine learning and mechanistic model, including image-localized biopsies acquired at step 210, multiparametric MRI acquired at step 220, and a biological feature map (e.g., a cell density map) generated at step 250. As one example, the biological feature map can be a cell density map, such as a cell density map generated by a PI model generated or otherwise provided at step 240. In general, the PI model simulates tumor cell proliferation and invasion using partial differential equations. As noted above, other mechanistic models can also be used to model other biological and/or physiological processes and to generate other biological feature data.
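The alternating scheme for Eqn. (16) can be sketched as follows. This is an illustrative simplification under stated assumptions: plain subgradient descent stands in for the accelerated gradient method, the analytic Ω update is taken as W^T W/Q by analogy with Eqn. (12) (with a small ridge added for invertibility), and the data, step size, and iteration count are arbitrary.

```python
import numpy as np

def fit_transfer(Xs, ys, lam1, lam2, Q, n_iters=200, lr=5e-3):
    """Alternating estimation sketch for Eqn. (16): given Omega, take
    (sub)gradient steps on the patient coefficients W; given W, update
    Omega. Xs/ys are per-patient feature matrices and cell density
    measurements; W has one column per patient."""
    N, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((d, N))
    Omega = np.eye(N)
    for _ in range(n_iters):
        Omega_inv = np.linalg.inv(Omega)
        for k in range(N):
            grad = -2.0 * Xs[k].T @ (ys[k] - Xs[k] @ W[:, k])  # loss term
            grad += lam1 * np.sign(W[:, k])       # subgradient of the L1 term
            grad += 2.0 * lam2 * (W @ Omega_inv)[:, k]  # d/dW tr(W Om^-1 W^T)
            W[:, k] -= lr * grad
        Omega = W.T @ W / Q + 1e-3 * np.eye(N)    # assumed analytic update
    return W, Omega
```

The off-diagonal of the learned Ω then reflects how strongly the patients' models were coupled, without pre-specifying the correlation between patients.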
The image-localized biopsies acquired at step 210 may be T2-weighted images, T1-weighted images acquired with a contrast agent, or any other suitable magnetic resonance image, parametric map generated from magnetic resonance images (e.g., fractional anisotropy maps, relative cerebral blood volume maps), and so on. The multiparametric MR images acquired at step 220 may include T2-weighted images, T1-weighted images acquired with a contrast agent, images acquired with an echo-planar imaging (EPI) pulse sequence and with a contrast agent, mean diffusivity (MD), fractional anisotropy (FA), relative cerebral blood volume (rCBV), and the like. For instance, in a glioma patient cohort, various MRI sequences containing complementary information may be used to assist clinical decision making, including T1-weighted imaging, which can depict bulk tumor and blood-brain-barrier disruption; T2-weighted imaging, which can depict the non-specific region surrounding the bulk tumor; diffusion tensor imaging (DTI), which can be used to measure white matter infiltration; and perfusion imaging, which can be used to measure microvessel morphology. The rCBV metric, which can be computed based on images obtained with perfusion imaging, may be used as a marker of microvessel blood volume on T2 perfusion MRI. Mean diffusivity (MD) may be used to image bulk water movement measured on DTI and may be a marker of cell density. Fractional anisotropy (FA) may provide for directional water movement measured on DTI and may be a marker of white matter integrity/invasion. EPI+contrast may also be a marker of cell density. Mapping intratumoral cell density distribution can take advantage of multi-sequence or multiparametric MRI. Labeled samples, which may be biopsy samples, and unlabeled samples are generated at step 230.
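The localized features z used throughout can be illustrated with a simple window-statistics sketch. The choice of summary statistics (mean and standard deviation per contrast) is an assumption for illustration; only the roughly 8×8-voxel, biopsy-sized region is stated by the disclosure.

```python
import numpy as np

def localized_features(images, center, size=8):
    """Extract a feature vector z for one biopsy location: the mean and
    standard deviation of each MRI contrast over a size x size voxel
    window centered on the biopsy (8x8 matches the stated footprint)."""
    r = size // 2
    row, col = center
    feats = []
    for img in images:                      # one 2-D array per contrast
        patch = img[row - r:row + r, col - r:col + r]
        feats.extend([patch.mean(), patch.std()])
    return np.array(feats)
```

Stacking one such vector per biopsy location (labeled) and per ROI voxel (unlabeled) yields the z_i entering the graph weights of Eqn. (3).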
These inputs may be integrated under a graph-based semi-supervised learning (SSL) framework to train, at step 260, a predictive function between localized MRI features and one or more biological features, such as cell density. In the ML-PI integration, biopsy samples are used to minimize the error between predicted and pathological cell density (considered as ground truth). The PI map and multiparametric magnetic resonance images are converted to a graph Laplacian matrix, Ω, which encodes the similarities between voxel intensities in the multivariate space (PI and multiparametric magnetic resonance images) to regularize the predictive function of ML-PI. Once the predictive function is estimated, it can be used at step 270 to generate a cell density map of the spatial distribution from low to high cell density within a region-of-interest (ROI) using localized MRI features. As described above, this process can be adapted to integrate mechanistic models other than a PI model and to estimate biological features other than cell density.

The contribution of each feature (e.g., imaging features and PI-estimated density, or other biological features) to the prediction made by the hybrid machine learning and mechanistic model (e.g., a hybrid ML-PI model) may be quantified. All of the included MRI sequences and PI are biologically relevant to tumor cell density. Therefore, inclusion of them as features in building the ML-PI model may be valuable, while their relative contributions may vary. In one configuration of the present disclosure, instead of employing feature selection (i.e., a step prior to building a predictive model with the purpose of removing irrelevant features), a post-processing step that identifies how much each feature contributes to the prediction may be used.
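The conversion of the edge weights into the graph Laplacian Ω = D − W, including the ε-thresholding sparsification described earlier, can be sketched as follows; the edge weights and threshold in the example are arbitrary.

```python
import numpy as np

def sparsified_laplacian(W, eps):
    """Build the graph Laplacian Omega = D - W used to regularize the
    hybrid model: drop edges with weight <= eps, then form the vertex
    degree matrix D from the surviving edge weights."""
    Ws = np.where(W > eps, W, 0.0)
    np.fill_diagonal(Ws, 0.0)               # no self-loops
    D = np.diag(Ws.sum(axis=1))             # vertex degree matrix
    return D - Ws
```

By construction, each row of Ω sums to zero, and the quadratic form f^T Ω f equals one half of the sum of w_ij (f_i − f_j)² over ordered pairs (i.e., the sum over unordered edges, the penalty described in Eqn. (5)).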
For instance, let X be a feature used in the hybrid machine learning and mechanistic model (e.g., the ML-PI model), which can be a feature computed from an MRI sequence or biological feature data (e.g., PI-estimated cell density). A score, s(x), which represents the contribution of that feature, X, may be computed. As mentioned above, in some implementations these contributions can be quantified by adapting a Relief algorithm, which was originally developed as a feature selection algorithm for supervised learning models. In one configuration of the present disclosure, the adapted Relief algorithm is used as a post-analysis algorithm for feature contribution analysis of the hybrid machine learning and mechanistic models described in the present disclosure. The score, s(x), of a feature can be defined as follows. The training data, T, from which the hybrid machine learning and mechanistic model has been built includes both labeled and unlabeled samples, as described above. Letting i and ir be samples in the training data, T, ir is the rth nearest neighbor of i on the graph, G. Furthermore, the biological feature data (e.g., cell density) predicted for the two samples by the hybrid machine learning and mechanistic model can be referred to as ŷi and ŷir, and their respective measurements on the feature, X, can be referred to as xi and xir. The definition of the score, s(x), can be based on the difference between two probabilities:

s(x) = P(xi and xir are different | ŷi and ŷir are different) − P(xi and xir are different | ŷi and ŷir are similar).   (17)

The first term represents the probability that the feature, X, is able to separate samples with different prediction values, while the second term represents the probability that the feature, X, separates samples with similar prediction values. The larger the first probability and the smaller the second, the higher the value of the score, s(x). Furthermore, using Bayes' rule, Eqn. (17) can be written as:

s(x) = [P(ŷi and ŷir are diff. | xi and xir are diff.) × P(xi and xir are diff.)] / P(ŷi and ŷir are diff.) − [{1 − P(ŷi and ŷir are diff. | xi and xir are diff.)} × P(xi and xir are diff.)] / [1 − P(ŷi and ŷir are diff.)].   (18)

The format of s(x) in Eqn. (18) makes it relatively easier than Eqn. (17) to develop an algorithm to estimate s(x). For instance, m samples can be randomly selected from the training data, T. For each sample, its k nearest neighbors ir, r = 1, . . . , k, are found. Then, the probabilities in Eqn. (18) are estimated in order to estimate the score, s(x), using lines 7-9 of the following algorithm:

Algorithm 1 Example Relief-ML-PI
Input: measurement data xi and predicted response ŷi for each sample in training set T; tuning parameters m, k.
Output: s(x)
1: Initialize:
2: s(x) ← 0; Ndy(x) ← 0; Ndx(x) ← 0; Ndy&dx(x) ← 0;
3: for i = 1 to m do
4:   Randomly select a sample i from T;
5:   Find k nearest neighbors for sample i, i1, . . . , ik, on graph G;
6:   for r = 1 to k do
7:     Ndy(x) ← Ndy(x) + d(ŷi, ŷir) × δ(i, ir);
8:     Ndx(x) ← Ndx(x) + d(xi, xir) × δ(i, ir);
9:     Ndy&dx(x) ← Ndy&dx(x) + d(ŷi, ŷir) × d(xi, xir) × δ(i, ir);
10:   end for
11: end for
12: s(x) ← Ndy&dx(x)/Ndy(x) − [Ndx(x) − Ndy&dx(x)]/[m − Ndy(x)];

In this algorithm,

d(ŷi, ŷir) = |ŷi − ŷir| / [max(ŷj | j ∈ T) − min(ŷj | j ∈ T)],
d(xi, xir) = |xi − xir| / [max(xj | j ∈ T) − min(xj | j ∈ T)],

are the normalized differences between the response variables or feature values of two samples, and

δ(i, ir) = δ′(i, ir) / Σl=1, . . . , k δ′(i, il), where δ′(i, ir) = e^−(rank(i, ir)/σ)².

δ′(i, ir) weights each of the k nearest neighbors for sample i and δ(i, ir) normalizes the weights. The rank of the k nearest neighbors may be used instead of computing the numerical distance to make sure different samples are equally accounted for. In some configurations, the biopsy samples used to build the ML-PI model may be biased toward high cell density. For example, in one example study, a dataset was used in which there was a concentration of samples above 50% density, with mean density equal to 63%.
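The Relief-style scoring can be sketched in code as follows. This is a simplified, hypothetical implementation: neighbors are found in the normalized feature space as a stand-in for neighbors on the graph G, all features are scored in one vectorized pass, and the defaults for m, k, and σ are illustrative rather than taken from the study.

```python
import numpy as np

def relief_ml_pi(X, y_hat, m=100, k=5, sigma=2.0, seed=0):
    """Per-feature contribution scores s(x) in the spirit of Algorithm 1.
    X: (n_samples, n_features) feature matrix; y_hat: model predictions."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # normalize features
    yn = (y_hat - y_hat.min()) / (y_hat.max() - y_hat.min())    # normalize response
    w = np.exp(-((np.arange(1, k + 1) / sigma) ** 2))
    w /= w.sum()                                                # rank-based delta(i, i_r)
    N_dy, N_dx, N_dydx = 0.0, np.zeros(p), np.zeros(p)
    for i in rng.integers(n, size=m):
        d2 = np.sum((Xn - Xn[i]) ** 2, axis=1)
        nbrs = np.argsort(d2)[1:k + 1]          # k nearest neighbors (skip i itself)
        d_y = np.abs(yn[i] - yn[nbrs])          # d(y_i, y_ir)
        d_x = np.abs(Xn[i] - Xn[nbrs])          # d(x_i, x_ir), one column per feature
        N_dy += d_y @ w
        N_dx += w @ d_x
        N_dydx += (w * d_y) @ d_x
    return N_dydx / N_dy - (N_dx - N_dydx) / (m - N_dy)         # line 12 of Algorithm 1
```

A feature whose differences track the prediction differences scores above zero, while an irrelevant feature scores near zero.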
The imbalance of biopsy samples toward higher densities may be because of the inherent difficulty in acquiring low-density samples. This imbalance can create bias in model training; i.e., it will tend to train a model that over-predicts the cell density of any given sample. To address this issue, in some instances low density samples can be weighted more than high density samples. However, the ML-PI model aims to predict the numerical density on a continuous scale, for which sample weighting is not straightforward, so this implementation may not be the most convenient in all instances. Alternatively, the biopsy samples of each patient can be augmented with "virtual" biopsy samples, for which the density measurement takes the value generated from PI. In this way, an augmented dataset that contains balanced samples with density ranging from high to low can be generated and used. Because the PI-estimated density of each virtual biopsy sample will be treated the same as pathologically measured density, the locations of the virtual biopsies can preferentially be selected as those where PI estimations are accurate. In some instances, a procedure is provided to guide virtual biopsy site selection according to both biological and statistical criteria. The purposes of the biological criteria are to avoid certain anatomical structures of the brain (e.g., skull, midline) where PI estimation is known to be inaccurate and to appeal to certain sub-areas of the tumor where PI estimation is highly likely to be accurate. Statistical criteria include considerations on the spatial consistency of PI estimations and on properties of the virtual biopsy samples in the feature space to facilitate statistical model estimation. Combining imaging data with mathematical models of brain tumor proliferation integrates the advantages of having empirical information from images with scientific knowledge of the underlying cellular mechanisms of tumor growth.
Glioblastoma ranks among the most lethal of all human cancers. Poor survival is largely attributed to tumoral invasion and intratumoral heterogeneity (sub-regions within the same tumor having different molecular signatures and therapeutic sensitivities). The hybrid models described in the present disclosure, which integrate machine learning built from multiparametric MRI features with mechanistic models of biological or physiological processes (e.g., tumor invasion, which may be modeled using a proliferation-invasion, PI, mechanistic model), may increase accuracy in predicting regional biological features (e.g., tumor cell density) for each patient. As described above, imaging data-driven machine learning (e.g., graph-based semi-supervised learning) may be integrated with mechanistic models. Biopsy samples used in training these hybrid machine learning and mechanistic model(s) may be augmented with virtual biopsy samples guided by PI, effectively tackling sample imbalance and improving statistical learning power. In some instances, a Relief-ML-PI algorithm, adapted from the Relief algorithm, may be used to perform post-analysis to quantify contributions from different MRI sequences and PI, which may advance model validation and promote understanding of tumor bio-physiology. In one example study, the ML-PI framework described in the present disclosure was implemented to generate cell density maps for a clinical cohort of primary GBM patients undergoing surgical biopsy and resection. In this study, a high accuracy in cell density prediction was achieved in comparison with competing methods. PI was found to contribute most significantly to the prediction, followed by the MRI sequences T1+C, FA, T2W, and rCBV, all of which have been shown to be relevant to cell density in existing imaging studies. Predicted cell density maps were generated for each patient across the tumor mass and BAT, allowing for precision treatment.
Patients with clinically suspected glioma undergoing preoperative stereotactic multi-parametric MRI for surgical biopsy and/or resection were recruited. The absence of previous treatment was confirmed. Approval was obtained from the institutional review boards and informed consent was obtained from each subject prior to enrollment. 82 biopsy samples were collected from 18 glioma patients, with each patient having 2-14 biopsy samples. Six multiparametric images were included in the study, including T1+C, T2W, EPI+C, MD, FA, and rCBV. Cell density predictions were generated for the abnormality shown on T2W (called the T2W ROI hereafter), which includes both the tumor mass enhanced on T1+C and the non-enhanced BAT. The latter is known to harbor residual tumor cells after resection, which lead to treatment failure and recurrence. The T2W ROI of each tumor was manually segmented by a board-certified neuroradiologist. An 8×8 voxel box was placed at the location of co-registered images that corresponded to each biopsy sample. The average gray-level intensity over the 64 voxels within the box was computed for each image sequence. In addition to computing features for the biopsy samples (i.e., labeled samples), features were also computed for unlabeled samples in the following way. One slice of MRI was chosen for each patient, which was approximately the cross-section that included a balanced amount of enhancing mass and non-enhancing BAT. Furthermore, 8×8 voxel boxes were placed one pixel apart on the T2W ROI, and the same image features as those of the biopsy samples were computed for each box. Using the T1+C and T2W images of each patient as input, voxel-wise density estimation was generated by the PI model. The average PI density over the pixels in each 8×8 box on the selected slice was computed.
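The 8×8 box-averaging used to featurize each labeled or unlabeled sample can be illustrated with a short sketch. The sequence names and the convention of anchoring the box at a given row and column of the co-registered images are assumptions for illustration.

```python
import numpy as np

def box_features(images, row, col, box=8):
    """Mean gray-level intensity over a box x box window for each image
    sequence. `images` maps a sequence name (e.g., 'T1+C', 'T2W') to a 2D
    co-registered image array; (row, col) is the box's top-left corner."""
    feats = {}
    for name, img in images.items():
        patch = img[row:row + box, col:col + box]
        feats[name] = float(patch.mean())  # average over the 64 voxels
    return feats
```

The same helper applies unchanged to real biopsy sites, unlabeled boxes tiled across the T2W ROI, and virtual biopsy sites.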
To provide a balanced dataset for ML-PI model training, virtual biopsies were identified for each patient (if necessary) to balance the high density samples with "virtual" low density samples according to the steps described above. A total of 39 virtual biopsy samples were added, with each patient having 0-6 samples. The histogram of pathological density for the real biopsy samples in the dataset used in this study indicated a clear imbalance toward high density. A histogram of the augmented samples indicated good balance. Furthermore, for each virtual biopsy sample, the same approach was used to compute imaging features and average PI density as was used for real biopsy samples. Virtual biopsy samples were used in model training. The virtual biopsy selection method for each patient included the following procedure. The number of biopsy samples with density greater than 70% was counted and denoted as r. The number of real biopsies with low density was denoted as r′. The number of virtual biopsy samples with low density (e.g., less than 30%) that were to be found, in order to create balanced samples for the patient, was computed as v=r−r′. The BAT for the patient was located by subtracting the ROI segmented on T1+C from the ROI segmented on T2W. On the PI-estimated density map over the BAT, a sub-area from which to take the virtual biopsy was selected according to a set of biological criteria. As one example, the following biological criteria were used: (1) the sub-area needs to be away from the skull and the midline of the brain, since PI estimation tends to be less accurate at locations with physical barriers; and (2) the sub-area should be close to the periphery of the T2W ROI, where there is a much lower chance of harboring high cell density.
Considering the spatial continuity of the cell density distribution, the PI estimation in a neighborhood of the biopsy sample should be more likely to be accurate if there is a real biopsy sample with low density whose PI density is also low. If the density of the real biopsy sample disagrees with the PI density, the neighborhood of the sample should be avoided. On the sub-area that was picked, the following statistical criteria were further applied to select virtual biopsy samples. First, the spatial consistency of PI density was considered. For each pixel in the sub-area, an 8×8 voxel box was placed around it. Then, the mean and variance of PI densities over the 64 pixels within the box were computed. The boxes with a low mean (e.g., less than 30%) and a low variance were retained as potential virtual biopsy samples. Next, separation in the imaging feature space was considered. Good virtual biopsy samples should be at a certain distance from each other in the input (imaging features) space (called leverage samples in statistics) in order to stabilize model fitting. To find the leverage samples, a highly flexible and efficient clustering algorithm (e.g., DBSCAN) can be used to cluster the surviving boxes based on their imaging features. Parameters of DBSCAN are set to produce approximately v clusters. Then, one box from each cluster was picked as the virtual biopsy sample. For real biopsies, pre-operative conventional MRI, including T1-weighted contrast-enhanced (T1+C) and T2-weighted sequences (T2W), was used to guide biopsy selection. Each neurosurgeon collected an average of 5-6 tissue specimens from each tumor by using stereotactic surgical localization, using the smallest possible craniotomy diameters to minimize brain shift. Specimens were collected from both the enhancing mass (as seen on T1+C) and the non-enhancing BAT (as seen on T2W) for each tumor.
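The leverage-sample selection can be illustrated with the sketch below. The disclosure clusters the candidate boxes with DBSCAN and keeps one box per cluster; to keep this sketch dependency-free, a greedy farthest-point rule is substituted, which serves the same goal of picking roughly v candidates that are well separated in imaging-feature space.

```python
import numpy as np

def pick_virtual_biopsies(candidate_features, v):
    """Pick v well-separated candidates (returned as indices into
    candidate_features, an n x p array of imaging features for boxes that
    passed the low-mean/low-variance screen). Greedy farthest-point stand-in
    for the DBSCAN-based selection described in the text."""
    feats = np.asarray(candidate_features, dtype=float)
    picks = [0]                                    # seed with the first candidate
    while len(picks) < min(v, len(feats)):
        d = np.min(np.linalg.norm(feats[:, None, :] - feats[picks], axis=2), axis=1)
        d[picks] = -1.0                            # never re-pick a chosen box
        picks.append(int(np.argmax(d)))            # farthest from current picks
    return picks
```

Either approach spreads the virtual biopsies across the feature space so that the v = r − r′ added low-density samples act as stabilizing leverage points rather than near-duplicates.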
The neurosurgeons recorded biopsy locations via screen capture to allow subsequent coregistration with multiparametric MRI datasets. The biopsy tissue specimens were reviewed by a neuropathologist blinded to diagnosis and assessed for tumor content. Taking into account all visible cells (neurons, inflammatory cells, reactive glia, tumor cells, etc.), the percentage of tumor nuclei was estimated. Before applying ML-PI, a graph was constructed for each patient/tumor (called the target patient hereafter). Vertices of the graph corresponded to boxes placed on the T2W ROI of the selected slice for the target patient as well as biopsy samples from other patients. As described above, the ML-PI model includes three parameters that can be tuned: γA, γI, and η. The tuning parameter η is the width of the radial basis function kernel, K(xi, xj) = e^(−∥xi − xj∥²/(2η²)). The tuning ranges used were γI, γA ∈ {10^−10, . . . , 10^4}; η ∈ {10^−1, . . . , 10^2}. Two tuning strategies were compared in this example study: patient-specific tuning and uniform tuning. The former finds the optimal tuning parameters for each patient while the latter assumes the same optimal tuning parameters across all patients. In patient-specific tuning, an ML-PI model was trained for each patient using the augmented biopsy samples from other patients in the loss term. No real or virtual biopsy samples from the target patient were used in training in order to avoid overfitting. Then, the trained model was used to predict the real biopsy samples of the target patient. The optimal tuning parameters were those that minimized the mean absolute prediction error (MAPE) of the target patient. In uniform tuning, a single set of tuning parameters that minimized the MAPE across all patients was sought. Theoretically, uniform tuning should perform no better than patient-specific tuning.
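The kernel, the error criterion, and the tuning grid can be written out concretely. The grid values mirror the quoted ranges (decade steps assumed); the tuning helper takes placeholder train/predict callables, since the full ML-PI fitting routine is beyond this sketch.

```python
import numpy as np
from itertools import product

def rbf_kernel(xi, xj, eta):
    """K(xi, xj) = exp(-||xi - xj||^2 / (2 * eta^2))."""
    return float(np.exp(-np.sum((np.asarray(xi) - np.asarray(xj)) ** 2)
                        / (2.0 * eta ** 2)))

def mape(y_true, y_pred):
    """Mean absolute prediction error on cell density."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Hypothetical grids mirroring the quoted tuning ranges.
GAMMAS = [10.0 ** e for e in range(-10, 5)]   # gamma_I, gamma_A in {1e-10, ..., 1e4}
ETAS = [10.0 ** e for e in range(-1, 3)]      # eta in {1e-1, ..., 1e2}

def patient_specific_tuning(train_fn, predict_fn, x_biopsy, y_biopsy):
    """Return the (gamma_A, gamma_I, eta) triple minimizing MAPE on the target
    patient's real biopsies; train_fn/predict_fn are ML-PI placeholders."""
    return min(product(GAMMAS, GAMMAS, ETAS),
               key=lambda p: mape(y_biopsy, predict_fn(train_fn(*p), x_biopsy)))
```

Uniform tuning would instead minimize the MAPE summed over all patients with a single triple, which can do no better than the per-patient optima.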
Referring toFIG.3A, examples of non-limiting predicted cell density maps are shown overlaid on the T2W image for two patients (8 and 16) by three different models. Gradation in the contrast represents 100%-0% density. A circle indicates the location of a biopsy sample. For patient 8, the pathological density of the biopsy is 90% and the predicted densities by ML-PI, PI, and ML are 79.0%, 59.2%, and 56.4%, respectively. For patient 16, the pathological density of the biopsy is 70% and the predicted densities by ML-PI, PI, and ML are 79.4%, 82.9%, and 54.9%, respectively. Referring toFIG.3B, examples of non-limiting predicted density by ML-PI, PI, and ML are shown against pathological density for 82 biopsy samples; predicted density by ML-PI, PI, and ML is also shown against pathological density for 33 biopsy samples in the non-enhancing (BAT) region. Additionally, r denotes the Pearson correlation coefficient. The effect of the three tuning parameters on model accuracy when allowed to be patient-specific was also investigated in this example study. A third tuning strategy was added to facilitate these comparisons. This third tuning strategy was referred to as partially-uniform tuning, in which two of the three tuning parameters were kept the same across all patients while the remaining one was allowed to vary from patient to patient. This resulted in three models corresponding to γA, γI, or η as the parameter allowed to be patient-specific, respectively. In general, patient-specific tuning of γA resulted in a significantly improved MAPE and Pearson correlation (p=0.023 and 0.011). Patient-specific tuning of η did not appear to result in a significantly improved MAPE and Pearson correlation; however, the improvement in MAPE approached the 0.05 significance threshold (p=0.087 and 0.17). Patient-specific tuning of γI did not appear to significantly improve the MAPE and Pearson correlation (p=0.22 and 0.35).
Based on these results, it is contemplated that patient-specific tuning of γA alone may not significantly deteriorate the performance in terms of MAPE and Pearson correlation (p=0.14 and 0.39), while patient-specific tuning of η alone shows a greater difference in MAPE and Pearson correlation (p=0.057 and 0.044) and γI exhibits a significant deterioration in MAPE and Pearson correlation (p=0.012 and 0.014). These results indicate that γI (and, to some extent, η) may require less sensitive tuning between patients, suggesting that the Laplacian matrix that incorporates PI similarities successfully accounts for patient differences (thus not necessitating a patient-specific γI). In one non-limiting example, a predicted cell density map was generated for the T2W ROI in order to guide neurosurgery and radiation therapy. In this experiment, the trained ML-PI model was used to predict tumor cell density on every 8×8 voxel box placed one pixel apart on the T2W ROI. This generated a predicted density map on the T2W ROI. ML-PI was able to predict a wider spread of density than PI alone, making it possible to capture high-density regions in the BAT. Referring toFIG.4, an example of the contributions of PI and MRI sequences to ML-PI cell density prediction is depicted. Using Relief-ML-PI, a contribution score was computed for each image feature (one feature per MRI sequence) and PI from the ML-PI model specific to each patient. In one non-limiting example, to identify the contributions aggregated over all the patients, the score for each feature within each patient was normalized to be between 0 and 1 by dividing the score by the sum over the scores of all the features. Then, the normalized scores from each patient were added together to produce an aggregated score showing the contribution from each feature. From these results, it is contemplated that PI contributes the most, followed by T1+C, FA, T2, and rCBV, all of which are relevant to cell density.
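The per-patient normalization and cross-patient aggregation of the contribution scores can be sketched as follows, assuming the scores are arranged as a patients-by-features array (the feature ordering is up to the caller).

```python
import numpy as np

def aggregate_contributions(scores_by_patient):
    """Normalize each patient's per-feature scores to sum to 1 (so each lies
    between 0 and 1), then sum across patients to get an aggregate
    contribution per feature."""
    s = np.asarray(scores_by_patient, dtype=float)
    normalized = s / s.sum(axis=1, keepdims=True)  # per-patient normalization
    return normalized.sum(axis=0)                  # aggregate over patients
```

The per-patient normalization keeps a patient with large raw Relief scores from dominating the aggregate ranking of features.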
Thus, systems and methods are provided that utilize the above-described ML-PI model, for example, to use multiparametric MRI and PI and to regularize tumor cell density prediction under a graph-based SSL framework. ML-PI had capabilities of learning patient-specific relationships between imaging features and cell density, and was found to have a greater prediction accuracy than ML or PI alone when applied to a GBM patient cohort from BNI/MCA. Additionally, ML-PI showed a more balanced prediction in the T2W ROIs when compared to PI, while the latter underestimated the cell density, indicating that ML-PI was more capable of capturing high density regions in BAT. The Relief-ML-PI technique can determine contributions of each individual feature to ML-PI prediction. PI contributed most significantly to the prediction, followed by MRI sequences rCBV and MD. This highlighted the utility of incorporating mechanistic models in the form of PI to help improve tumor cell density prediction. Referring now toFIG.5, an example of a system500for generating and implementing a hybrid machine learning and mechanistic model in accordance with some embodiments of the systems and methods described in the present disclosure is shown. As shown inFIG.5, a computing device550can receive one or more types of data (e.g., multiparametric MRI data, image-localized biopsy data, cell density map data, biological feature data) from image source502. In some embodiments, computing device550can execute at least a portion of a biological feature mapping system504to generate biological feature maps (e.g., cell density maps) or otherwise measure or predict biological features from data received from the image source502. 
Additionally or alternatively, in some embodiments, the computing device550can communicate information about data received from the image source502to a server552over a communication network554, which can execute at least a portion of the biological feature mapping system504to generate biological feature maps (e.g., cell density maps) or otherwise measure or predict biological features from data received from the image source502. In such embodiments, the server552can return information to the computing device550(and/or any other suitable computing device) indicative of an output of the biological feature mapping system504to generate biological feature maps (e.g., cell density maps) or otherwise measure or predict biological features from data received from the image source502. In some embodiments, computing device550and/or server552can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on. The computing device550and/or server552can also reconstruct images from the data. In some embodiments, image source502can be any suitable source of image data (e.g., measurement data, images reconstructed from measurement data), such as an MRI system, another computing device (e.g., a server storing image data), and so on. In some embodiments, image source502can be local to computing device550. For example, image source502can be incorporated with computing device550(e.g., computing device550can be configured as part of a device for capturing, scanning, and/or storing images). As another example, image source502can be connected to computing device550by a cable, a direct wireless link, and so on. 
Additionally or alternatively, in some embodiments, image source502can be located locally and/or remotely from computing device550, and can communicate data to computing device550(and/or server552) via a communication network (e.g., communication network554). In some embodiments, communication network554can be any suitable communication network or combination of communication networks. For example, communication network554can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on. In some embodiments, communication network554can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks. Communications links shown inFIG.5can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on. Referring now toFIG.6, an example of hardware600that can be used to implement image source502, computing device550, and server552in accordance with some embodiments of the systems and methods described in the present disclosure is shown. As shown inFIG.6, in some embodiments, computing device550can include a processor602, a display604, one or more inputs606, one or more communication systems608, and/or memory610. In some embodiments, processor602can be any suitable hardware processor or combination of processors, such as a central processing unit (“CPU”), a graphics processing unit (“GPU”), and so on.
In some embodiments, display604can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs606can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on. In some embodiments, communications systems608can include any suitable hardware, firmware, and/or software for communicating information over communication network554and/or any other suitable communication networks. For example, communications systems608can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems608can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on. In some embodiments, memory610can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor602to present content using display604, to communicate with server552via communications system(s)608, and so on. Memory610can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory610can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory610can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device550. In such embodiments, processor602can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server552, transmit information to server552, and so on. 
In some embodiments, server552can include a processor612, a display614, one or more inputs616, one or more communications systems618, and/or memory620. In some embodiments, processor612can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, display614can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs616can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on. In some embodiments, communications systems618can include any suitable hardware, firmware, and/or software for communicating information over communication network554and/or any other suitable communication networks. For example, communications systems618can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems618can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on. In some embodiments, memory620can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor612to present content using display614, to communicate with one or more computing devices550, and so on. Memory620can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory620can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory620can have encoded thereon a server program for controlling operation of server552. 
In such embodiments, processor612can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices550, receive information and/or content from one or more computing devices550, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on. In some embodiments, image source502can include a processor622, one or more image acquisition systems624, one or more communications systems626, and/or memory628. In some embodiments, processor622can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, the one or more image acquisition systems624are generally configured to acquire data, images, or both, and can include an RF transmission and reception subsystem of an MRI system. Additionally or alternatively, in some embodiments, one or more image acquisition systems624can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of an MRI system or an RF subsystem of an MRI system. In some embodiments, one or more portions of the one or more image acquisition systems624can be removable and/or replaceable. Note that, although not shown, image source502can include any suitable inputs and/or outputs. For example, image source502can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on. As another example, image source502can include any suitable display devices, such as a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on. 
In some embodiments, communications systems626can include any suitable hardware, firmware, and/or software for communicating information to computing device550(and, in some embodiments, over communication network554and/or any other suitable communication networks). For example, communications systems626can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems626can include hardware, firmware and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on. In some embodiments, memory628can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor622to control the one or more image acquisition systems624, and/or receive data from the one or more image acquisition systems624; to reconstruct images from data; to present content (e.g., images, a user interface) using a display; to communicate with one or more computing devices550; and so on. Memory628can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory628can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory628can have encoded thereon, or otherwise stored therein, a program for controlling operation of image source502.
In such embodiments, processor622can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images) to one or more computing devices550, receive information and/or content from one or more computing devices550, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on. In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., random access memory (“RAM”), flash memory, electrically programmable read only memory (“EPROM”), electrically erasable programmable read only memory (“EEPROM”)), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media. Although the systems and methods described in the present disclosure have been described with respect to mechanistic models of biological and/or physiological processes, it will be appreciated that the hybrid machine learning and mechanistic models can be applicable to estimate feature data associated with other systems, too. In these instances, feature data can be mapped, measured, predicted, or otherwise estimated using a hybrid machine learning and mechanistic model that is suitably trained on relevant training data. 
Examples of other applications include atmospheric models, meteorological models, polling data models, and so on. Generally, these more general hybrid machine learning and mechanistic models can be used to map, measure, predict, or otherwise estimate feature data that may be spatially and/or temporally resolved data. Input data can include 2D and/or 3D maps of data relevant to the underlying mechanistic model used to augment the machine learning model. Such mechanistic models may include an adaptation of the proliferation-invasion model to mathematically describe a rate of change of a density of a given population of items as a function of invasion of the item into nearby locations and increase of items. Another example of an ecologically equivalent scenario is in predicting animal and insect repopulation of a forest that has been partially destroyed by fire and imaged by satellite. More generally, any spatial-temporal system that is traditionally viewed from a macroscopic view (e.g., biomedical or satellite imagery), but encompasses individual level behavior and population level dynamics could have feature maps generated using the systems and methods described in the present disclosure. The present disclosure has described one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.
11861476 | DETAILED DESCRIPTION It will be readily understood that the components of the embodiments of the invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations in addition to the described exemplary embodiments. Thus, the following more detailed description of the embodiments of the invention, as represented in the figures, is not intended to limit the scope of the embodiments of the invention, as claimed, but is merely representative of exemplary embodiments of the invention. Reference throughout this specification to “one embodiment” or “an embodiment” (or the like) means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” or the like in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in at least one embodiment. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art may well recognize, however, that embodiments of the invention can be practiced without at least one of the specific details thereof, or can be practiced with other methods, components, materials, et cetera. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention. The illustrated embodiments of the invention will be best understood by reference to the figures. The following description is intended only by way of example and simply illustrates certain selected exemplary embodiments of the invention as claimed herein. 
It should be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, apparatuses, methods and computer program products according to various embodiments of the invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises at least one executable instruction for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. Specific reference will be made here below toFIGS.1-4. It should be appreciated that the processes, arrangements and products broadly illustrated therein can be carried out on, or in accordance with, essentially any suitable computer system or set of computer systems, which may, by way of an illustrative and non-restrictive example, include a system or server such as that indicated at12′ inFIG.4. In accordance with an example embodiment, most if not all of the process steps, components and outputs discussed with respect toFIGS.1-3can be performed or utilized by way of a processing unit or units and system memory such as those indicated, respectively, at16′ and28′ inFIG.4, whether on a server computer, a client computer, a node computer in a distributed network, or any combination thereof. 
The problem with the service model is that information of the data owner is shared with not only the client, but also the service provider and possibly other data owners. In the current environment where information is treated much like a currency, giving away or exposing information for free is very undesirable for a data owner. In other words, data owners do not trust each other or the service provider, so the data owners want to ensure that the information stays private from other data owners and the service provider. The data owner also wants to ensure that any information that is not responsive to the query provided by the client is kept private from the client. Thus, there have been many attempts to either reduce or eliminate the amount of information of a data owner that is shared with entities other than the client. A conventional technique for reducing the amount of information that is shared with other entities is to use a fully trusted or semi-trusted service provider. In such a system the service provider is trusted with the information gathered from the data owners and is trusted to not misuse the information for purposes other than providing a response to the client query. Similarly, some systems may use multiple service providers that are all at least partially trusted and each service provider receives a portion of the information that is responsive to the query. The service providers then collaborate to provide the final response to the query. In such a technique, none of the service providers are provided enough information to be able to discern the full information set. However, these systems require that the service provider is trusted, which may be difficult for a data owner to accept. Additionally, in conventional systems, the client is not provided with a guarantee that the information provided is accurate or the best information accessible to the service provider.
Accordingly, an embodiment provides a system and method for using a service provider to provide a response to a query using a plurality of data owners while maintaining privacy among the data owners and between the data owners and the service provider. The service provider in the described system receives a query from a user or client. The service provider provides the query to each of the data owners that are connected to the service provider. Each of the data owners has a dataset having information responsive to the query and a local machine learning model that is trained using the dataset of the data owner. The data owners also work together, using secure multi-party computation, to train a meta-model and, after the meta-model is trained, each of the data owners has a share of the meta-model. Each data owner runs or evaluates the query using the local machine learning model of the data owner to compute an output responsive to the query. From the output, the data owner extracts meta-features. Additionally, the data owner hashes the model and the query to generate randomness to make the output private. The data owners secret share the model output with each other, thereby giving each of the data owners a share from each of the other data owners and the output of the data owner. Each data owner encrypts the shares corresponding to each of the other data owners, the output of the data owner, and sampled local differential privacy noise. Each data owner extracts meta-features from the encrypted model outputs (i.e., the encrypted shares of the other data owners and the output of the data owner) using meta-training samples that are produced from the dataset of the data owner and cluster centroids from the other data owners. Cluster centroids are generated by each data owner by clustering the meta-training samples of the data owner and identifying the cluster centroids of those meta-training samples.
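The secret-sharing step described above can be illustrated with a minimal additive-secret-sharing sketch. The field modulus, function names, and values here are illustrative assumptions and are not taken from the disclosure:

```python
import random

# Additive secret sharing over a prime field: a model output y is split
# into n random-looking shares that sum to y (mod P). Fewer than all n
# shares reveal nothing about y. P is a toy parameter chosen for the sketch.
P = 2**61 - 1  # a Mersenne prime used as an illustrative field modulus

def share(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares; only the sum of all shares recovers the secret."""
    return sum(shares) % P

model_output = 42  # stand-in for one data owner's model output
assert reconstruct(share(model_output, n=3)) == model_output
```

Because the sharing is additive, sums of shared values can be computed share-wise, which is what makes the later joint computations over the owners' outputs possible without revealing any individual output.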
The cluster centroids are then secret shared with the other data owners and the meta-training samples are kept secret from other data owners and the service provider. Using the meta-features from the data owners, the service provider generates a response to the query. In generating the response, the meta-features are evaluated using the meta-model to determine weights to be assigned to outputs of each of the individual data owner models. Thus, the purpose of the meta-model is to derive the importance that the query should place on the query response received from each of the data owners. The meta-model outputs weight vectors that are used by the query to place weights on the predictions or output provided by each local machine learning model within the ensemble. The weightings and which models are selected to be used in generating the response to the query are hidden by adding noise to the weight vectors. The final query response is generated by running an inner-product on the weight vector and the vector of the individual model outputs to hide the output of individual models. Such a system provides a technical improvement over current systems for providing query responses using a service provider that is connected to multiple data owners. The described system and method provides a technique that enables the data owners to collaborate to provide a response to the query, while ensuring that neither the other data owners nor the service provider will learn information about the information sets or models of the data owner. The system provides a method for a private inference using an ensemble of private models and a private meta-model. In other words, the described system allows each data owner to keep their data private from other entities. Additionally, the system enables the service provider to cryptographically prove the performance of the service even though the service includes different models that are trained from different data distributions.
The system is designed so that, unlike conventional systems, the system does not have to include one or more entities that are either semi-trusted or fully trusted. Additionally, the described system ensures fairness for clients during inferences from an ensemble of models by ensuring the same owner models are being used for all client queries of the same type. Thus, clients using the service can be guaranteed that the service provider cannot use different information sets or models for answering the same type of queries from different clients. In other words, if the service provider commits to a certain set of models or information sets as a part of its service, then every client using the service will have the same types of queries answered using the same set of models that the service provider committed to, thereby ensuring fairness for all the clients. The system enables a technique to ground the ensemble of models used in the query responses and cryptographically verify their use in inferences leading up to a query response. Such proof of performance and guaranteed fairness to clients is not contemplated using conventional systems. Thus, the described system provides a private and fair ensemble learning and inference using machine models or information sets of multiple data owners. FIG.1illustrates a method for using a service provider to provide a response to a query using a plurality of data owners while maintaining privacy among the data owners and between the data owners and the service provider. At101the service provider receives a query from a user or client. The service provider is connected with a plurality of data owners, each having at least one dataset that includes information responsive to the query. Additionally, each of the data owners has a local machine learning model that is trained on the dataset of a corresponding data owner. 
The data owner does not want to reveal information regarding the local machine learning model or underlying dataset to other data owners or the service provider. The group of local machine learning models across the data owners is referred to as an ensemble. The models within the ensemble are independently trained with respect to other models within the ensemble and can, therefore, be different types of models, have different data distributions, and be trained using different data as compared to other models within the ensemble. In other words, the models within the ensemble do not have to have any uniformity between models across the data owners. FIG.2provides an illustration of the setting of the described system. Clients201can access the service provider202to provide a query that can be run on data of data owners203A,203B, and203C. In such a system, a trust boundary204exists where the data owners203A-203C do not trust each other or the service provider. In the described system the service provider202is a machine-learning-as-a-service (MLaaS) service provider. In a typical MLaaS system, the service provider uses the information of the data owners to provide predictions to the client that are responsive to the query of the client. The service provider runs a machine-learning model on the query and then provides the predictions that are output by the machine-learning model to the client. The machine-learning model of the service provider is trained using the information sets of the data owners. In the described system the result is similar. However, in order to preserve privacy and fairness, some steps are performed by different entities of the described systems as compared to conventional systems, as described in more detail herein. Each of the data owners extracts a set of meta-features from the local machine learning model of the data owner that is trained using the dataset of the data owner.
In other words, the set of meta-features is computed by a data owner using the data owner's model and dataset. Additionally, the data owners collaborate to extract additional meta-features across the models of all the data owners. These meta-features will be referred to as global meta-features to distinguish from the local meta-features extracted by each data owner individually from their own model and dataset. To extract the global meta-features, the data owners perform a collaboration facilitated by the service provider. The collaboration may include a secure multi-party computation that securely extracts the meta-features from across the models of the data owners. In order to preserve privacy of the data owners, the global meta-features are extracted in an approximate way while also mostly preserving the accuracy of the models. Additionally, the efficiency of the protocol is maintained. From the meta-features and the global meta-features, the data owners work together, for example, using a multi-party computation algorithm that is encrypted and secure, to train a meta-model. While the local models are complex, the meta-model is a simpler model. For example, the local models may be many layered models, whereas the meta-model may be a one- or two-layer model. After the meta-model training, each data owner has a random share of the meta-model. This ensures that no single party can learn any information regarding the underlying local models or datasets, but allows the data owners to collaborate and use the meta-model shares together in a secure manner to answer a query. Thus, each data owner has a complex local model and a share of the simpler meta-model. The data owners can then work together in a secure collaboration facilitated by the service provider and using the local models and shared meta-model to answer a query provided by a client. At102the service provider provides the query to each of the plurality of data owners. 
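The sharing of the meta-model described above, where each data owner holds only a random share so that no single party learns it, can be sketched for the simple case of a linear meta-model layer: each owner applies its share of the weights to a meta-feature vector, and the partial results sum to the true output. The one-layer linear form, the toy modulus, and all values below are assumptions for illustration (the disclosure says only that the meta-model may be a one- or two-layer model):

```python
import random

P = 2**61 - 1  # toy field modulus; illustrative, not from the disclosure

def share_vector(vec, n):
    """Additively share each entry of `vec` among n owners (mod P)."""
    shares = [[random.randrange(P) for _ in vec] for _ in range(n - 1)]
    last = [(v - sum(s[j] for s in shares)) % P for j, v in enumerate(vec)]
    shares.append(last)
    return shares  # shares[i] is owner i's random-looking share of the vector

def local_partial(weight_share, features):
    """One owner's dot product of its weight share with the meta-feature
    vector; a single partial result reveals nothing about the true weights."""
    return sum(w * f for w, f in zip(weight_share, features)) % P

weights = [3, 1, 4]    # "true" meta-model weights (toy values)
features = [2, 7, 1]   # meta-features for one query (toy values)
partials = [local_partial(s, features) for s in share_vector(weights, n=3)]
# Summing the partial results recovers the true linear-layer output.
assert sum(partials) % P == sum(w * f for w, f in zip(weights, features))
```

Linearity is what makes this work: because the dot product distributes over the additive shares, the owners can jointly evaluate the layer without ever reconstructing the meta-model. Nonlinear layers would need a full MPC protocol rather than this share-wise shortcut.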
Each of the data owners evaluates the query on the local model of the data owner. This evaluation is performed independently at each data owner. In other words, each data owner evaluates the query using the local machine learning model of the data owner and irrespective of the models of other data owners or evaluation being performed by the other data owners. In performing the evaluation, the data owner and the client run a secure two-party computation protocol. This protocol is facilitated by the service provider who transmits messages between the entities. Using the protocol, the data owner computes the model output for the query using the local model corresponding to the data owner. In order to assist in ensuring privacy and prevent other entities from inferring information regarding the local models and underlying datasets, the data owner hashes the model and the query. The hash is used to generate randomness for subsequent steps where information is shared among different entities of the system. At103the service provider facilitates sharing of model output among the data owners. The sharing may be facilitated using a multi-party computation algorithm in order to ensure security and privacy. Each data owner will share the hashed model output generated by that data owner with all the other data owners. Thus, each data owner will have a hashed model output from each of the other data owners and the model output from their own local model. In sharing the model output, the data owners use a protocol so that the model output is secret shared with the other data owners. The data owners also sample local differential privacy noise using a standard differential privacy (DP) mechanism for private model inference. Each data owner then individually encrypts the model output share of the other data owners along with the sampled differential privacy noise. 
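The hashing step described above can be sketched as deriving a deterministic pseudorandom seed from the model bytes and the query, so that the same model and query always yield the same masking randomness for the subsequent sharing steps. The use of SHA-256 and the seeded PRG below are assumptions; the disclosure does not name a hash function:

```python
import hashlib
import random

def derive_randomness(model_bytes: bytes, query: str, num_values: int):
    """Seed a PRG from hash(model, query) and draw masking values.

    Deterministic: the same model and query always yield the same
    randomness, so outputs can be masked consistently across protocol
    steps without exchanging extra messages.
    """
    digest = hashlib.sha256(model_bytes + query.encode("utf-8")).digest()
    prg = random.Random(digest)  # toy PRG; a real protocol would use a CSPRNG
    return [prg.randrange(2**61 - 1) for _ in range(num_values)]

r1 = derive_randomness(b"model-weights-blob", "some client query", 4)
r2 = derive_randomness(b"model-weights-blob", "some client query", 4)
assert r1 == r2  # same model and query -> same masking randomness
assert r1 != derive_randomness(b"model-weights-blob", "other query", 4)
```

The blob `b"model-weights-blob"` is a hypothetical stand-in for a serialized model; any canonical serialization of the model would play the same role.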
When encrypting the model output share and noise, the data owner uses a public key that corresponds to the data owner of the share being encrypted. In other words, each model share and noise gets individually encrypted by each data owner using a public key corresponding to the data owner that generated the corresponding share. Each data owner also encrypts their own model output and noise using their own public key. At104the service provider receives a set of meta-features corresponding to the query from each of the plurality of data owners. Thus, the service provider receives a number of sets of meta-features, where the number is equal to the number of local training models used in the ensemble. It should be understood that one data owner may have multiple local training models and would perform the described steps for each local training model. In generating the set of meta-features, the data owners use a set of meta-training samples. The meta-training samples are generated from the underlying dataset. Each data owner clusters its own meta-training samples and identifies cluster centroids from the clusters. A fixed number of the cluster centroids are shared with the other data owners. The number of cluster centroids that will be used is the same number for all data owners. Using its own meta-training samples and the cluster centroids of the other data owners, each data owner runs a meta-feature extraction on the encrypted model outputs using a multi-party computation to extract the meta-features for the query. Thus, from the output of the model, the corresponding data owner extracts meta-features from the model. Unlike the training phase, during this phase, the extracted set of meta-features is based upon the query. In other words, since the query is evaluated against the model, the output of the model is based upon the query. Thus, the meta-features that are extracted from this output are also based upon the query.
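Cluster centroids of the kind described above could be produced with a simple k-means pass over each owner's meta-training samples; only the fixed number of centroids ever leaves the owner, while the samples themselves stay private. A pure-Python sketch — k-means with a deterministic farthest-point initialization is an assumption here, since the disclosure does not fix a clustering algorithm:

```python
def squared_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_centroids(samples, k, iters=20):
    """Return k cluster centroids of n-dimensional samples (lists of floats).

    Deterministic farthest-point initialization keeps the sketch
    reproducible; Lloyd iterations then refine the centroids.
    """
    centroids = [samples[0]]
    while len(centroids) < k:
        centroids.append(max(
            samples,
            key=lambda s: min(squared_dist(s, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in samples:
            dists = [squared_dist(s, c) for c in centroids]
            clusters[dists.index(min(dists))].append(s)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                centroids[i] = [sum(s[j] for s in cl) / len(cl)
                                for j in range(len(cl[0]))]
    return centroids

# Six toy meta-training samples forming two well-separated clusters;
# only the two centroids would be shared with the other data owners.
samples = [[0.0, 0.1], [0.1, 0.0], [0.0, 0.0],
           [5.0, 5.1], [5.1, 5.0], [5.0, 5.0]]
centroids = kmeans_centroids(samples, k=2)
```

Sharing centroids rather than samples is a lossy summary by design: the other owners learn roughly where an owner's meta-training data lies in feature space, but not the individual samples.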
The result is that each data owner has a local model evaluation result and a set of meta-features that are based upon the query. These sets of meta-features from each data owner are then provided to the service provider. Some examples of meta-features include complex models forward pass and post processing, feature-space neighborhood: K-NN for each sample based on the distribution of features, decision-space neighborhood: K-NN for each sample based on the distribution of output probabilities, different features of a neighborhood (e.g., local accuracy in the feature-space, extent of consensus in the feature-space, overall accuracy in the feature-space, degree of confidence for the input sample, etc.), and the like. Some example mathematical algorithms that can be used in the multi-party computation to extract the meta-features include DNN forward pass: ABY3 or Secure NN, feature-space neighborhood: Euclidean distance protocol and sorting protocol, decision-space neighborhood: sorting protocol, counting mismatch protocol, dot product protocol, no extra cost, and the like. Other meta-features may be extracted and other mathematical algorithms may be used to extract meta-features. The service provider determines, from the received sets of meta-features, whether a response to the query can be generated at105. If the response cannot be generated, the system may take no action at106. Alternatively, the system may provide an indication that no response can be generated. One reason that a response cannot be generated is if none of the data owners have information or enough information that would be responsive to the query. Another reason that a response cannot be generated is if a set of meta-features could not be generated by the data owners. If, on the other hand, a response can be generated, the service provider may generate a response to the query at107using the meta-model. 
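One of the neighborhood meta-features listed above, the decision-space K-NN, can be sketched as finding the meta-training samples whose output-probability vectors are nearest to the query's, then measuring local consensus among them. The Euclidean distance and the majority-fraction consensus measure below are illustrative assumptions, not choices fixed by the disclosure:

```python
def knn_consensus(query_probs, train_probs, train_labels, k=3):
    """Decision-space K-NN meta-feature: the fraction of the k nearest
    meta-training samples (by Euclidean distance between output-probability
    vectors) that agree on one label -- a rough 'extent of consensus'."""
    order = sorted(
        range(len(train_probs)),
        key=lambda i: sum((a - b) ** 2
                          for a, b in zip(query_probs, train_probs[i])),
    )
    nearest = [train_labels[i] for i in order[:k]]
    top = max(set(nearest), key=nearest.count)
    return nearest.count(top) / k

# Toy output-probability vectors and labels for four meta-training samples.
train_probs = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]
train_labels = [0, 0, 1, 1]
# Two of the query's three decision-space neighbors agree on label 0.
assert knn_consensus([0.85, 0.15], train_probs, train_labels, k=3) == 2 / 3
```

A feature-space neighborhood meta-feature would look the same except that the distance is computed between input feature vectors instead of output-probability vectors.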
To generate the response, the service provider evaluates the meta-model using the meta-features that were extracted by the data owners and provided to the service provider. The service provider facilitates a multi-party computation for the inference of the meta-model. The meta-features are provided as input to the meta-model. The outputs of the meta-model are weight vectors identifying weights to be applied to outputs of the individual models. In other words, the purpose of the meta-model is to derive the importance that the query should place on each of the individual local models. Thus, the meta-model outputs weight vectors that are used by the client query to place weights on the predictions provided as output from each of the models within the ensemble. To hide the weights that are applied to a local model and which models are selected and not selected in returning a response to the query, the system samples, using a differential privacy mechanism for private counting, a differential privacy noise vector to add to the weight vectors. This noise hides the individual models that are selected. The final output is the result of running an inner-product on the weight vector and the vector of the individual model outputs. That result is added to the product of the weight vector and the noise that was added to hide the output of the individual models. Stated differently, the final output may be calculated by an inner-product on <weight vector, vector of model outputs>+<weight vector, p1>, where p1is the noise used to hide the output of the individual models. This two-layered-differential-privacy approach protects both the identity of the models chosen from the ensemble (differential privacy for private counting) and the information about the training data from the model's prediction (differential privacy for model inference).
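The final aggregation above, the inner product of the weight vector with the model outputs plus the noise term <weight vector, p1>, can be sketched directly. The Laplace mechanism is used here as a representative differential-privacy noise source (the disclosure says only "a standard differential privacy mechanism"), and all numeric values are toy assumptions:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) by inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_ensemble_answer(weights, model_outputs, dp_scale, seed=0):
    """<weights, outputs> + <weights, noise>: the weighted ensemble answer,
    with a per-model noise vector (the p1 of the text) hiding each
    individual model's contribution."""
    rng = random.Random(seed)
    noise = [laplace_noise(dp_scale, rng) for _ in model_outputs]
    signal = sum(w * y for w, y in zip(weights, model_outputs))
    mask = sum(w * p for w, p in zip(weights, noise))
    return signal + mask

weights = [0.5, 0.3, 0.2]   # weight vector from the meta-model (toy values)
outputs = [1.0, 0.0, 1.0]   # per-owner model outputs (toy values)
answer = private_ensemble_answer(weights, outputs, dp_scale=0.1)
```

With the noise scale set to zero the answer reduces to the exact weighted sum; increasing the scale trades accuracy for stronger hiding of which models contributed what.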
The service provider then provides the result to the client providing the query, where the result is the outputs from the selected models and the weights so that the query knows what importance to place onto the outputs. The described system also provides a technique for benchmarking the performance of the service provider. In other words, the described system provides a technique that allows a user to determine if the service provider is fulfilling the performance guaranteed by the service provider. Thus, the described system provides provable performance by the service provider. To benchmark the performance, the system creates a zero-knowledge proof to prove the ensemble of models and the meta-model have the performance claimed by the service provider using hashes stored in a hash commitment directory. The performance proof is created from a public benchmarking dataset. The data owners perform a similar process as described above with respect to a query received from a client. However, in this case the benchmarking dataset is treated like the query. To generate the proof, each of the data owners creates a vector of outputs for each sample in the benchmarking dataset and also generates a commitment on the vector of outputs, for example a Pedersen vector commitment. These commitments are stored in the hash commitment directory. The data owners each locally compute the meta-features on the meta-training dataset generated from the benchmarking data and also compute their shares for the data that has samples from all data owners, similar to that discussed before. Commitments are generated on all the meta-features and, for the meta-features that involve all data owners, a multi-prover zero-knowledge proof is generated for the correct computation of that part using the commitments of the inputs. The meta-model is privately evaluated and commitments are generated on its outputs, which are the weight vectors.
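A Pedersen-style commitment of the kind referenced above binds a value m with randomness r as C = g^m · h^r mod p: the commitment hides m, yet the committer cannot later open it to a different value. The sketch below uses a scalar commitment with deliberately tiny, insecure parameters so the arithmetic is easy to follow; the disclosure's Pedersen vector commitment generalizes this to one commitment over a whole vector:

```python
# Toy Pedersen commitment over a small prime-order subgroup.
# WARNING: these parameters are far too small to be secure; they are
# illustrative assumptions only. Real deployments use large groups.
P = 1019   # safe prime modulus (toy): P = 2*Q + 1
Q = 509    # prime order of the quadratic-residue subgroup
G = 4      # generator of the order-Q subgroup (4 = 2^2 is a QR mod 1019)
H = 9      # second generator (9 = 3^2); log_G(H) assumed unknown

def commit(m: int, r: int) -> int:
    """Pedersen commitment C = G^m * H^r mod P."""
    return (pow(G, m % Q, P) * pow(H, r % Q, P)) % P

# Hiding: without r, the commitment reveals nothing about m.
c = commit(m=42, r=123)
# Homomorphic: commitments to m1 and m2 multiply to a commitment to m1 + m2,
# which is what lets proofs be run over committed model outputs.
assert commit(5, 7) * commit(10, 3) % P == commit(15, 10)
```

The homomorphic property shown in the last line is what makes the zero-knowledge inner-product argument workable: linear relations over committed model outputs and weight vectors can be proven without opening the commitments.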
Additionally, a multi-prover zero-knowledge proof is generated for the meta-model's correct evaluation. A multi-prover version of the zero-knowledge inner product argument is then run to obtain a proof of the benchmarking result starting from the vector commitments of the model evaluation and the weight vector. With the zero knowledge proofs, the system provides a fairness guarantee ensuring that all client queries of the same type are answered using the same ensemble of models. FIG.3illustrates an example system architecture for using a service provider to provide a response to a query using a plurality of data owners while maintaining privacy among the data owners and between the data owners and the service provider. A client301provides a query to a service provider who is in communication with multiple data owners303. Each data owner303has a local machine learning model304and a portion of a meta-model305. The service provider302facilitates a collaboration, using secure multi-party computation, between the data owners303to generate a response to the query provided by the client301. This collaboration ensures that information of a data owner is not revealed to other data owners or the service provider. Additionally, the data owners303can work together to generate a proof that the service provider302is performing as promised. This proof is generated using a hash commitment directory307and benchmarking data306. The result is a system that ensures privacy of data owner information while ensuring accuracy of responses to queries and performance of the service provider. As shown inFIG.4, computer system/server12′ in computing node10′ is shown in the form of a general-purpose computing device. The components of computer system/server12′ may include, but are not limited to, at least one processor or processing unit16′, a system memory28′, and a bus18′ that couples various system components including system memory28′ to processor16′.
Bus18′ represents at least one of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system/server12′ typically includes a variety of computer system readable media. Such media may be any available media that are accessible by computer system/server12′, and include both volatile and non-volatile media, removable and non-removable media. System memory28′ can include computer system readable media in the form of volatile memory, such as random access memory (RAM)30′ and/or cache memory32′. Computer system/server12′ may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system34′ can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus18′ by at least one data media interface. As will be further depicted and described below, memory28′ may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention. 
Program/utility40′, having a set (at least one) of program modules42′, may be stored in memory28′ (by way of example, and not limitation), as well as an operating system, at least one application program, other program modules, and program data. Each of the operating systems, at least one application program, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules42′ generally carry out the functions and/or methodologies of embodiments of the invention as described herein. Computer system/server12′ may also communicate with at least one external device14′ such as a keyboard, a pointing device, a display24′, etc.; at least one device that enables a user to interact with computer system/server12′; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server12′ to communicate with at least one other computing device. Such communication can occur via I/O interfaces22′. Still yet, computer system/server12′ can communicate with at least one network such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter20′. As depicted, network adapter20′ communicates with the other components of computer system/server12′ via bus18′. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server12′. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. This disclosure has been presented for purposes of illustration and description but is not intended to be exhaustive or limiting. Many modifications and variations will be apparent to those of ordinary skill in the art. 
The embodiments were chosen and described in order to explain principles and practical application, and to enable others of ordinary skill in the art to understand the disclosure. Although illustrative embodiments of the invention have been described herein with reference to the accompanying drawings, it is to be understood that the embodiments of the invention are not limited to those precise embodiments, and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the disclosure. The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. 
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. 
These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. 
DETAILED DESCRIPTION The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Many times when users access a document, the users do not have enough time to review the entire document, or the document may be too long to review entirely. In such instances, the users may wish to review information (e.g., insights) that provide links between concepts provided in the document. However, there are currently no systems that automatically highlight insights in a document for quick and easy review of the document. Some implementations described herein may provide an insight platform that utilizes machine learning models to identify insights in a document. For example, the insight platform may receive document information associated with a document, and may receive a request to identify insights in the document information. The insight platform may perform natural language processing on the document information to identify words, phrases, and sentences, and may utilize a first machine learning model with the words, the phrases, and the sentences to identify abstract insights, concrete insights, and non-insights. The insight platform may utilize a second machine learning model to match the abstract insights with particular concrete insights, and may utilize a third machine learning model to determine particular insights based on the non-insights. The insight platform may generate an insight document that includes the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights. The insight platform may determine recommended documents based on the insight document, and may provide the insight document and the recommended documents for display. 
The term document, as used herein, is to be broadly interpreted to include any machine-readable and machine-storable work product. A document may include, for example, a web page, an e-mail, a business listing, a file, a combination of files, one or more files with embedded links to other files, a news group posting, a blog, an e-book, and/or the like. In the context of the Internet, a common document is a web page. A document may include textual information, embedded information, such as meta information, images, hyperlinks, and/or the like, and/or embedded instructions, such as Javascript. In some implementations, metadata (e.g., face tags, computer vision labels, and other types of metadata) may relate to, be derived, and/or be associated with a document. The term insight, as used herein, is to be broadly interpreted to include any information in a document that provides links between concepts provided in the document, information in the document that provides an understanding of a true nature of the document, and/or the like. The term concept, as used herein, is to be broadly interpreted to include a general notion or idea provided in the document, an idea of something formed by combining the characteristics or particulars provided in the document (e.g., a construct), and/or the like. FIGS.1A-1Gare diagrams of an overview of an example implementation100described herein. As shown inFIG.1A, a user may be associated with a client device, a server device, and an insight platform. Assume that the user wishes to utilize the client device to access a document provided by the server device. As further shown inFIG.1A, and by reference number105, the server device may provide document information to the client device. 
In some implementations, the document information may include information associated with the document, such as textual information provided in the document, information indicating locations of the document at the server device (e.g., uniform resource locators (URLs)), information indicating folders storing information associated with the document in the server device, information indicating files associated with the document that are stored in the server device, and/or the like. As further shown inFIG.1A, the client device may receive the document information, and may provide a user interface (e.g., a web browser) that displays the document information to the user. As further shown, the user interface may include a mechanism (e.g., a button, a link, a browser plugin, and/or the like) which, when selected, may cause the client device to generate a request to identify insights in the document information (e.g., document insights). As further shown inFIG.1A, and by reference number110, if the user selects the mechanism, the client device may provide, to the insight platform, the document information and the request to identify the document insights. The insight platform may receive the document information and the request to identify the document insights. As shown inFIG.1B, and by reference number115, the insight platform may perform a natural language processing technique on the document information in order to generate a processed document. In some implementations, the natural language processing technique utilizes computing resources to analyze, understand, and derive meaning from the document information in a useful way. 
In some implementations, rather than treating the document information as a mere sequence of symbols, the natural language processing technique may consider a hierarchical structure of language (e.g., several words can be treated as a phrase, several phrases can be treated as a sentence, and the words, phrases, and/or sentences convey ideas that can be interpreted) in the document information. In some implementations, the natural language processing technique may analyze the document information in order to perform functions, such as automatic text summarization, sentiment analysis, topic extraction, named entity recognition, parts-of-speech tagging, relationship extraction, stemming, and/or the like. In some implementations, the natural language processing technique may convert the machine-readable and machine-storable form of the document information into a language form (e.g., the processed document) that includes recognizable words, phrases, sentences, and/or the like. As further shown inFIG.1B, and by reference number120, the insight platform may identify words, phrases, and sentences in the processed document based on the natural language processing technique. For example, the insight platform may identify words (e.g., “Movement,” “is,” “expressive,” “and,” etc.), phrases (e.g., “much smarter,” “The sun, always rises, in the East,” etc.), and sentences (e.g., “Movement is expressive, and we are much smarter than we were,” “The sun always rises in the East,” “Life is a canvas of many strokes, but cats hate dogs,” “The president made a statement today,” etc.) in the processed document. As shown inFIG.1C, and by reference number125, the insight platform may filter meaningless words, phrases, and/or sentences from the identified words, phrases, and sentences. 
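Before the filtering step is elaborated, the identification of words, phrases, and sentences described above (reference number 120) can be sketched with a simplified, regex-based tokenizer. A full implementation would use a proper NLP library; the splitting rules here are illustrative assumptions, not the platform's actual technique:

```python
import re

def process_document(text):
    """Split raw text into sentences, phrases, and words -- a simplified,
    regex-based stand-in for the natural language processing technique."""
    # Sentences: split on terminal punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Phrases: comma-delimited segments within each sentence (a rough proxy).
    phrases = [p.strip(" .!?") for s in sentences for p in s.split(",") if p.strip(" .!?")]
    # Words: alphabetic tokens, keeping internal apostrophes.
    words = re.findall(r"[A-Za-z']+", text)
    return {"sentences": sentences, "phrases": phrases, "words": words}

doc = ("Movement is expressive, and we are much smarter than we were. "
       "The sun always rises in the East.")
result = process_document(doc)
```

On the example text from FIG. 1B, this recovers two sentences, the comma-delimited phrases within them, and the individual words.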
In some implementations, the insight platform may determine whether a particular word, a particular phrase, or a particular sentence is relevant to remaining words, phrases, and sentences, and may filter (e.g., delete) the particular word, the particular phrase, or the particular sentence when the particular word, the particular phrase, or the particular sentence is not relevant to the remaining words, phrases, and sentences. For example, if a particular percentage of the words, phrases, and sentences relate to a particular concept or topic, the insight platform may determine whether the particular word, the particular phrase, or the particular sentence relates to the particular concept. The insight platform may delete the particular word, the particular phrase, or the particular sentence when the particular word, the particular phrase, or the particular sentence does not relate to the particular concept. As shown inFIG.1C, the insight platform may determine that the phrase “of many strokes” is a meaningless phrase, and may delete the phrase “of many strokes” from the identified words, phrases, and sentences. In some implementations, the insight platform may utilize a data cleansing method to filter meaningless words, phrases, and/or sentences from the identified words, phrases, and sentences. The data cleansing method may include detecting the meaningless words, phrases, and/or sentences, and then deleting the meaningless words, phrases, and/or sentences. The data cleansing method may detect and delete meaningless words, phrases, and/or sentences caused by user entry errors, by corruption in transmission or storage, by natural language processing errors, and/or the like. As further shown inFIG.1C, and by reference number130, the insight platform may utilize a machine learning model, with the identified words, phrases, and sentences, to identify concepts in the identified words, phrases, and sentences. 
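The relevance-based cleansing just described might look like the following sketch, where the concept vocabulary, the token normalization, and the overlap threshold are all illustrative assumptions:

```python
def filter_irrelevant(segments, concept_terms, threshold=1):
    """Drop segments that share fewer than `threshold` terms with the
    document's dominant concept vocabulary (a hypothetical cleansing rule)."""
    concept = {t.lower() for t in concept_terms}
    kept = []
    for seg in segments:
        tokens = {w.strip(".,!?").lower() for w in seg.split()}
        if len(tokens & concept) >= threshold:
            kept.append(seg)
    return kept

segments = ["Life is a canvas", "of many strokes", "cats hate dogs"]
concept_terms = ["life", "canvas", "cats", "dogs"]
kept = filter_irrelevant(segments, concept_terms)
```

As in FIG. 1C, the meaningless phrase "of many strokes" shares no terms with the concept vocabulary and is deleted, while the other segments are retained.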
In some implementations, the concepts may include one or more of opinions, facts, guesses, theories, satire, insights, and/or the like. The opinions may include views, judgments, or appraisals about a particular matter, beliefs stronger than impressions and less strong than positive knowledge, formal expressions of judgments or advice, and/or the like. The facts may include information about things that have actual existences, pieces of information having objective reality, and/or the like. The guesses may include opinions that are based on probability or that are formed in the absence of evidence, and/or the like. The theories may include a group of tested general propositions, commonly regarded as correct, that can be used as principles of explanation and prediction for a class of phenomena, proposed explanations whose status are still conjectural and subject to experimentation, and/or the like. The satire may include information in a document that uses humor, irony, exaggeration, ridicule, and/or the like to expose and criticize stupidity or vices, particularly in a context of contemporary politics and/or other topical issues. The insights may include any information in a document that provides links between concepts provided in the document, information in the document that provides an understanding of a true nature of the document, and/or the like. In some implementations, the machine learning model, used to identify the concepts, may include a supervised machine learning model (e.g., a decision tree learning model, a learning classifier systems model, a nearest neighbor model, a support vector machine model, and/or the like), an unsupervised machine learning model (e.g., a clustering model, a neural network model, a latent variable model, and/or the like), or a combination of the aforementioned, described elsewhere herein. 
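As one concrete instance of the model families listed above, a nearest-neighbor classifier over bag-of-words vectors could assign concept labels to segments. The labeled examples and label names here are hypothetical training data, not part of the disclosure:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(segment, labeled_examples):
    """Label a segment with the concept of its most similar labeled example
    (a nearest-neighbor model, one of the supervised options listed above)."""
    return max(labeled_examples, key=lambda ex: cosine(segment, ex[0]))[1]

examples = [
    ("statistics show that humans are smarter", "fact"),
    ("i believe cats are better than dogs", "opinion"),
    ("movement is expressive", "insight"),
]
label = classify("humans are much smarter than before", examples)
```

A production model would use richer features and far more training data; the point is only the shape of the classification step.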
As shown inFIG.1D, and by reference number135, the insight platform may utilize a machine learning model to analyze the identified insights, to identify abstract insights, concrete insights, and non-insights in the identified insights. For example, as shown inFIG.1D, the insight platform may determine that "Movement is expressive," "we are much smarter than we were," and "Life is a canvas" are abstract insights, that "The sun always rises in the East" and "cats hate dogs" are concrete insights, and that "The president made a statement today" is a non-insight. The abstract insights may include insights that are not based on realities, specific objects, or actual instances, insights that express qualities or characteristics apart from specific objects or instances, theoretical insights, and/or the like. The concrete insights may include insights that are based on realities, specific objects, or actual instances, insights pertaining to realities or actual instances, insights applied to actual substances or things, and/or the like. The non-insights may include one or more of the opinions, the facts, the guesses, or the theories described above. In some implementations, the machine learning model used to identify abstract insights, concrete insights, and non-insights may include a supervised machine learning model (e.g., a decision tree learning model, a learning classifier systems model, a nearest neighbor model, a support vector machine model, and/or the like), an unsupervised machine learning model (e.g., a clustering model, a neural network model, a latent variable model, and/or the like), or a combination of the aforementioned. As further shown inFIG.1D, and by reference number140, the insight platform may utilize a machine learning model to match the abstract insights with particular concrete insights. In some implementations, the particular concrete insights are different than the concrete insights determined from the insights identified in the processed document. 
In such implementations, the insight platform may be associated with a repository that includes concrete insights related to a variety of concepts. The insight platform may utilize the machine learning model to compare the abstract insights with the concrete insights provided in the repository, and to match the abstract insights with the particular concrete insights provided in the repository. For example, the insight platform may utilize the machine learning model to match an abstract insight (e.g., “we are much smarter than we were”) with a particular concrete insight (e.g., “Statistics show that humans are smarter”). In some implementations, the machine learning model used to match the abstract insights with the particular concrete insights may include a supervised machine learning model (e.g., a decision tree learning model, a learning classifier systems model, a nearest neighbor model, a support vector machine model, and/or the like), an unsupervised machine learning model (e.g., a clustering model, a neural network model, a latent variable model, and/or the like), or a combination of the aforementioned. As shown inFIG.1E, and by reference number145, the insight platform may utilize a machine learning model to determine particular insights based on the non-insights. In some implementations, the particular insights are different than the insights identified in the processed document. In such implementations, the insight platform may be associated with a repository that includes insights related to a variety of concepts. The insight platform may utilize the machine learning model to compare the non-insights with the insights provided in the repository, and to match the non-insights with the particular insights provided in the repository. For example, the insight platform may utilize the machine learning model to match a non-insight (e.g., “The president made a statement today”) with a particular insight (e.g., “Real leaders are confident”). 
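The repository matching described above, for both abstract insights and non-insights, can be sketched as a similarity search. Jaccard word overlap stands in here for the machine learning model, and the repository contents echo the figures' examples:

```python
def jaccard(a, b):
    """Word-set overlap between two insights, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def best_match(insight, repository):
    """Return the repository entry most similar to the given insight,
    or None when nothing overlaps at all (a hypothetical matcher)."""
    score, match = max((jaccard(insight, c), c) for c in repository)
    return match if score > 0 else None

repo = [
    "Statistics show that humans are smarter",
    "The sun always rises in the East",
]
paired = best_match("we are much smarter than we were", repo)
```

Here the abstract insight "we are much smarter than we were" is paired with the repository's concrete insight about human intelligence, mirroring the FIG. 1D example.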
In some implementations, the machine learning model used to determine the particular insights based on the non-insights may include a supervised machine learning model (e.g., a decision tree learning model, a learning classifier systems model, a nearest neighbor model, a support vector machine model, and/or the like), an unsupervised machine learning model (e.g., a clustering model, a neural network model, a latent variable model, and/or the like), or a combination of the aforementioned. As shown inFIG.1F, and by reference numbers150and155, the insight platform may create an insight document (e.g., a new document different than the original document) based on the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights. In some implementations, the insight platform may combine the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights in a particular manner in order to generate the insight document. For example, the insight platform may combine the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights in a manner such that the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights, that are related to a particular concept, are grouped together in the insight document. As further shown inFIG.1F, the insight document may include the concrete insights (e.g., “The sun always rises in the East” and “Cats hate dogs”), the abstract insights (e.g., “We are much smarter than we were”) matched with the particular concrete insights (e.g., “Statistics show that humans are smarter”), and the particular insights (e.g., “Real leaders are confident”) determined based on the non-insights. 
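The combination step that produces the insight document might be sketched as follows; the concept names used as group headings are hypothetical, and a real implementation would derive them from the concepts identified earlier:

```python
def build_insight_document(insights_by_concept):
    """Assemble a new insight document in which insights related to the
    same concept are grouped together (a plain-text sketch)."""
    lines = []
    for concept, insights in insights_by_concept.items():
        lines.append(concept.upper())
        lines.extend("  - " + insight for insight in insights)
    return "\n".join(lines)

insight_doc = build_insight_document({
    "nature": ["The sun always rises in the East"],
    "development": ["We are much smarter than we were",
                    "Statistics show that humans are smarter"],
})
```

The matched abstract/concrete pair lands in the same concept group, which is the grouping behavior described for the insight document.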
As shown inFIG.1G, and by reference number160, the insight platform may determine recommended documents based on the insight document. In some implementations, the insight platform may be associated with a document repository that includes documents related to a variety of concepts. In such implementations, the insight platform may compare the information provided in the insight document with the documents provided in the document repository, and may match the information provided in the insight document with particular documents (e.g., recommended documents) provided in the document repository. For example, the insight platform may match the information provided in the insight document (e.g., “The sun always rises in the East,” “Cats hate dogs,” “We are much smarter than we were,” and “Statistics show that humans are smarter”) with particular documents (e.g., “The Complete Sun Guide,” “Everything To Know About House Pets,” and “How Humans Have Developed,” respectively) provided in the document repository. As further shown inFIG.1G, and by reference number165, the insight platform may provide the insight document and information identifying the recommended documents to the client device. The client device may receive the insight document, and may provide the insight document for display to the user via a user interface. The client device may receive the information identifying the recommended documents, and may provide the information identifying the recommended documents for display to the user via the user interface providing the insight document, or via a separate user interface. In this way, the insight platform may utilize machine learning models to quickly identify and display insights in a document, which may improve speed and efficiency associated with identifying insights in the document and with reviewing the document, and may conserve computing resources (e.g., processors, memory, and/or the like). 
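The recommended-document matching described above could be sketched as a simple overlap ranking over the document repository; the repository entries and bodies below echo the FIG. 1G titles but are otherwise hypothetical:

```python
def recommend(insight_text, repository, top_k=3):
    """Rank repository documents by word overlap with the insight document
    and return the top-k matching titles (a hypothetical recommender)."""
    terms = set(insight_text.lower().split())
    def overlap(doc):
        return len(terms & set(doc["body"].lower().split()))
    ranked = sorted(repository, key=overlap, reverse=True)
    return [d["title"] for d in ranked[:top_k] if overlap(d) > 0]

repository = [
    {"title": "The Complete Sun Guide",
     "body": "everything about the sun and how it rises in the east"},
    {"title": "Everything To Know About House Pets",
     "body": "why cats and dogs behave the way they do"},
    {"title": "Tax Law Basics", "body": "how to file taxes"},
]
titles = recommend("the sun always rises in the east and cats hate dogs",
                   repository)
```

Documents with no overlap at all are excluded, so only the sun and pets guides are recommended for this insight document.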
Furthermore, implementations described herein use a computerized process to perform tasks or roles that were not previously performed or were previously performed using subjective human intuition or input. For example, prior solutions are unable to identify insights in a document so that a reader of the document can quickly and easily review the insights. Finally, utilizing machine learning models to quickly identify and display insights in a document conserves computing resources (e.g., processors, memory, and/or the like) that would otherwise be wasted in unsuccessfully attempting to identify insights in the document. As indicated above,FIGS.1A-1Gare provided merely as examples. Other examples are possible and may differ from what was described with regard toFIGS.1A-1G. FIG.2is a diagram of an example environment200in which systems and/or methods, described herein, may be implemented. As shown inFIG.2, environment200may include a client device210, an insight platform220, a network230, and a server device240. Devices of environment200may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections. Client device210includes one or more devices capable of receiving, generating, storing, processing, and/or providing information, such as information described herein. For example, client device210may include a mobile phone (e.g., a smart phone, a radiotelephone, etc.), a laptop computer, a tablet computer, a desktop computer, a handheld computer, a gaming device, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, etc.), or a similar type of device. In some implementations, client device210may receive information from and/or transmit information to insight platform220and/or server device240. Insight platform220includes one or more devices that utilize machine learning models to identify insights in a document (e.g., provided by server device240to client device210). 
In some implementations, insight platform220may be designed to be modular such that certain software components may be swapped in or out depending on a particular need. As such, insight platform220may be easily and/or quickly reconfigured for different uses. In some implementations, insight platform220may receive information from and/or transmit information to one or more client devices210and/or server devices240. In some implementations, as shown, insight platform220may be hosted in a cloud computing environment222. Notably, while implementations described herein describe insight platform220as being hosted in cloud computing environment222, in some implementations, insight platform220may not be cloud-based (i.e., may be implemented outside of a cloud computing environment) or may be partially cloud-based. Cloud computing environment222includes an environment that hosts insight platform220. Cloud computing environment222may provide computation, software, data access, storage, etc. services that do not require end-user knowledge of a physical location and configuration of system(s) and/or device(s) that hosts insight platform220. As shown, cloud computing environment222may include a group of computing resources224(referred to collectively as “computing resources224” and individually as “computing resource224”). Computing resource224includes one or more personal computers, workstation computers, server devices, or other types of computation and/or communication devices. In some implementations, computing resource224may host insight platform220. The cloud resources may include compute instances executing in computing resource224, storage devices provided in computing resource224, data transfer devices provided by computing resource224, etc. In some implementations, computing resource224may communicate with other computing resources224via wired connections, wireless connections, or a combination of wired and wireless connections. 
As further shown inFIG.2, computing resource224includes a group of cloud resources, such as one or more applications (“APPs”)224-1, one or more virtual machines (“VMs”)224-2, virtualized storage (“VSs”)224-3, one or more hypervisors (“HYPs”)224-4, and/or the like. Application224-1includes one or more software applications that may be provided to or accessed by client device210and/or server device240. Application224-1may eliminate a need to install and execute the software applications on client device210and/or server device240. For example, application224-1may include software associated with insight platform220and/or any other software capable of being provided via cloud computing environment222. In some implementations, one application224-1may send/receive information to/from one or more other applications224-1, via virtual machine224-2. Virtual machine224-2includes a software implementation of a machine (e.g., a computer) that executes programs like a physical machine. Virtual machine224-2may be either a system virtual machine or a process virtual machine, depending upon use and degree of correspondence to any real machine by virtual machine224-2. A system virtual machine may provide a complete system platform that supports execution of a complete operating system (“OS”). A process virtual machine may execute a single program, and may support a single process. In some implementations, virtual machine224-2may execute on behalf of a user (e.g., a user of client device210and/or server device240, or an operator of insight platform220), and may manage infrastructure of cloud computing environment222, such as data management, synchronization, or long-duration data transfers. Virtualized storage224-3includes one or more storage systems and/or one or more devices that use virtualization techniques within the storage systems or devices of computing resource224. 
In some implementations, within the context of a storage system, types of virtualizations may include block virtualization and file virtualization. Block virtualization may refer to abstraction (or separation) of logical storage from physical storage so that the storage system may be accessed without regard to physical storage or heterogeneous structure. The separation may permit administrators of the storage system flexibility in how the administrators manage storage for end users. File virtualization may eliminate dependencies between data accessed at a file level and a location where files are physically stored. This may enable optimization of storage use, server consolidation, and/or performance of non-disruptive file migrations. Hypervisor224-4may provide hardware virtualization techniques that allow multiple operating systems (e.g., “guest operating systems”) to execute concurrently on a host computer, such as computing resource224. Hypervisor224-4may present a virtual operating platform to the guest operating systems, and may manage the execution of the guest operating systems. Multiple instances of a variety of operating systems may share virtualized hardware resources. Network230includes one or more wired and/or wireless networks. For example, network230may include a cellular network (e.g., a fifth generation (5G) network, a long-term evolution (LTE) network, a third generation (3G) network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, and/or the like, and/or a combination of these or other types of networks. 
Server device240includes one or more devices capable of receiving, generating, storing, processing, and/or providing information, such as information described herein. For example, server device240may include a laptop computer, a tablet computer, a desktop computer, a server device, a group of server devices, or a similar type of device, that provides a social media application for access by client device210. In some implementations, server device240may receive information from and/or transmit information to client device210and/or insight platform220. The number and arrangement of devices and networks shown inFIG.2are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown inFIG.2. Furthermore, two or more devices shown inFIG.2may be implemented within a single device, or a single device shown inFIG.2may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment200may perform one or more functions described as being performed by another set of devices of environment200. FIG.3is a diagram of example components of a device300. Device300may correspond to client device210, insight platform220, computing resource224, and/or server device240. In some implementations, client device210, insight platform220, computing resource224, and/or server device240may include one or more devices300and/or one or more components of device300. As shown inFIG.3, device300may include a bus310, a processor320, a memory330, a storage component340, an input component350, an output component360, and a communication interface370. Bus310includes a component that permits communication among the components of device300. Processor320is implemented in hardware, firmware, or a combination of hardware and software.
Processor320is a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some implementations, processor320includes one or more processors capable of being programmed to perform a function. Memory330includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor320. Storage component340stores information and/or software related to the operation and use of device300. For example, storage component340may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, and/or a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. Input component350includes a component that permits device300to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component350may include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, and/or an actuator). Output component360includes a component that provides output information from device300(e.g., a display, a speaker, and/or one or more light-emitting diodes (LEDs)). 
Communication interface370includes a transceiver-like component (e.g., a transceiver and/or a separate receiver and transmitter) that enables device300to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface370may permit device300to receive information from another device and/or provide information to another device. For example, communication interface370may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like. Device300may perform one or more processes described herein. Device300may perform these processes based on processor320executing software instructions stored by a non-transitory computer-readable medium, such as memory330and/or storage component340. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory330and/or storage component340from another computer-readable medium or from another device via communication interface370. When executed, software instructions stored in memory330and/or storage component340may cause processor320to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The number and arrangement of components shown inFIG.3are provided as an example. 
In practice, device300may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.3. Additionally, or alternatively, a set of components (e.g., one or more components) of device300may perform one or more functions described as being performed by another set of components of device300. FIG.4is a flow chart of an example process400for utilizing machine learning models to identify insights in a document. In some implementations, one or more process blocks ofFIG.4may be performed by insight platform220. In some implementations, one or more process blocks ofFIG.4may be performed by another device or a group of devices separate from or including insight platform220, such as client device210and/or server device240. As shown inFIG.4, process400may include receiving document information and a request to identify insights in the document information (block410). For example, insight platform220(e.g., via computing resource224, processor320, memory330, and/or the like) may receive document information and a request to identify insights in the document information. In some implementations, server device240may provide document information to client device210. In some implementations, the document information may include information associated with a document, such as textual information provided in the document, information indicating locations of the document at server device240, information indicating folders storing information associated with the document in server device240, information indicating files associated with the document that are stored in server device240, and/or the like. In some implementations, client device210may receive the document information, and may provide a user interface that displays the document information to the user. In some implementations, the user may cause client device210to generate a request to identify insights in the document information.
In such implementations, client device210may provide, to insight platform220, the document information and the request to identify the document insights. Insight platform220may receive the document information and the request to identify the document insights. In some implementations, insight platform220may automatically determine document insights without receiving the request to identify the document insights. For example, insight platform220may automatically determine the document insights based on past behavior (e.g., the user regularly asks for insights when visiting particular web pages), based on user settings (e.g., the user may set up browser settings to indicate whether to provide document insights), based on a type of document (e.g., if the document is an electronic book or a chapter of an electronic book), based on a time of day (e.g., the user may specify a particular time of day when insights are to be provided), and/or the like. In some implementations, insight platform220may always automatically determine the document insights so that the document insights are ready instantly if the user wants the document insights. In such implementations, insight platform220may store the document insights in an insight data structure (e.g., a database, a table, a linked list, and/or the like) for future use. In some implementations, insight platform220may store the document information in a data structure (e.g., a database, a linked list, a tree, a table, and/or the like) associated with insight platform220. In this way, insight platform220may receive the document information and the request to identify the insights in the document information. As further shown inFIG.4, process400may include performing natural language processing on the document information to identify words, phrases, and sentences (block420). 
For example, insight platform220(via computing resource224, processor320, memory330, and/or the like) may perform natural language processing on the document information to identify words, phrases, and sentences in the document information. In some implementations, insight platform220may perform a natural language processing technique on the document information in order to generate a processed document. In some implementations, the natural language processing technique may convert the machine-readable and machine-storable form of the document information into a language form (e.g., the processed document) that includes recognizable words, phrases, sentences, and/or the like. In some implementations, the natural language processing technique may include one or more of lemmatization (e.g., determining a dictionary form of a word based on the word's meaning); morphological segmentation (e.g., separating words into individual grammatical units and identifying a class of the grammatical units); part-of-speech tagging (e.g., given a sentence, determining a part of speech for each word in the sentence, such as a noun, a verb, and/or the like); parsing (e.g., determining relationships between words in a sentence and performing a grammatical analysis of the sentence); sentence breaking (e.g., given a block of text, determining sentence boundaries in the block of text based on punctuation marks); stemming (e.g., reducing inflected or derived words to their word stem, base, or root form); word segmentation (e.g., separating a block of continuous text into individual words); terminology extraction (e.g., automatically extracting relevant terms from a corpus); lexical semantics (e.g., determining a computational meaning of individual words in context); machine translation (e.g., automatically translating text from one human language to another human language); named entity recognition (e.g., given a block of text, determining which items in the text map to proper names, such as
people or places, and determining a type of each proper name, such as a person, a location, an organization, and/or the like); natural language generation (e.g., converting information from machine-readable form into readable human language); natural language understanding (e.g., converting text into more formal representations, such as first-order logic structures that are easier for computer programs to manipulate); optical character recognition (e.g., determining corresponding text from an image representing printed text); question answering (e.g., determining an answer to a human-language question); recognizing textual entailment (e.g., given two text fragments, determining if one text fragment being true causes negation of the other text fragment or allows the other text fragment to be either true or false); relationship extraction (e.g., identifying relationships among named entities in text); sentiment analysis (e.g., extracting subjective information from documents to determine sentiments about specific subjects); topic segmentation and recognition (e.g., separating text into segments devoted to different topics, and identifying the topic of each segment); word sense disambiguation (e.g., selecting a meaning of a word that makes the most sense in context); coreference resolution (e.g., determining words that refer to the same objects); discourse analysis (e.g., identifying discourse structure of connected text); and/or the like. In some implementations, insight platform220may identify words, phrases, and sentences in the processed document based on the natural language processing technique.
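Two of the techniques listed above, sentence breaking and word segmentation, can be illustrated with a minimal sketch; the regular expressions here are simplifying assumptions, not the platform's actual implementation:

```python
import re

def break_sentences(text):
    # Sentence breaking: determine sentence boundaries based on
    # terminal punctuation marks followed by whitespace.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def segment_words(sentence):
    # Word segmentation: separate a block of continuous text into words.
    return re.findall(r"[A-Za-z']+", sentence)

text = "Math is an interesting field. It correlates with physics."
sentences = break_sentences(text)
words = [segment_words(s) for s in sentences]
```

A production system would instead rely on a trained tokenizer, since punctuation alone (abbreviations, decimals) is an unreliable sentence-boundary signal.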
For example, insight platform220may utilize morphological segmentation to separate words of the processed document into individual grammatical units and to identify a class of the grammatical units; part-of-speech tagging to determine a part of speech for each word in a sentence of the processed document; parsing to determine relationships between words in a sentence of the processed document and to perform a grammatical analysis of the sentence; sentence breaking to determine sentence boundaries in the processed document; word segmentation to separate a block of continuous text of the document into separate words; and/or the like. In this way, insight platform220may perform the natural language processing on the document information to identify the words, the phrases, and the sentences in the document information. As further shown inFIG.4, process400may include utilizing a machine learning model to analyze the words, the phrases, and the sentences to identify abstract insights, concrete insights, and non-insights (block430). For example, insight platform220(via computing resource224, processor320, memory330, and/or the like) may utilize a machine learning model to analyze the words, the phrases, and the sentences to identify abstract insights, concrete insights, and non-insights. In some implementations, the machine learning model used to identify the abstract insights, the concrete insights, and the non-insights may include one or more of a decision tree learning model, a learning classifier systems model, a nearest neighbor model, a support vector machine model, a clustering model, a neural network model, a latent variable model, and/or the like. A decision tree learning model may use a decision tree data structure to perform machine learning. A decision tree data structure classifies a population into branch-like segments that form an inverted tree with a root node, internal nodes, and leaf nodes. 
For example, the decision tree learning model may use a decision tree as a predictive model to map observations about an item (e.g., represented in the branches of the tree data structure) to conclusions about a target value of the item (e.g., represented in the leaves of the tree data structure). The process of building a decision tree may include partitioning the data set into subsets, shortening branches of the tree, and selecting a tree (e.g., the smallest tree) that fits the data. In some implementations, a decision tree model may be a classification tree (e.g., where the target variable can take a discrete set of values) in which leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Alternatively, a decision tree model may be a regression tree (e.g., where the target variable can take continuous values, such as real numbers). A learning classifier systems model may use learning classifier systems to perform machine learning. Learning classifier systems are a paradigm of rule-based machine learning methods that combine a discovery component (typically a genetic algorithm) with a learning component (e.g., performing either supervised learning, reinforcement learning, or unsupervised learning). Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to perform functions such as classification, regression, data mining, and/or the like. Learning classifier systems allow complex solution spaces to be broken up into smaller, simpler parts. A nearest neighbor model may use a k-nearest neighbors model to perform machine learning (e.g., pattern recognition).
A k-nearest neighbors model is a non-parametric method that may be used for classification (e.g., where the output is a class membership) in which an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors, or may be used for regression (e.g., where the output is a property value for the object) in which the value is the average of the values of its k nearest neighbors. Additionally, weights may be assigned to the contributions of the neighbors, so that the nearer neighbors contribute more to the average of the values than the more distant neighbors. A support vector machine model may use a support vector machine (also known as a support vector network) to perform machine learning. A support vector machine is a supervised learning model with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, a support vector machine training algorithm builds a model that assigns new examples to one category or the other. A support vector machine model represents examples as points in space that are mapped so that the examples of separate categories are divided by a clear gap. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall. A clustering model may use cluster analysis (also known as clustering) to perform machine learning. Cluster analysis is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar to each other than to objects in other groups. Cluster analysis can be achieved by various methods that differ significantly in their notion of what constitutes a cluster and how to efficiently find them. 
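Returning to the k-nearest neighbors model described above, the majority-vote classification can be sketched as follows (the two-dimensional feature points and their labels are illustrative):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    # Classify the query point by a majority vote among its
    # k nearest training points (Euclidean distance).
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "insight"), ((1, 0), "insight"), ((0, 1), "insight"),
         ((5, 5), "non-insight"), ((6, 5), "non-insight")]
label = knn_classify(train, (1, 1), k=3)
```

Weighting the votes by inverse distance, as described above, would let nearer neighbors contribute more than distant ones.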
Different cluster models may include connectivity models (e.g., where hierarchical clustering builds models based on distance connectivity), centroid models (e.g., where the k-means algorithm represents each cluster by a single mean vector), distribution models (e.g., where clusters are modeled using statistical distributions, such as multivariate normal distributions used by the expectation-maximization algorithm), density models (e.g., where clusters are defined as connected dense regions in the data space), and/or the like. A neural network model may use an artificial neural network to perform machine learning. An artificial neural network utilizes a collection of connected units or nodes called artificial neurons. Each connection between artificial neurons can transmit a signal from one artificial neuron to another artificial neuron. The artificial neuron that receives the signal can process the signal and then provide a signal to artificial neurons connected to it. In some artificial neural network implementations, the signal at a connection between artificial neurons may be a real number, and the output of each artificial neuron may be calculated by a non-linear function of the sum of its inputs. Artificial neurons and connections typically have a weight that adjusts as learning proceeds. The weight may increase or decrease the strength of the signal at a connection. Additionally, an artificial neuron may have a threshold such that the artificial neuron may send a signal if the aggregate signal satisfies the threshold. Artificial neurons may be organized in layers, and different layers may perform different kinds of transformations on their inputs. A latent variable model may use latent variables (e.g., variables that are inferred rather than directly observed) to perform machine learning. A latent variable model may infer the latent variables (e.g., through a mathematical model) from other variables that are observed (e.g., directly measured). 
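As a concrete illustration of the centroid model mentioned above, a one-dimensional k-means sketch can alternate point assignment and centroid updates (the data points and starting centroids are arbitrary):

```python
def kmeans_1d(points, centroids, iterations=10):
    # Centroid model: assign each point to its nearest centroid,
    # then move each centroid to the mean of its assigned points.
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centroids = [sum(ps) / len(ps) if ps else c for c, ps in clusters.items()]
    return sorted(centroids)

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centroids = kmeans_1d(points, [0.0, 10.0])
```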
In some cases, latent variables may correspond to aspects of physical reality that can be measured, but may not be measured for practical reasons. In such cases, latent variables may be referred to as hidden variables. In other cases, latent variables may correspond to abstract concepts, such as categories, behavioral or mental states, or data structures. In such cases, latent variables may be referred to as hypothetical variables or hypothetical constructs. In some implementations, the machine learning model may include a language streamlining function that maps each word in a sentence to a simpler, more common, or structured version of the word in a language streamlining table (e.g., a table created from a thesaurus and a dictionary). In some implementations, the machine learning model may include a relationship type identification function that maps each relationship located in a sentence to a relationship type from a relationship function type table (e.g., which may be provided in an insights table as a function type and a function). In some implementations, the machine learning model may include a target and source concept identification function that identifies objects in a sentence, and groups the objects by relationship, so that a long or complex sentence with many relationships may be organized by relationship. For example, a sentence "math is a very interesting field, filled with accidental discoveries and many mysteries yet to be solved, and strongly correlates with insights from physics" may be transformed into a set of relationships: "math and field," "math and discoveries," "math and mysteries," and "math and insights from physics." Each of these relationships may be a standalone insight or may fit into a more complex insight, but may be explored individually.
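The grouping performed by the target and source concept identification function, as in the "math" example above, can be sketched once the objects have been identified (object identification itself is assumed to have been done by the parsing step):

```python
def group_by_relationship(subject, objects):
    # Organize a long or complex sentence by relationship: pair the
    # subject with each object it relates to.
    return [f"{subject} and {obj}" for obj in objects]

# Objects as the parsing step might identify them in the example sentence.
pairs = group_by_relationship(
    "math", ["field", "discoveries", "mysteries", "insights from physics"])
```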
In some implementations, the machine learning model may include an insight aggregation function that determines if an insight in a sentence is a standalone insight or is to be grouped with other insights in the same sentence due to having dependencies between the insights in that sentence. An example of a sentence containing a complex insight may include "True cryptography involves only a private key, a public key, the information to be kept private, and an encryption algorithm, which would ideally be collision-free." Such a sentence may be divided into a first statement (e.g., cryptography involves a private key, a public key, private information, an encryption algorithm) and a second statement (e.g., the encryption algorithm must be collision-free). The insight aggregation function may determine that there is a dependency between the two statements; that the first statement is not complete without the second statement, because the second statement contains an essential requirement for a component of the object "cryptography" in the first statement. So rather than storing the two statements as two separate insights, the two statements may be stored as one insight (e.g., "cryptography depends on a private key, a public key, private information, and a collision-free encryption algorithm"). In some implementations, the machine learning model may include a find topics function that identifies relevant topics for a particular insight, such as math, chemistry, security, and/or the like. In some implementations, the machine learning model may include a map objects function that attempts to synchronize objects that appear in a pair of insights. When the map objects function identifies a match between two objects, the map objects function may attempt to match remaining unmatched objects.
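The dependency check made by the insight aggregation function might be sketched with a shared-term heuristic: when the second statement constrains an object of the first, the two are stored as one insight (the stopword list and joining format are assumptions):

```python
STOPWORDS = {"a", "an", "the", "and", "be", "must", "is", "of", "to"}

def content_words(statement):
    # Content words of a statement, ignoring punctuation and stopwords.
    return {w.strip(",.").lower() for w in statement.split()} - STOPWORDS

def aggregate(first, second):
    # Group the statements into one insight when they share an object;
    # otherwise treat the first statement as standalone.
    if content_words(first) & content_words(second):
        return f"{first}; {second}"
    return None

combined = aggregate(
    "cryptography involves a private key, a public key, private information, "
    "and an encryption algorithm",
    "the encryption algorithm must be collision-free")
```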
In some implementations, the machine learning model may include a find structural patterns function that identifies structural linguistic insight patterns, and returns a list of positive pattern matches for an insight. In some implementations, the machine learning model may include a find semantic patterns function that identifies semantic insight patterns, and returns a list of positive pattern matches for an insight. In some implementations, the machine learning model may include a fit to network function that determines a degree to which a new insight fits in an existing abstract network, or, if the new insight does not fit in the existing network, a degree to which the existing network may be distorted in order to fit the new insight. In some implementations, the machine learning model may include an abstractify function that takes in an abstract level and an insight as input, alters the insight to be at a specified abstraction level, and returns a new more abstract version of the insight, at the specified level of abstraction. The abstractify function may use the language streamlining table to determine a synonym of each word in an insight that is at the specified level of abstraction. In some implementations, the machine learning model may include a uniqueness function that graphs relationships between two objects as a vector, where a vector direction indicates a causal property of the relationship, and a shape of the vector indicates a type of relationship function linking the two objects (e.g., concepts). For example, the uniqueness function may graph an existing insight “security by obscurity is sub-optimal” as a link between objects “security” and “obscurity” in the graph, and a vector linking the objects would include a shape indicating a sub-optimal relationship function type. 
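The vector graphing performed by the uniqueness function can be sketched by representing each insight as labeled edges between concept nodes and checking for shared nodes, using the two insights of this running example; the "provides" relation label is an illustrative assumption:

```python
def nodes(edges):
    # Every concept appearing as a source or target of an edge.
    return {concept for source, target, _ in edges for concept in (source, target)}

# "Security by obscurity is sub-optimal" as a single labeled edge.
insight_a = [("obscurity", "security", "sub-optimal")]

# "Cryptography depends on a private key, a public key, private
# information, and a collision-free encryption algorithm."
insight_b = [("cryptography", concept, "depends-on")
             for concept in ("private key", "public key",
                             "private information", "encryption algorithm")]
insight_b.append(("cryptography", "security", "provides"))

# The two graphed insights share one node, so they can be compared.
common = nodes(insight_a) & nodes(insight_b)
```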
The uniqueness function may receive a new insight (e.g., possibly from an article explaining basic security concepts: "cryptography depends on a private key, a public key, private information, and a collision-free encryption algorithm"), and may map concepts (e.g., private key, public key, private information, and encryption algorithm) on the graph of objects. The uniqueness function may graph vectors between concepts (e.g., four vectors from each of the concepts, and another vector leading to "security"). The uniqueness function may provide two graphed insights (e.g., one for the "security by obscurity" insight, and another for the "cryptography" insight), and both insights may have one node in common (e.g., "security"). The uniqueness function may need to evaluate the two insights at the same abstraction level, but the "security by obscurity" insight is at a higher abstraction level than the "cryptography" insight: a "private key" is a concrete object, while "obscurity" is an abstract concept. The machine learning model may determine which insight is less abstract, using an abstract level function, may convert the less abstract insight into a more abstract version using the abstractify function, and may re-compare the two insights. The uniqueness function may thereby determine when a sentence is equivalent to another sentence that, at first, might seem very different. In some implementations, the machine learning model may include a newness function that evaluates whether an insight is new, as opposed to being considered common knowledge. In some implementations, the machine learning model may include a truth function that evaluates an insight for truth. The truth function may compare the insight to known insight structural and semantic patterns, known facts, known insights, and may determine whether the insight fits into a cohesive concept network, or whether the insight requires a drastic distortion of the network to fit in.
A drastic distortion may indicate that the insight is either not true or invalidates much of what was previously assumed to be true. The cohesive concept network is called an abstract network, and links all abstract concepts. The insight needs to fit into the concept network in such a way that it does not require drastically distorting existing interactions of the concept network, and does not contradict another known insight. In some implementations, the machine learning model may include an obviousness function that evaluates whether an insight is obvious, based on a significance of a time required to intuit the insight. An intuition time of an insight may be similar to computational complexity of an algorithm, and may be determined by identifying sub-insight logical steps to arrive at an insight conclusion, calculating a thinking time required for each sub-step, and summing the thinking times of the sub-steps to estimate the intuition time. In some implementations, the machine learning model may include an abstract level function that identifies an abstract level of a concept or insight based on a scale (e.g., of zero, indicating concrete, physical objects, to ten, indicating an abstract concept). In some implementations, the machine learning model may include a structural insight pattern identification function that identifies structural insight patterns in an insight. In some implementations, the machine learning model may include a semantic insight pattern identification function that identifies semantic insight patterns in an insight. In some implementations, the machine learning model may include an assessing insight probability function that returns a Boolean true or false value based on whether a sentence is an insight. In this way, insight platform220may utilize the machine learning model to analyze the words, the phrases, and the sentences to identify the abstract insights, the concrete insights, and the non-insights.
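The intuition-time estimate behind the obviousness function, summing the thinking times of the sub-insight logical steps, might be sketched as follows (the times and the obviousness threshold are illustrative):

```python
def intuition_time(sub_step_times):
    # Sum the thinking time of each sub-insight logical step,
    # analogous to the computational complexity of an algorithm.
    return sum(sub_step_times)

def is_obvious(sub_step_times, threshold=60.0):
    # An insight is obvious when its intuition time is insignificant.
    return intuition_time(sub_step_times) < threshold

obvious = is_obvious([5.0, 10.0])        # two quick logical steps
non_obvious = is_obvious([40.0, 35.0])   # two long chains of reasoning
```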
As further shown inFIG.4, process400may include utilizing a machine learning model to match the abstract insights with particular concrete insights (block440). For example, insight platform220(via computing resource224, processor320, memory330, and/or the like) may utilize a machine learning model to match the abstract insights with particular concrete insights. In some implementations, insight platform220may be associated with a repository that includes concrete insights related to a variety of concepts. Insight platform220may utilize the machine learning model to compare the abstract insights with the concrete insights provided in the repository, and to match the abstract insights with the particular concrete insights provided in the repository. In some implementations, the machine learning model, used to match the abstract insights with the particular concrete insights, may include one or more of a decision tree learning model, a learning classifier systems model, a nearest neighbor model, a support vector machine model, a clustering model, a neural network model, a latent variable model, and/or the like, described elsewhere herein. In this way, insight platform220may utilize the machine learning model to match the abstract insights with the particular concrete insights. As further shown inFIG.4, process400may include utilizing a machine learning model to determine particular insights based on the non-insights (block450). For example, insight platform220(via computing resource224, processor320, memory330, and/or the like) may utilize a machine learning model to determine particular insights based on the non-insights. In some implementations, insight platform220may be associated with a repository that includes insights related to a variety of concepts. 
The insight platform may utilize the machine learning model to compare the non-insights with the insights provided in the repository, and to match the non-insights with the particular insights provided in the repository. In some implementations, the machine learning model used to determine the particular insights based on the non-insights may include one or more of a decision tree learning model, a learning classifier systems model, a nearest neighbor model, a support vector machine model, a clustering model, a neural network model, a latent variable model, and/or the like, described elsewhere herein. In this way, insight platform220may utilize the machine learning model to determine the particular insights based on the non-insights. As further shown inFIG.4, process400may include generating an insight document that includes the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights (block460). For example, insight platform220(via computing resource224, processor320, memory330, and/or the like) may generate an insight document that includes the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights. In some implementations, insight platform220may create the insight document based on the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights. 
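One minimal way to sketch the matching steps above is a nearest-neighbor search over a similarity measure. Here plain word-overlap (Jaccard) similarity stands in for the nearest neighbor, clustering, and other models named in the text, and the repository contents are invented for illustration.

```python
# Sketch of matching an abstract insight (or non-insight) against repository
# insights, using word-overlap (Jaccard) similarity as a simple stand-in for
# the machine learning models named above. Repository contents are invented.

def jaccard(a, b):
    """Word-overlap similarity between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def match_insight(abstract_insight, repository):
    """Return the repository insight most similar to the given one."""
    return max(repository, key=lambda c: jaccard(abstract_insight, c))

repository = [
    "global GDP is directly related to tech progress",
    "a computer is like a brain",
]
```

With this measure, an abstract insight about technology and GDP would match the first repository entry rather than the second.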
In some implementations, insight platform220may combine the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights in a manner such that the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights that are related to a particular concept are grouped together in the insight document. Alternatively, or additionally, insight platform220may emphasize (e.g., via highlighting, via bold text, via italics text, via bold and italics text, via color coding, and/or the like) the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights in the processed document (e.g., as shown inFIG.1B) to generate an insight-emphasized document. In such implementations, since the particular concrete insights and the particular insights are not part of the processed document, insight platform220may add particular concrete insights and the particular insights to the processed document, and may emphasize the particular concrete insights and the particular insights. The insight-emphasized document may enable a reader of the insight-emphasized document to quickly and easily locate the insights in the original document. In some implementations, insight platform220may assign scores to the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights, based on likelihoods of being true. In such implementations, insight platform220may rank the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights, based on the scores, to generate a ranked list of insights, and may include the ranked list of insights in the insight document. 
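The scoring-and-ranking step described above can be sketched as a simple sort over (insight, score) pairs. The scores below are invented placeholders, since the text does not specify the scoring model.

```python
# Sketch of the ranked-list-of-insights step: each insight carries a
# likelihood-of-being-true score, and the insights are sorted by that score.
# The insights and scores here are illustrative only.

def rank_insights(scored_insights):
    """Sort (insight, score) pairs by descending likelihood of being true."""
    return [ins for ins, score in
            sorted(scored_insights, key=lambda p: p[1], reverse=True)]

scored = [
    ("global GDP is directly related to tech progress", 0.9),
    ("a computer is like a brain", 0.6),
    ("x could be a", 0.3),
]
```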
In this way, insight platform220may generate the insight document that includes the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights. As further shown inFIG.4, process400may include determining recommended documents based on the insight document, and providing the insight document and the recommended documents for display (block470). For example, insight platform220(via computing resource224, processor320, memory330, and/or the like) may determine recommended documents based on the insight document, and may provide the insight document and the recommended documents for display. In some implementations, insight platform220may be associated with a document repository that includes documents related to a variety of concepts. In such implementations, insight platform220may compare the information provided in the insight document with the documents provided in the document repository, and may match the information provided in the insight document with particular documents (e.g., recommended documents) provided in the document repository. In some implementations, insight platform220may provide the insight document (e.g., or the insight-emphasized document) and information identifying the recommended documents to client device210. Client device210may receive the insight document, and may provide the insight document for display to the user via a user interface. Client device210may receive the information identifying the recommended documents, and may provide the information identifying the recommended documents for display to the user via the user interface providing the insight document, or via a separate user interface. In this way, insight platform220may determine the recommended documents based on the insight document, and may provide the insight document and the recommended documents for display. 
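The document-recommendation step admits a similar sketch: score each repository document by word overlap with the insight document and return the best matches. The repository titles and the cutoff are assumptions, not part of the disclosed platform.

```python
# Sketch of recommending repository documents that best match the insight
# document, using shared-word counts as an illustrative similarity measure.

def recommend(insight_text, repository, top_n=2):
    """Return the top_n repository documents sharing the most words
    with the insight document."""
    iw = set(insight_text.lower().split())
    scored = sorted(repository,
                    key=lambda d: len(iw & set(d.lower().split())),
                    reverse=True)
    return scored[:top_n]

repository = [
    "a survey of tech progress and GDP",
    "cooking with seasonal vegetables",
    "brains and computers compared",
]
```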
In some implementations, before utilizing the machine learning models described herein, insight platform220may train the machine learning models. In such implementations, insight platform220may utilize historical information to train the machine learning models, and to generate trained machine learning models. For example, insight platform220may train a machine learning model by providing the historical information (e.g., training data) to the machine learning model, and receiving predictions (e.g., indicating whether a document includes concepts, abstract insights, concrete insights, non-insights, and/or the like) based on providing the historical information to the machine learning model. Based on the predictions, insight platform220may update the machine learning model, and may provide the historical information to the updated machine learning model. Insight platform220may repeat this process until correct predictions are generated by the machine learning model. In some implementations, the historical information may include information associated with concepts included in documents, abstract insights included in documents, concrete insights included in documents, non-insights included in documents, and/or the like. In some implementations, an insight may include a relationship function between two or more objects. The objects may include abstract concepts (e.g., finance, games, or truth), but may also include concrete concepts (e.g., real, easily measured and detailed objects, such as a dollar). For example, a relationship function may include a verb (e.g., "a computer is like a brain"), figurative language (e.g., metaphors, such as "a computer is like a brain"), and adjectives (e.g., "a computer is primarily like a brain but structurally similar to evolution"). In some implementations, an insight may be abstract (e.g., "the arc of the moral universe is long, but it bends toward justice") or concrete (e.g., "global GDP is directly related to tech progress").
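The train/predict/update loop described above (provide training data, receive predictions, update, repeat until the predictions are correct) can be sketched minimally as follows. A trivial threshold "model" keeps the loop runnable and is a stand-in for the actual machine learning models; the examples and learning rate are invented.

```python
# Sketch of the iterative training loop: predict on the historical examples,
# and keep updating the model until all predictions are correct. The
# one-parameter threshold model is an illustrative stand-in.

def train(examples, lr=0.1, max_iters=1000):
    """examples: list of (feature, label) with label in {0, 1}."""
    threshold = 0.0
    for _ in range(max_iters):
        wrong = [(x, y) for x, y in examples if (x > threshold) != bool(y)]
        if not wrong:
            break  # correct predictions generated: stop, as in the text
        # nudge the threshold toward fixing the first wrong example
        x, y = wrong[0]
        threshold += lr if y == 0 else -lr
    return threshold

examples = [(0.2, 0), (0.4, 0), (0.8, 1), (0.9, 1)]
threshold = train(examples)
```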
In some implementations, insights may include facts that are insightful because they reveal deeper rules that we can eventually use to derive facts (e.g., the insight “global GDP is directly related to tech progress” is a fact that reveals a deeper, more abstract and insightful rule, such as “when tech is prioritized first, cascading cross-domain solutions follow”). In some implementations, insights may include interactions between objects, but not every interaction between objects is an insight (e.g., saying “if you go over the edge of a waterfall, you'll be in danger” is an interaction between objects, but it is not insightful because it is obvious to everyone who understands gravity). In some implementations, an insight cannot be obvious, and an insight cannot be common knowledge. In some implementations, insights may be unique, and may advance human knowledge or technological progress in some way. In some implementations, insights may be declarative sentences (e.g., “x is definitely a”), but may be embedded in questions (“is x a?”) or speculative sentences (“x could be a”). In some implementations, an insight may follow certain structural patterns common to other insights since certain relationship types are newer and may appear more often in insights. In some implementations, an insight may relate to concepts that are not perfectly understood (e.g., many things about chemistry, physics, and math are unknown, so insights are likelier to relate to objects in these fields). In some implementations, insights may relate to complex concepts that appear more often in soft sciences, such as economics, law, and psychology. In some implementations, there may be insight trajectory patterns across different fields. 
In some implementations, insight platform220may automatically analyze published content (e.g., without being requested to analyze the published content) to identify insights, and may create an insight data structure (e.g., a database, a table, a linked list, and/or the like) with the identified insights. AlthoughFIG.4shows example blocks of process400, in some implementations, process400may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.4. Additionally, or alternatively, two or more of the blocks of process400may be performed in parallel. Some implementations described herein may provide an insight platform that utilizes machine learning models to identify insights in a document. For example, the insight platform may receive document information associated with a document, and may receive a request to identify insights in the document information. The insight platform may perform natural language processing on the document information to identify words, phrases, and sentences, and may utilize a first machine learning model with the words, the phrases, and the sentences to identify abstract insights, concrete insights, and non-insights. The insight platform may utilize a second machine learning model to match the abstract insights with particular concrete insights, and may utilize a third machine learning model to determine particular insights based on the non-insights. The insight platform may generate an insight document that includes the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights. The insight platform may determine recommended documents based on the insight document, and may provide the insight document and the recommended documents for display. 
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations. As used herein, the term component is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. Certain user interfaces have been described herein and/or shown in the figures. A user interface may include a graphical user interface, a non-graphical user interface, a text-based user interface, or the like. A user interface may provide information for display. In some implementations, a user may interact with the information, such as by providing input via an input component of a device that provides the user interface for display. In some implementations, a user interface may be configurable by a device and/or a user (e.g., a user may change the size of the user interface, information provided via the user interface, a position of information provided via the user interface, etc.). Additionally, or alternatively, a user interface may be pre-configured to a standard configuration, a specific configuration based on a type of device on which the user interface is displayed, and/or a set of configurations based on capabilities and/or specifications associated with a device on which the user interface is displayed. It will be apparent that systems and/or methods, described herein, may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. 
Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware may be designed to implement the systems and/or methods based on the description herein. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. | 72,660 |
11861478 | DESCRIPTION OF EMBODIMENTS The present disclosure is described below in further detail with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are merely used to explain the present disclosure, not to limit it. In addition, the embodiments provided below implement some embodiments of the present disclosure rather than all of them, and the technical solutions described in the embodiments of the present disclosure may be combined in any manner provided that there is no conflict. Before the present disclosure is described in further detail, the nouns and terms involved in the embodiments of the present disclosure are explained as follows:
1) Machine learning (ML): a process of analyzing samples in a training set to obtain a machine learning model (also briefly referred to as a parameter below) that can predict a target variable of a sample.
2) Supervised learning: a parameter of a model is adjusted based on a feature and a target variable of a sample in a training set, so that the model gains the performance of predicting the target variable based on the feature of the sample. The target variable may be qualitative (for example, a class) or quantitative (for example, continuous values).
3) Training set: a set of samples (also referred to as training samples) used to train a machine learning model in a supervised manner. A sample in a training set includes a feature (for example, features in multiple dimensions) and a target variable having a definite value, so that a machine learning model can find a rule for predicting the target variable based on the feature of the sample, and therefore gains the performance of predicting the value of the target variable based on the feature of the sample.
4) Gradient boosting (GB) method: also referred to as the gradient boosting decision tree (GBDT) method, a method of iterative training that linearly combines multiple weak classifiers (functions whose classification performance is insufficient to independently classify samples) into a strong classifier (a function whose classification performance is sufficient to independently classify samples). According to the gradient direction of the loss function of the model obtained after each iterative training, the model is updated by adding a function to the trained model, so that after each iterative training the predicted loss of the model decreases along the gradient direction.
5) Extreme gradient boosting (XGBoost) method: a C++ implementation of the gradient boosting decision tree method, in which multiple threads of processors such as a graphics processing unit (GPU) and a central processing unit (CPU) train a model in parallel, with algorithmic improvements to increase precision.
6) Overfitting: a model becomes excessively complex in order to precisely predict all samples.
7) Loss function: a non-negative real-valued function used to indicate the degree of inconsistency between the predicted result and the actual result of the target variable in a machine learning model; a smaller loss function value indicates better robustness of the machine learning model. The loss function includes representation forms such as the logistic loss function, the quadratic loss function, and the exponential loss function.
8) Compensation function: an evaluation of the residual formed after each iteration of a machine learning model, where a residual is the difference between a predicted value and an actual value of a target variable of a sample in the machine learning model.
9) Target function: used to constrain the process of training a model to obtain an ideal parameter. For example, the target function may be in the form of a sum of the loss function and the compensation function.
10) Gradient descent (GD): a method for solving a minimum value of a loss function along the gradient descent direction, including mini-batch gradient descent (MBGD), batch gradient descent (BGD), stochastic gradient descent (SGD), and the like.
11) First sample set: the set of samples in the training set whose target variables are incorrectly predicted.
12) Second sample set: the set of samples in the training set whose target variables are correctly predicted.
In a process of training, in a supervised manner, a machine learning model such as an extreme gradient boosting (XGBoost) model including multiple classifiers, some samples in the training set are persistently difficult to classify. For example, when a machine learning model used to classify high-quality customers and non-high-quality customers is trained, for samples whose classification correctness percentages are 50% or in a neighborhood of 50% (for example, 48% to 52%), the machine learning model classifies the samples as high-quality customers or non-high-quality customers at random. In other words, such samples are effectively not classified.
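The gradient descent method defined in item 10 above can be sketched for a simple quadratic loss L(w) = (w - 3)^2, whose gradient is 2(w - 3); the target value, learning rate, and step count are illustrative.

```python
# Gradient descent: repeatedly move the parameter against the gradient of the
# loss. For L(w) = (w - 3)^2 the minimizer is w = 3, so the iterates should
# converge there. The concrete loss and hyperparameters are assumptions.

def gradient_descent(grad, w0=0.0, lr=0.1, steps=100):
    """Move w against the gradient to minimize the loss."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_star = gradient_descent(lambda w: 2.0 * (w - 3.0))
```

Mini-batch, batch, and stochastic variants differ only in how many samples contribute to each gradient evaluation.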
Because of the randomness of the classification result, the predicted result for such a sample is unstable after each iterative training of the machine learning model. Another example is used for description.FIG.1is an optional schematic structural diagram of a machine learning model according to an embodiment of the present disclosure. The machine learning model includes multiple classifiers.FIG.1exemplarily shows that the machine learning model is trained in a supervised manner by linearly combining classifiers; that is, the multiple classifiers included in the machine learning model (a machine learning model including only two classifiers is certainly not excluded) are trained jointly. For example, a decision tree classifier such as a classification and regression tree (CART), a neural network, or a support vector machine (SVM) may be used as the classifier. Certainly, other types of classifiers are not excluded in this embodiment of the present disclosure. For example, the classifier uses an XGBoost model. In the solution of training the XGBoost model provided in this embodiment of the present disclosure, when a feature and a target variable of a sample in a training set are inputted to the XGBoost model, if the weights of all samples are identical, the predicted result of the target variable of some samples in the XGBoost model is random and unstable. For example, when the machine learning model is used to determine whether a user is a high-quality customer, it is difficult to classify some samples for various reasons (for example, because the features of the samples are insufficient, or the samples are sparsely distributed). In this case, the probability of classifying the user as a high-quality customer or a non-high-quality customer is 50% or in a neighborhood of 50%. This is equivalent to not classifying whether the user is a high-quality customer at all. Consequently, the prediction precision of the machine learning model cannot be ensured.
To resolve at least the foregoing problems, an embodiment of the present disclosure provides a machine learning model training method. When a machine learning model including multiple classifiers is trained, two weights are maintained for each sample in a training set: a first weight and a second weight. The first weight and the second weight of each sample are initialized. After the machine learning model is iteratively trained based on the initial second weight, a predicted loss is determined based on the first weight of each sample. A set of samples whose target variables are incorrectly predicted (that is, the first sample set) and a set of samples whose target variables are correctly predicted (that is, the second sample set) are determined based on the predicted loss of each sample in the training set. The weights of each sample in the first sample set and the second sample set are updated. After the update, the first weight of each sample in the first sample set is greater than the first weight of each sample in the second sample set, and the second weight of each sample in the first sample set is greater than the second weight of each sample in the second sample set. The machine learning model is then trained based on the updated second weight of each sample with reference to the feature and the target variable of the sample. In this embodiment of the present disclosure, the weights of samples that are incorrectly predicted are increased by using two weights, so that when the machine learning model is trained, more attention is paid to the samples whose target variables are incorrectly predicted, the problem that a predicted result of a target variable of a sample is random is resolved, and the prediction precision of the machine learning model is improved.
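The two-weight scheme just described can be sketched as follows: both weights start uniform (the first at 1/M, as in formula (1) below, the second at 1), and after an iteration both weights of incorrectly predicted samples are raised above those of correctly predicted samples. The multiplicative update factor is an assumption; the embodiment does not fix a particular update rule here.

```python
# Runnable sketch of maintaining a first weight and a second weight per sample
# and raising both for samples whose target variables were incorrectly
# predicted (the first sample set). The factor 2.0 is illustrative only.

def update_weights(first_w, second_w, wrong_idx, factor=2.0):
    """Raise both weights of incorrectly predicted samples."""
    for i in wrong_idx:
        first_w[i] *= factor
        second_w[i] *= factor
    return first_w, second_w

M = 4
first_w = [1.0 / M] * M   # initial first weight w1 = 1/M for every sample
second_w = [1.0] * M      # initial second weight w_xgb = 1 for every sample
first_w, second_w = update_weights(first_w, second_w, wrong_idx=[2])
```

After the update, sample 2 (incorrectly predicted) carries larger first and second weights than the correctly predicted samples, so subsequent training pays it more attention.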
The machine learning model training method is exemplarily described with reference toFIG.2.FIG.2is an optional schematic flowchart of a machine learning model training method according to an embodiment of the present disclosure. The method includes step101to step106. Step101. Initialize a first weight (marked as w1) and a second weight (marked as w_xgb1) of each sample in a training set. In an optional embodiment of the present disclosure, a sample in the training set includes a feature and a target variable, the feature includes multi-dimensional data of the sample, and the target variable is used to describe the sample in a qualitative or quantitative manner. A credit reporting service scenario is used as an example. The machine learning model may be used to predict whether a user is a high-quality customer, and the target variable may be used to indicate that the user is a high-quality customer or a non-high-quality customer. For example, a possibility degree at which the user is a high-quality customer may be indicated in a form of a grade or a confidence level. When a predicted grade or confidence level exceeds a threshold, it indicates that the user is a high-quality customer. The feature may include data of the user such as an income and an expenditure. A customer maintenance service scenario is used as an example. The machine learning model may be used to predict whether a user is a potential to-be-lost customer of a client, and the target variable may be used to indicate that the user is a potential to-be-lost customer or is not a potential to-be-lost customer. Similarly, a possibility degree at which the user is a potential to-be-lost customer may be indicated in a form of a grade or a confidence level. When a predicted grade or confidence level exceeds a threshold, it indicates that the user is a potential to-be-lost customer. 
The feature may include basic attributes of the user (for example, a gender, a region, and a preference), a client login state (a frequency and a time), and a message sending state on the client (a usage frequency and the like). In an embodiment, a prior first weight and a prior second weight are uniformly allocated to each sample in the training set, so that the initial first weights w1of the samples are the same, and the initial second weights w_xgb of the samples are also the same. For the values of the prior weights, the first weight may be uniformly allocated to each sample based on the quantity of samples in the training set, and the second weight may be uniformly allocated to each sample with a value different from that of the first weight. For example, assuming that the training set includes M samples, the first weight allocated to each sample in the training set is shown in formula (1):

$w_1 = 1/M$  (1).

The weight value of the second weight of each sample in the training set may be different from the weight value of the first weight. For example, the weight value of the second weight allocated to each sample in the training set may be 1. Step102. Input the second weight of each sample and the feature and the target variable of each sample in the training set to the classifiers included in the machine learning model to perform training. In an embodiment, the machine learning model may be iteratively trained multiple times based on the samples and the corresponding second weights. Referring toFIG.3, the machine learning model includes multiple classifiers. The multiple classifiers are base classifiers relative to the machine learning model, that is, basic determining units, and are marked as y1(x) to ym(x).
In this case, in an sth (s is an integer greater than or equal to 1) iterative training, the following operations are performed: the sample in the training set and the second weight w_xgb1 of the sample are inputted to each classifier, a minimum weighted error function (WEF) of each classifier is solved to obtain a fusion coefficient αm of the classifier, and the classifiers are combined based on the fusion coefficient of each classifier, to obtain the machine learning model after the sth iterative training. The model is shown in formula (2) and marked as:

$f_M(x) = \sum_m \alpha_m y_m(x)$  (2).

In an embodiment, because each classifier predicts a value of the target variable of the sample, the final predicted result outputted by the machine learning model is obtained by comprehensively performing determining based on the predicted result of each classifier. The confidence level of the predicted result of each classifier depends on the fusion coefficient of the classifier. Therefore, at the stage of training the machine learning model, to avoid the problem that the obtained fusion coefficient is not an optimal solution of the minimum weighted error function, the fusion coefficient of each classifier included in the machine learning model is solved by minimizing the quadratic sum of the predicted losses of the samples in the first sample set. The classifiers are then combined based on the solved fusion coefficients to form the trained machine learning model, thereby ensuring the precision of the machine learning model. The following describes a training process of the machine learning model by using an example in which the classifier used in the machine learning model is an XGBoost model based classifier.
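The linear combination of formula (2) can be sketched as a weighted vote of base classifiers. The toy classifiers and fusion coefficients below are invented; each base classifier returns +1 or -1, and the sign of the weighted sum gives the prediction.

```python
# Sketch of combining base classifiers y_1(x)..y_m(x) with fusion coefficients
# alpha_m, as in f_M(x) = sum_m alpha_m * y_m(x) (formula (2)). The base
# classifiers and coefficients here are illustrative stand-ins.

def combined_predict(classifiers, alphas, x):
    """Weighted vote of base classifiers; the sign of the sum is the prediction."""
    score = sum(a * clf(x) for clf, a in zip(classifiers, alphas))
    return 1 if score >= 0 else -1

classifiers = [
    lambda x: 1 if x > 0 else -1,   # stronger base classifier, larger coefficient
    lambda x: 1 if x > 5 else -1,   # weaker base classifier, smaller coefficient
]
alphas = [0.7, 0.3]
```

A larger fusion coefficient gives a classifier's vote more influence, matching the confidence-level role described above.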
It should be noted that if the machine learning model is trained by using another type of classifier, a person skilled in the art may easily perform implementation based on an understanding of the following without creative work. In the XGBoost method, training is performed in a supervised manner, and relates to three parts: an XGBoost model, a parameter, and a target function. The XGBoost model and the parameter are used to control how to predict the value of the target variable (including a classification result or a fitting value) based on the sample. The target function is used to constrain the process of training the model to obtain an ideal parameter. A smaller value of the target function indicates higher prediction precision of the XGBoost model. A process of training the XGBoost model is a process of enabling the value of the target function to become less than a particular value or to converge to a particular degree. The XGBoost model includes a classification and regression tree (CART) function (classification regression tree for short below). A classification tree and a regression tree are collectively referred to as the classification regression tree. When a classification problem is resolved, for example, when whether a user is a credible user or an incredible user (that is, a binary classification problem) is predicted, the classification tree is used. When a regression problem is resolved, for example, when a credit grade of a user is predicted, the regression tree is used. FIG.4is an optional schematic structural diagram of a classification tree. Each node in the classification tree indicates an attribute of a sample, each branch path indicates a possible value of the attribute, and each leaf node corresponds to a value (a class) of a sample indicated by the path from the root node to that leaf node.
Because a single classification regression tree is excessively simple to perform prediction (that is, to predict the value of the target variable of the sample) effectively, a tree ensemble (TE) is used in the XGBoost model. The tree ensemble may be considered as a linear combination of a series of classification and regression trees, and an optional example may be marked as the following formula (3):

$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in F$  (3).

Herein, $f_k$ is a classification and regression tree in F, and F is the set of classification and regression trees. The target function of the XGBoost model is shown in the following formula (4):

$Obj(\theta) = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k)$  (4).

Herein, $x_i$ indicates the feature of an ith sample. $\sum_{i=1}^{n} l(y_i, \hat{y}_i)$ is a loss function, indicating the degree of difference between the predicted value and the actual value of the target variable of a sample in the XGBoost model, and may be in a form of, for example, a quadratic loss function or an exponential loss function. $\sum_{k=1}^{K} \Omega(f_k)$ indicates the residual between the predicted value and the actual value of the target variable that is caused by randomness of a sample, and is also referred to as a regularization term. The regularization term may be in the form of a sum of the complexities of the classification and regression trees in the set, and is related to the quantity of leaf nodes and the values of the leaf nodes in each classification and regression tree. Because the parameters of the XGBoost model need to be solved in F, and the XGBoost model cannot be trained by using a traditional method such as stochastic gradient descent, a gradient boosting method is used in this embodiment of the present disclosure.
For example, a new compensation function is superimposed on the XGBoost model obtained after each iterative training, to compensate for a residual of the XGBoost model caused in the previous iterative training process, and the new model continues to be trained to minimize the target function. Expressions of the first to the t-th iterative training are described with reference to the XGBoost model. Before the first iterative training, the XGBoost model is indicated as the following formula (6):

ŷ_i^(0) = 0.  (6)

After the first iterative training, the XGBoost model is indicated as the following formula (7):

ŷ_i^(1) = f_1(x_i) = ŷ_i^(0) + f_1(x_i).  (7)

After the second iterative training, the XGBoost model is indicated as the following formula (8):

ŷ_i^(2) = Σ_{k=1}^{2} f_k(x_i) = ŷ_i^(1) + f_2(x_i).  (8)

By analogy, after the t-th iterative training, the XGBoost model is indicated as the following formula (9):

ŷ_i^(t) = Σ_{k=1}^{t} f_k(x_i) = ŷ_i^(t−1) + f_t(x_i).  (9)

With reference to the foregoing formulas, in the first iterative training, the compensation function f_1(x_i) is superimposed on the initial model ŷ_i^(0), and the new model ŷ_i^(1) obtained after compensation is iteratively trained for the second time. In the second iterative training, the compensation function f_2(x_i) is superimposed on the model ŷ_i^(1) obtained after the first iterative training, and the new model ŷ_i^(2) obtained after compensation is trained. In the t-th iterative training, a compensation function f_t(x_i) is superimposed on the model ŷ_i^(t−1) obtained after the (t−1)-th iterative training, and the new model ŷ_i^(t) obtained after compensation is trained. Therefore, after the t-th iterative training, the target function Obj^(t) may be indicated by the following formula (10):

Obj^(t) = Σ_{i=1}^{n} l(y_i, ŷ_i^(t)) + Σ_{i=1}^{t} Ω(f_i) = Σ_{i=1}^{n} l(y_i, ŷ_i^(t−1) + f_t(x_i)) + Ω(f_t) + constant,  (10)

where constant is a constant.
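The additive scheme of formulas (6) to (9) can be sketched as follows: the model starts at ŷ^(0) = 0, and each round superimposes a compensation function fitted to what the current model still gets wrong. Fitting each f_t as a single constant equal to the mean residual is a deliberate simplification for illustration; a real implementation would fit a regression tree at each round:

```python
def boost(xs, ys, rounds=3):
    """ŷ^(t) = ŷ^(t-1) + f_t(x); each f_t here is a constant equal to the
    mean residual, an illustrative stand-in for a fitted regression tree."""
    preds = [0.0 for _ in xs]                    # ŷ^(0) = 0, formula (6)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        step = sum(residuals) / len(residuals)   # the compensation function f_t
        preds = [p + step for p in preds]        # formula (9)
    return preds
```

With this constant-step choice, the first round drives the mean residual to zero and later rounds add nothing, so `boost([1, 2], [3.0, 5.0])` converges to the label mean for every sample.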
In the gradient boosting method, the function (the compensation function) f_t(x_i) added to the currently trained model to construct a new model is selected by using the following rule: the function f_t(x_i) is selected so that the target function is minimized. This is equivalent to minimizing the following formula (11):

Σ_{i=1}^{n} l(y_i, ŷ_i^(t−1) + f_t(x_i)).  (11)

Cases in which l takes different forms of loss function are described. 1) When l is a quadratic loss function, the target function may be indicated as formula (12):

Obj^(t) = Σ_{i=1}^{n} [2(ŷ_i^(t−1) − y_i)f_t(x_i) + f_t²(x_i)] + Ω(f_t) + constant.  (12)

Herein, (ŷ_i^(t−1) − y_i) is also referred to as a residual. 2) When l is another form of loss function, quadratic expansion is performed on the target Σ_{i=1}^{n} l(y_i, ŷ_i^(t−1) + f_t(x_i)) by using the Taylor formula, to obtain:

Obj^(t) ≈ Σ_{i=1}^{n} [l(y_i, ŷ_i^(t−1)) + g_i f_t(x_i) + ½ h_i f_t²(x_i)] + Ω(f_t) + constant,

where g_i = ∂_{ŷ^(t−1)} l(y_i, ŷ_i^(t−1)) and h_i = ∂²_{ŷ^(t−1)} l(y_i, ŷ_i^(t−1)). A uniform target function may be obtained, and is shown in formula (13):

Σ_{i=1}^{n} [g_i f_t(x_i) + ½ h_i f_t²(x_i)] + Ω(f_t).  (13)

It is not difficult to see that after the constant term is removed, the target function has a very obvious feature: the compensation function added to the model after each iterative training is determined based on the first derivative and the second derivative of the loss function. In the XGBoost method, quadratic Taylor expansion is performed on the target function, and the function added to the model after each iteration is determined by using the first derivative and the second derivative. A self-defined target function is supported, and a regularization term is added to the target function to control model complexity, so that the trained XGBoost based classifier is simpler and an overfitting phenomenon is avoided in the training process.
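For the quadratic loss l(y, ŷ) = (y − ŷ)², the derivatives that appear in formula (13) are g_i = 2(ŷ_i^(t−1) − y_i) and h_i = 2. The per-round objective Σ_i [g_i f_t(x_i) + ½ h_i f_t²(x_i)] + Ω(f_t) can then be sketched directly; the constant Ω is an assumption for illustration:

```python
def grad_hess_quadratic(y, y_prev):
    # l(y, ŷ) = (y − ŷ)²  →  g = ∂l/∂ŷ = 2(ŷ − y),  h = ∂²l/∂ŷ² = 2
    g = 2.0 * (y_prev - y)
    h = 2.0
    return g, h

def round_objective(ys, y_prevs, f_values, omega=0.0):
    # Σ_i [ g_i f_t(x_i) + ½ h_i f_t(x_i)² ] + Ω(f_t), per formula (13);
    # omega stands in for the regularization term Ω(f_t).
    total = 0.0
    for y, y_prev, f in zip(ys, y_prevs, f_values):
        g, h = grad_hess_quadratic(y, y_prev)
        total += g * f + 0.5 * h * f * f
    return total + omega
```

Ignoring Ω, each per-sample term g f + ½ h f² is minimized at f = −g/h = y − ŷ^(t−1), i.e., the compensation function wants to move the prediction by exactly the residual, which is the point the derivation above makes.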
Besides, in the XGBoost method, multi-threaded training is performed in parallel at the granularity of the features of the sample, thereby obviously reducing the time complexity of model training. For example, samples in the training set are classified based on features, one or more threads of a processor are allocated to each type of sample, and each thread trains the machine learning model by using samples having a same feature. The parallel multi-thread manner obviously improves machine learning model training efficiency. Step 103. Determine a first sample set (marked as gt) in which a corresponding target variable is incorrectly predicted (that is, the samples in the first sample set are those whose corresponding target variables are incorrectly predicted), and a second sample set (marked as le) in which a corresponding target variable is correctly predicted (that is, the samples in the second sample set are those whose corresponding target variables are correctly predicted), based on a predicted loss of each sample in the training set. In an embodiment, the predicted loss of each sample in the training set is determined based on the loss function of the machine learning model. For example, the predicted loss of each sample is determined in the following manner: based on a difference ŷ − y between a predicted value ŷ and an actual value y of each sample in the machine learning model, it is determined that an output value of a loss function f(ŷ − y) that uses the difference ŷ − y as a dependent variable is the predicted loss of the corresponding sample. f(ŷ − y) may be a function in any form, including an exponent form, a logarithm form, and the like. A logarithm form shown in formula (14) may be used:

loss = log[1 + abs((ŷ − y)/y)],  (14)

where abs is an absolute value operator.
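Formula (14) can be transcribed directly; the sample values used below are invented for illustration:

```python
import math

def predicted_loss(y_hat, y):
    # loss = log[1 + abs((ŷ − y) / y)], per formula (14)
    return math.log(1.0 + abs((y_hat - y) / y))
```

A perfect prediction (ŷ = y) yields a loss of exactly 0, and the loss grows with the relative error, so comparing it against the loss threshold phi of Step 103 separates well-predicted samples from poorly predicted ones.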
In an embodiment, the first sample set and the second sample set are determined by using a result of comparing the predicted loss of each sample with a loss threshold phi. Samples in the training set whose predicted losses exceed the loss threshold form the first sample set gt, and samples whose predicted losses do not exceed the loss threshold form the second sample set le. Step 104. Determine an overall predicted loss of the first sample set gt based on a predicted loss of each sample in the first sample set and the corresponding first weight. A loss of each sample in the first sample set is determined based on the loss function. A sum of the losses is marked as Σ_gt loss. The overall predicted loss ξ1 of the first sample set is obtained by performing adjustment such as multiplication on the sum of the predicted losses of the samples by using the first weight, as shown in formula (15):

ξ1 = w1 Σ_gt loss.  (15)

In some embodiments, because the loss function is indicated by parameters distributed within a value range of 0 to 1, the initial first weight w1 is a value having a negative correlation with the quantity of samples in the training set, for example, w1 = 1/m. Therefore, the value of the overall predicted loss of the first sample set is less than 1. In some embodiments, the loss function of each sample is indicated by parameters distributed within a value range of 0 to 1; the first weight of each sample is regularized at the end of each iteration (e.g., to ensure all first weights add up to 1) and is also a value between 0 and 1. Thus, the value of the predicted loss of each sample in the first sample set is less than 1. Step 105.
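The split of Step 103 and the overall predicted loss of formula (15) can be sketched together as follows; the threshold and the per-sample losses are invented values, and samples are identified by index:

```python
def split_by_threshold(losses, phi):
    """Step 103: gt = indices of samples whose predicted loss exceeds phi,
    le = indices of the remaining samples."""
    gt = [i for i, loss in enumerate(losses) if loss > phi]
    le = [i for i, loss in enumerate(losses) if loss <= phi]
    return gt, le

def overall_loss(losses, gt, w1):
    # ξ1 = w1 · Σ_gt loss, per formula (15), with a uniform first weight w1 = 1/m
    return w1 * sum(losses[i] for i in gt)
```

For example, with losses `[0.9, 0.1, 0.6, 0.2]` and phi = 0.5, the first two calls place samples 0 and 2 into gt, and with w1 = 1/4 the overall loss is 0.25 × 1.5 = 0.375, comfortably below 1 as the text notes.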
Update the first weight and the second weight of each sample in the first sample set based on the overall predicted loss of the first sample set, where the first weight and the second weight of each sample in the first sample set are correspondingly greater than the first weight and the second weight of each sample in the second sample set. That is, the first weight of each sample in the first sample set is greater than the first weight of each sample in the second sample set, and the second weight of each sample in the first sample set is greater than the second weight of each sample in the second sample set. A weight update factor β1 less than 1 is constructed by using the overall predicted loss of the first sample set. It may be understood that the weight update factor β1 may be constructed in a form such as β1 = ξ1² or β1 = ξ1. Exemplarily, the first weight of each sample in the first sample set is relatively increased in this manner based on the weight update factor: 1) The updated first weight w_le_phi2 of each sample in the second sample set le is obtained by decreasing the original first weight w1 by using the weight update factor β1. That is, the product of the weight update factor and the original first weight w1 is used as the updated first weight, as shown in formula (16):

w_le_phi2 = β1 · w1.  (16)

Besides, the first weight of each sample in the first sample set gt keeps unchanged before and after the update, and is consistent with the value of the first weight existing when the machine learning model is iteratively trained for the first time. The updated first weight w_gt_phi2 is shown in formula (17):

w_gt_phi2 = w1 = 1/m.  (17)
Because the updated first weight w_le_phi2 of each sample in the second sample set le is obtained by decreasing the original first weight using the weight update factor β1, although the value of the first weight w_gt_phi2 of each sample in the first sample set gt is not directly increased, its weight value is increased relative to the first weight w_le_phi2 of each sample in the second sample set le. It should be noted that to ensure that the value of the first weight is not excessively small in subsequent iterative training (for example, the third iterative training or the fourth iterative training), normalization processing may be performed by using the maximum of the first weights w_gt_phi2 and w_le_phi2 as a reference. Besides, the second weight of each sample in the first sample set is increased in this manner based on the weight update factor: 2) The updated second weight w_xgb_gt_phi2 of each sample in the first sample set gt is obtained by increasing the original second weight w_xgb by using the weight update factor β1. That is, the quotient of the original second weight w_xgb and the weight update factor is used as the updated second weight, as shown in formula (18):

w_xgb_gt_phi2 = w_xgb · (1/β1).  (18)

Besides, the second weight of each sample in the second sample set le keeps unchanged before and after the update, and is consistent with the value of the second weight existing when the machine learning model is iteratively trained for the first time. The updated second weight w_xgb_le_phi2 is shown in formula (19):

w_xgb_le_phi2 = w_xgb.  (19)

Because the second weight of each sample in the first sample set gt is increased by using the weight update factor β1, and the second weight of each sample in the second sample set le keeps unchanged before and after the update, the weight value of the second weight of each sample in the first sample set gt is relatively increased. Step 106.
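The updates of formulas (16) to (19) can be sketched in one function; β1 = ξ1² is chosen here from the two options mentioned above, and the numeric inputs in the usage example are invented:

```python
def update_weights(xi1, w1, w_xgb):
    """One round of the updates in formulas (16)-(19).
    Returns (w_le_phi2, w_gt_phi2, w_xgb_le_phi2, w_xgb_gt_phi2)."""
    beta1 = xi1 ** 2                  # weight update factor; β1 < 1 when ξ1 < 1
    w_le_phi2 = beta1 * w1            # first weight decreased in le,  formula (16)
    w_gt_phi2 = w1                    # first weight unchanged in gt,  formula (17)
    w_xgb_gt_phi2 = w_xgb / beta1     # second weight increased in gt, formula (18)
    w_xgb_le_phi2 = w_xgb             # second weight unchanged in le, formula (19)
    return w_le_phi2, w_gt_phi2, w_xgb_le_phi2, w_xgb_gt_phi2
```

With ξ1 = 0.5 and both initial weights at 0.1, the first weight of le shrinks to 0.025 and the second weight of gt grows to 0.4, so both weights of gt end up relatively larger than those of le, exactly as the text argues.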
Input the updated second weight of each sample and the feature and the target variable of each sample in the training set to the classifier included in the machine learning model to perform training. It may be understood that based on the samples and the updated corresponding second weights of the samples, the machine learning model may be iteratively trained multiple times. Still referring to FIG. 2, the machine learning model includes the multiple classifiers y1(x) to ym(x). In the s-th (s is an integer greater than or equal to 1) iterative training, the following operations are performed: inputting the first sample set and the second weight w_xgb_gt_phi2 of the first sample set, and the second sample set and the second weight w_xgb_le_phi2 of the second sample set, to each classifier; solving a fusion coefficient αm of each classifier by minimizing a weight error function of the classifier; and combining the classifiers based on the fusion coefficients of the classifiers, to finally obtain, by training, the new machine learning model shown in formula (2). In the iterative training process, it should be noted that a difference between the (s+1)-th training process and the s-th training process is that the to-be-trained machine learning model in the (s+1)-th training and the machine learning model obtained after the s-th training have the following relationship: the machine learning model obtained after the (s+1)-th training = the machine learning model obtained after the s-th training + the compensation function. Therefore, exemplarily, if the compensation function is constructed by using a second-order derivation result of the loss function of the machine learning model obtained after the s-th training, the prediction error of the machine learning model obtained after the s-th training may converge along the gradient direction of the loss function, so that the prediction error of the machine learning model is minimized and prediction precision is improved.
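The combination step above solves a fusion coefficient αm per classifier and then combines the classifiers. As a hedged sketch only: the AdaBoost-style closed form αm = ½ ln((1 − εm)/εm) is assumed here for the coefficient, and the combination is a sign-weighted vote; both are illustrative stand-ins, not necessarily the exact weight error function or formula (2) of this embodiment:

```python
import math

def fusion_coefficient(weighted_error):
    # A common closed form when minimizing an exponential weighted error:
    # α_m = ½ ln((1 − ε_m) / ε_m). This specific form is an assumption here;
    # smaller weighted error yields a larger coefficient.
    return 0.5 * math.log((1.0 - weighted_error) / weighted_error)

def combined_predict(classifiers, alphas, x):
    # Combine classifiers by a fusion-weighted vote; labels are assumed ±1.
    score = sum(a * clf(x) for clf, a in zip(classifiers, alphas))
    return 1 if score >= 0 else -1
```

The intended effect matches the text: a classifier with a smaller weighted error receives a larger fusion coefficient and therefore more influence on the combined prediction.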
Besides, because the value of the second weight of each sample in the first sample set is increased, compared with inputting samples with equal weights to the machine learning model, in the process of training the machine learning model, more attention is paid to the calculation of fusion coefficients for the samples in the first sample set, so that the trained machine learning model has better performance in predicting the value of the target variable for the first sample set. In an optional embodiment of the present disclosure, when the machine learning model uses an XGBoost model as a classifier, the XGBoost model supports parallel training on samples at the granularity of features. For example, one or more threads are allocated to samples having a same feature, and a multi-threaded processor is used to perform the training in a hardware implementation. In this way, samples having different features (classes) may be used in parallel to train the machine learning model, thereby obviously reducing the training time of the machine learning model and improving machine learning model training efficiency. It should be noted that step 103 to step 106 may be performed multiple times, to determine a new first sample set gt in which the target variable of each sample is incorrectly predicted, and a new second sample set le in which the target variable of each sample is correctly predicted. The first weight and the second weight are iteratively updated, the new first sample set gt and the updated second weight of the new first sample set are inputted to the machine learning model, and the machine learning model is trained again. Certainly, iterative training may be performed multiple times. Herein, repeated execution of step 103 to step 106 for the (t+1)-th time is used as an example.
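The feature-granularity parallelism described above (one or more threads allocated to the samples sharing a feature) can be sketched with a thread pool. The grouping key and the per-group "training" statistic below are invented for illustration; a real implementation would run per-feature split finding, not a label average:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def train_by_feature(samples):
    """Group (feature_value, label) samples by feature value and process each
    group in its own worker thread; the per-group step just averages labels."""
    groups = defaultdict(list)
    for feature_value, label in samples:
        groups[feature_value].append(label)

    def fit_group(labels):
        return sum(labels) / len(labels)

    with ThreadPoolExecutor() as pool:
        futures = {fv: pool.submit(fit_group, labels)
                   for fv, labels in groups.items()}
    return {fv: fut.result() for fv, fut in futures.items()}
```

Because the groups are disjoint, the workers share no mutable state, which is what makes this decomposition safe to parallelize.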
The first sample set including one or more samples whose target variables are incorrectly predicted, determined when step 103 is performed for the t-th time, is gt, and the second sample set including one or more samples whose target variables are correctly predicted is le. Because in step 103 to step 106, the samples in the first sample set (whose second weights are increased) are already preferentially used to perform iterative training, when step 103 is performed again, the quantity of samples in the re-determined first sample set gt decreases (because the target variables of some samples in the original first sample set gt are already correctly predicted). The sum of the losses of the samples in the first sample set is marked as Σ_gt loss, and the overall predicted loss ξt of the first sample set is shown in formula (20):

ξt = w_gt_phit Σ_gt loss.  (20)

In some embodiments, formula (20) is suitable at iterations where the first weight of each sample in the first sample set is the same, e.g., at the first iteration w1 = 1/m. In some embodiments, the overall predicted loss can be the sum over the first sample set of the products of the loss of each sample and the first weight of that sample, i.e., ξt = Σ_gt (loss · w_gt_phit). Let βt = ξt². The first weight w_gt_phit+1 of the first sample set and the first weight w_le_phit+1 of the second sample set are updated by using the weight update factor βt, as shown in formula (21) and formula (22):

w_le_phit+1 = w_le_phit · βt;  (21)

w_gt_phit+1 = w_gt_phit.  (22)

Because βt is less than 1, although the first weight of each sample in the first sample set does not change, an increasing effect is achieved compared with the first weight of the second sample set.
Besides, to prevent the value of the first weight from decreasing excessively (and the value of the second weight from increasing excessively), normalization processing is performed on the first weight of each sample in the first sample set and the first weight of each sample in the second sample set. As shown in formula (23), the normalized w_le_phit+1 is indicated as:

w_le_phit+1 = (w_le_phit · βt) / [(w_le_phit · βt)² + (w_gt_phit)²].  (23)

As shown in formula (24), the normalized w_gt_phit+1 is indicated as:

w_gt_phit+1 = w_gt_phit / [(w_le_phit · βt)² + (w_gt_phit)²].  (24)

Besides, the second weight of the first sample set and the second weight of the second sample set are updated in the manner shown in formula (25) and formula (26):

w_xgb_le_phit+1 = w_xgb_le_phit;  (25)

w_xgb_gt_phit+1 = w_xgb_gt_phit · (1/βt).  (26)

Because 1/βt is greater than 1, the second weight of the first sample set is increased while the second weight of the second sample set does not change, so the second weight of the first sample set is relatively increased. When the quantity of times the first weight and the second weight of the first sample set are iteratively updated reaches a specified value, or the overall predicted loss of the first sample set is less than a pre-determined value, the machine learning model has the performance of precisely predicting samples whose prediction correctness percentage of the target variable is 50% or in a neighborhood thereof (for example, 48% to 52%). FIG. 5 is an optional schematic diagram of a classification result of samples in a training set in a process of iteratively updating the first weight and the second weight multiple times (the quantity of times is indicated by t) in an embodiment of the present disclosure. A solid line indicates a model currently obtained by training (the model is obtained by iteratively updating the second weight and training on the samples over the previous t rounds), and a dotted line indicates the current machine learning model.
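The normalization of formulas (23) and (24) can be sketched as follows, with the shared denominator taken literally as the printed sum of squares (a square-root variant would be an equally plausible reading of the original):

```python
def normalize_first_weights(w_le, w_gt, beta):
    # Formulas (23) and (24): scale both first weights by the same denominator
    # so that neither weight becomes vanishingly small across iterations.
    # The denominator (w_le·β)² + (w_gt)² is taken as printed; whether a
    # square root is intended is an open assumption.
    denom = (w_le * beta) ** 2 + w_gt ** 2
    return (w_le * beta) / denom, w_gt / denom
```

Because both weights are divided by the same quantity, their ratio, and hence the relative emphasis on the first sample set, is preserved by the normalization.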
In each iterative training, because the second weight in the first sample set is greater than the second weight in the second sample set, the machine learning model preferentially trains the samples in the first sample set. A point inFIG.5indicates a sample, and a larger size of the point indicates a higher second weight of the sample. After the first weight and the second weight are iteratively updated for multiple times and the machine learning model is trained, the obtained machine learning model can already distinguish different types of samples. The embodiments of the present disclosure provide the machine learning model training method and a machine learning model training apparatus. In an actual application, the machine learning model training apparatus may be implemented as various types of terminal devices or implemented as a server, and trains a machine learning model and performs classification according to an actual application requirement, for example, is configured to evaluate whether a user is a user having good credit or a potential to-be-lost user of a client, or the like. Functional modules of the machine learning model training apparatus may be implemented in coordination by using hardware resources of various types of devices (for example, a terminal device, a server, or a server cluster), such as a computing resource and a communication resource (for example, used to support various manners of communication such as cable and cellular communication) of a processor. An embodiment of the present disclosure further provides a machine learning model training apparatus, including: a memory, configured to store an executable program; and a processor, configured to perform the machine learning model training method by executing the executable program stored in the memory. The following provides an exemplary description with reference toFIG.6A. 
FIG. 6A exemplarily shows an optional schematic structural diagram of the software and hardware of a machine learning model training apparatus 10. The machine learning model training apparatus 10 includes a hardware layer, an intermediate layer, an operating system layer, and a software layer. However, a person skilled in the art shall understand that the structure of the machine learning model training apparatus 10 shown in FIG. 6A is only an example, and does not limit the structure of the machine learning model training apparatus 10. For example, the machine learning model training apparatus 10 may be provided with more components than those shown in FIG. 6A according to an implementation requirement, or some components may be omitted according to an implementation requirement. The hardware layer of the machine learning model training apparatus 10 includes a processor 11, an input/output interface 13, a memory 14, and a communication interface 12. The components may connect to and communicate with each other by using a system bus. The processor 11 may be implemented by using a central processing unit (CPU), a microcontroller unit (MCU), an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA). The input/output interface 13 may be implemented by using input/output devices, for example, a display screen, a touchscreen, and a speaker. The memory 14 may be implemented by using a non-volatile memory such as a flash memory, a hard disk, or an optical disc, or may be implemented by using a volatile memory such as a double data rate (DDR) dynamic cache. The non-volatile memory may be a read-only memory (ROM) or a programmable read-only memory (PROM), which stores an executable instruction used to perform the machine learning model training method. In this embodiment of the present disclosure, the memory 14 is configured to store various types of application programs and operating systems to support operations of the machine learning model training apparatus 10.
The machine learning model training method disclosed in the embodiments of the present disclosure may be applied to the processor 11 or performed by the processor 11. The processor 11 may be an integrated circuit chip having a signal processing capability. In an implementation process, steps of the foregoing method may be performed by a hardware integrated logic circuit in the processor 11 or by an instruction in a form of software. The processor 11 may be a general-purpose processor, a digital signal processor (DSP), another programmable logical device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 11 may implement or execute the methods, the steps, and the logical block diagrams provided in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor, any conventional processor, or the like. A software module may be located in a storage medium, the storage medium is located in the memory, and the processor 11 reads information in the memory and performs the steps of the foregoing method in combination with its hardware. Exemplarily, the memory 14 and other components of the machine learning model training apparatus 10 may be centrally disposed, or may be disposed in a distributed manner relative to the other components of the machine learning model training apparatus 10. The communication interface 12 provides the processor 11 with access to external data, for example, access to a memory 14 disposed at a different place. Exemplarily, the communication interface 12 may perform communication in a wired manner (for example, over an optical cable or a cable), and is configured to receive samples for training the machine learning model. Certainly, the communication interface 12 may also receive samples in a short-distance communication manner based on a near field communication (NFC) technology, a Bluetooth technology, or a ZigBee technology.
Besides, the communication interface 12 may further receive samples in a communication manner of a communication standard such as Code Division Multiple Access (CDMA) or Wideband Code Division Multiple Access (WCDMA), or an evolved standard thereof. The drive layer includes an intermediate component 15 configured to enable an operating system 16 to identify the hardware layer and communicate with each component of the hardware layer, and may be, for example, a set of drive programs for the components of the hardware layer. The operating system 16 is configured to provide a graphical user interface, which, for example, includes a plug-in icon, a desktop background, and an application icon. The operating system 16 supports a user in controlling a device by using the graphical interface. In this embodiment of the present disclosure, a software environment of the device such as an operating system type or version is not limited. For example, the operating system 16 may be the Linux operating system, the UNIX operating system, or another operating system. The application layer includes applications run by a terminal on a user side. For example, a model training application 17 runs on the application layer, to perform the machine learning model training method provided in the embodiments of the present disclosure. An embodiment of the present disclosure further provides a server, exemplarily shown in FIG. 6B. The server 30 shown in FIG. 6B includes: a processor 31, a memory 32, and a communication interface 33. The components of the server 30 are coupled by using a bus system 34. It should be understood that the bus system 34 is configured to implement connection and communication between the components. The bus system 34 further includes a power supply bus, a control bus, and a status signal bus in addition to a data bus. However, for the purpose of description clarity, the various buses are all marked as the bus system 34 in FIG. 6B.
The components shown in FIG. 6B are only an example, do not indicate a quantity, may be disposed in a distributed manner in physical locations, and are connected by using the bus system 34 (for example, a cable or an optical fiber) to become a logical whole. In this case, the bus system 34 may implement, by using the communication interface 33, communication between application programs 322 (for example, databases) disposed in a distributed manner. It may be understood that the memory 32 may be a volatile memory or a non-volatile memory, and may also include both a volatile memory and a non-volatile memory. The non-volatile memory may be a ROM or a PROM. The memory 32 in this embodiment of the present disclosure is intended to include, but not be limited to, these and any other proper memories. In this embodiment of the present disclosure, the memory 32 is configured to store various types of application programs 322 and operating systems 321 to support operations of the machine learning model training apparatus 30. The machine learning model training method disclosed in the embodiments of the present disclosure may be applied to the processor 31 or performed by the processor 31. The processor 31 may be an integrated circuit chip having a signal processing capability. In an implementation process, steps of the foregoing method may be performed by a hardware integrated logic circuit in the processor 31 or by an instruction in a form of software. The processor 31 may be a general-purpose processor, a DSP, another programmable logical device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 31 may implement or execute the methods, the steps, and the logical block diagrams provided in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor, any conventional processor, or the like.
A software module may be located in a storage medium, the storage medium is located in the memory 32, and the processor 31 reads information in the memory 32 and performs the steps of the foregoing method in combination with its hardware. Certainly, the embodiments of the present disclosure are not limited to being provided as methods and hardware, and there may be further multiple implementations, for example, being provided as a storage medium (storing a program configured to perform the machine learning model training method provided in the embodiments of the present disclosure). When the program is run by the processor, the following operations are performed: training a machine learning model using features of each sample in a training set based on an initial first weight of each sample and an initial second weight of each sample; in one iteration of training the machine learning model (e.g., after the machine learning model is trained at the beginning of the current iteration), determining a first sample set including one or more samples whose corresponding target variables are incorrectly predicted, and a second sample set including one or more samples whose corresponding target variables are correctly predicted, based on a predicted loss of each sample in the training set; determining an overall predicted loss of the first sample set based on a predicted loss and the corresponding first weight of each sample in the first sample set; updating a first weight and a second weight of each sample in the first sample set based on the overall predicted loss of the first sample set (e.g., at the first iteration, the first weight and the second weight of a sample are the initial first weight and the initial second weight of the sample; at an iteration other than the first iteration (e.g., the T-th iteration), the first weight and the second weight are the first weight and the second weight obtained or updated in the previous iteration (e.g., the (T−1)-th iteration)); and inputting the updated second weight of each sample in the training set, and the features and the target variable of each sample in the training set, to the machine learning model, and initiating a next iteration of training the machine learning model. When the program is run by the processor, the following operations are performed: initializing the first weight and the second weight of each sample in the training set to obtain the initial first weight of each sample and the initial second weight of each sample; inputting the second weight of each sample in the training set, the features of each sample in the training set, and the target variable of each sample in the training set to the machine learning model; and correspondingly allocating a thread to samples having a same feature in the machine learning model, and training the machine learning model using parallel threads. When the program is run by the processor, the following operations are performed: uniformly allocating the initial first weight to each sample in the training set, and uniformly allocating the initial second weight different from the initial first weight to each sample in the training set based on a quantity of samples in the training set. When the program is run by the processor, the following operations are further performed: after training the machine learning model at one iteration, determining, according to a gradient direction of a loss function of the machine learning model, a compensation function that causes the predicted loss to converge along the gradient direction; and superimposing, on the machine learning model, the compensation function to compensate for the predicted loss.
When the program is run by the processor, the following operations are further performed: based on a difference between a predicted value of the target variable and an actual value of the target variable of a sample in the first sample set in the machine learning model, determining that the predicted loss of the sample in the first sample set is an output value of a loss function that uses the difference as a dependent variable. When the program is run by the processor, the following operations are performed: in the training set, determining the first sample set whose predicted losses exceed a loss threshold, and the second sample set whose predicted losses do not exceed the loss threshold. When the program is run by the processor, the following operations are performed: constructing a weight update factor by using a product of the overall predicted loss of the first sample set and the first weight; and decreasing the first weight of each sample in the second sample set, and increasing the second weight of each sample in the first sample set, based on the weight update factor. When the program is run by the processor, the following operations are further performed: performing normalization processing on the first weight of each sample in the training set to obtain a normalization processing result, and updating the first weight of each sample in the training set based on the normalization processing result. When the program is run by the processor, the following operations are further performed: determining a fusion coefficient of a classifier included in the machine learning model by minimizing a quadratic sum of predicted losses of the samples in the first sample set; and combining classifiers to form the trained machine learning model based on fusion coefficients of the classifiers.
When the program is run by the processor, the following operations are further performed: updating the first sample set and the second sample set, and iteratively updating the first weight and the second weight of the first sample set; and training the machine learning model based on the updated first sample set and the updated second weight, until a quantity of iterations (e.g., iterative update times) is satisfied, or the overall predicted loss of the first sample set is less than a pre-determined value. A functional structure of the machine learning model training apparatus is further described. Refer to an optional schematic structural functional diagram of the machine learning model training apparatus20 shown in FIG.7, including: a first training unit21, configured to train a machine learning model at a granularity of a feature of each sample in a training set based on an initial first weight and an initial second weight of each sample; a sample unit22, configured to determine a first sample set in which a corresponding target variable is incorrectly predicted, and a second sample set in which a corresponding target variable is correctly predicted, based on a predicted loss of each sample in the training set; a loss prediction unit23, configured to determine an overall predicted loss of the first sample set based on a predicted loss of each sample in the first sample set and the corresponding first weight; a weight unit24, configured to increase a first weight and a second weight of each sample in the first sample set based on the overall predicted loss of the first sample set; and a second training unit25, configured to: input the updated second weight of each sample in the training set, and the feature and the target variable of each sample to the machine learning model, and train the machine learning model at the granularity of the feature of each sample.
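The iterative procedure with its two stopping criteria (an iteration budget, or the overall predicted loss falling below a pre-determined value) might be sketched as follows. StubModel and its shrinking-loss behavior are purely hypothetical stand-ins for the real ensemble; the per-sample loss here is also simplified to cover the whole training set rather than only the first sample set.

```python
class StubModel:
    """Hypothetical stand-in for the machine learning model."""
    def __init__(self):
        self.rounds = 0
    def fit(self, samples, targets, sample_weight):
        self.rounds += 1          # a real model would refit its classifiers
    def predicted_losses(self, samples, targets):
        # Pretend every sample's predicted loss shrinks each round.
        return [0.5 / self.rounds] * len(samples)

def train(model, samples, targets, max_iterations=10, loss_target=0.2):
    n = len(samples)
    w1 = [1.0 / n] * n            # first weight (loss accounting)
    w2 = [0.5 / n] * n            # second weight (fed to training)
    overall = float("inf")
    for _ in range(max_iterations):           # stopping criterion 1
        model.fit(samples, targets, sample_weight=w2)
        losses = model.predicted_losses(samples, targets)
        overall = sum(l * w for l, w in zip(losses, w1))
        if overall < loss_target:             # stopping criterion 2
            break
    return model, overall

model, overall = train(StubModel(), [[0], [1]], [0, 1])
```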
In an embodiment, the first training unit21 is further configured to: initialize the first weight and the second weight of each sample in the training set; input the second weight of each sample and the feature and the target variable of each sample in the training set to the machine learning model; and correspondingly allocate a thread to samples having a same feature in the machine learning model, and perform training in a parallel thread manner. In an embodiment, the first training unit21 is further configured to: uniformly allocate the first weight to each sample in the training set, and uniformly allocate the second weight different from the first weight to each sample in the training set based on a quantity of samples in the training set. In an embodiment, the machine learning model training apparatus20 further includes: a compensation unit26, configured to: after the first training unit21 and the second training unit25 train the machine learning model each time, determine, according to a gradient direction of a loss function of the machine learning model, a compensation function that causes the predicted loss to converge along the gradient direction; and superimpose, on the machine learning model, the compensation function used to compensate for the predicted loss. In an embodiment, the loss prediction unit23 is further configured to: based on a difference between a predicted value and an actual value of each sample in the first sample set in the machine learning model, determine that an output value of a loss function that uses the difference as a dependent variable is a predicted loss of a corresponding sample. In an embodiment, the sample unit22 is further configured to determine, in the training set, the first sample set in which the predicted loss exceeds the loss threshold, and the second sample set in which the predicted loss does not exceed the loss threshold.
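The per-feature parallelism described above (a thread correspondingly allocated to samples having a same feature) can be illustrated with standard-library threads. The toy columns and the mean statistic are assumptions standing in for the per-feature split search a real booster such as XGBoost performs.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy training set: each column is one feature; samples sharing a
# feature (one column) are handed to their own worker thread.
columns = {
    "login_count": [3, 7, 1, 9],
    "payment_amount": [10.0, 0.5, 4.2, 8.8],
    "message_rate": [0.2, 0.9, 0.4, 0.1],
}

def feature_stats(item):
    # Per-feature work done by one thread; the mean here stands in
    # for the real split-gain scan over that feature's values.
    name, values = item
    return name, sum(values) / len(values)

# One worker per feature column, scanned in parallel threads.
with ThreadPoolExecutor(max_workers=len(columns)) as pool:
    stats = dict(pool.map(feature_stats, columns.items()))
```

Because each thread touches a disjoint column, no locking is needed, which is what makes the feature granularity a convenient unit of parallelism.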
In an embodiment, the sample unit22 is further configured to: construct a weight update factor by using a product of the overall predicted loss of the first sample set and the first weight; and decrease the first weight of each sample in the second sample set, and increase the second weight of each sample in the first sample set based on the weight update factor. In an embodiment, the weight unit24 is further configured to: perform normalization processing on the first weight of each sample in the training set, and correspondingly update the first weight of each sample based on a normalization processing result. In an embodiment, the machine learning model training apparatus20 further includes: a fusion unit27, configured to: determine a fusion coefficient of a classifier included in the machine learning model, by minimizing a quadratic sum of predicted losses of the samples in the first sample set; and combine classifiers to form the trained machine learning model, based on fusion coefficients of the classifiers. In an embodiment, the second training unit25 is further configured to: train the machine learning model based on the first sample set and the second sample set that are iteratively updated by the sample unit, and the second weight of the first sample set that is iteratively updated by the weight unit, until a quantity of iterative update times is satisfied, or the overall predicted loss of the first sample set is less than a pre-determined value. The following further exemplarily describes different implementations of the machine learning model training apparatus. 1.
Application Program and Module at a Mobile End FIG.8A is an optional schematic diagram in which a software module that may be designed by using a programming language such as C/C++ or Java is embedded into various mobile end APPs (for example, WeChat) based on a system such as Android or iOS (stored in a storage medium of the mobile end as an executable instruction, and executed by a processor of the mobile end) according to an embodiment of the present disclosure. Related tasks such as machine learning model training and prediction are completed by using a computing resource of the mobile end, and results of the machine learning model training, prediction, and the like are periodically or aperiodically transferred to a remote server in various network communication manners or locally stored at the mobile end. For example, an APP at the mobile end may complete machine learning model training based on related sample data collected from the mobile end, and predict whether an APP user is a potential user to be lost. According to a predicted result reported by the APP, a background server of the APP pushes, with reference to a customer care policy, a free service to the user to avoid a user loss. 2. Application Program and Platform of a Server FIG.8B is an optional schematic diagram in which a dedicated software module in application software or a large software system designed by using a programming language such as C/C++ and Java runs at a server end (stored in a storage medium of the server end as an executable instruction, and run by a processor of the server end) according to an embodiment of the present disclosure. At least one of various original data, various levels of intermediate data, and a final result received from another device, and existing data or results on the server are combined to perform machine learning model training. The trained machine learning model is used to perform prediction.
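The mobile-end flow described above (local prediction, with a result reported so the background server can act on its customer care policy) might look like the following sketch; the model callable, the threshold, and the report fields are all hypothetical.

```python
def mobile_end_flow(model, user_features, churn_threshold=0.6):
    # On-device prediction using a locally trained model (hypothetical).
    score = model(user_features)
    if score > churn_threshold:
        # Report to the background server, which would push a free
        # service to the user per its customer care policy.
        return {"predicted_churn": score, "action": "push_free_service"}
    return None  # user not at risk; nothing to report

report = mobile_end_flow(lambda features: 0.8, {"logins_per_week": 1})
```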
The machine learning model or a predicted result is outputted, in real time or not in real time, to another application program or module for usage, or may be written to a database or a file at the server end for storage. The embodiments of the present disclosure may be further provided as customized web interfaces or other user interfaces (UI) that are easy for interaction and that are attached on a distributed parallel computing platform including multiple servers, to form a data extraction platform for usage by an individual, a group, or an enterprise, a credit evaluation platform (used to evaluate whether a customer is a high-quality customer), a user loss warning platform (used to identify a potential customer to be lost), and the like. A user may upload existing data packets to the platform in batches, to obtain various computing results, or transmit real-time data streams to the platform to compute and update various levels of results in real time. 3. Application Program Interface (API) and Plug-In at a Server End FIG.8C is an optional schematic diagram of an API, a software development toolkit (SDK), or a plug-in that implements a machine learning model training function, and performs prediction based on a machine learning model at a server end according to an embodiment of the present disclosure. The API, the SDK, or the plug-in is invoked by application program developers at other server ends, and embedded into various application programs. 4. API and Plug-In on a Mobile Device Client FIG.8D is an optional schematic diagram of an API, an SDK, or a plug-in that implements a machine learning model training function, and performs prediction based on a machine learning model at a mobile device end according to an embodiment of the present disclosure. The API, the SDK, or the plug-in is invoked by application program developers at other mobile ends, and embedded into various application programs. 5.
Cloud Open Service FIG.8E is an optional schematic diagram of a cloud service in which prediction is performed based on a machine learning model according to an embodiment of the present disclosure. The cloud service includes a credit evaluation cloud service and a user loss warning cloud service. The embodiments of the present disclosure may be further provided as an API, an SDK, a plug-in, and the like of a credit evaluation cloud service and a user loss warning cloud service, and packaged as a cloud service that can be openly used by persons inside and outside an enterprise. Alternatively, various results are displayed on various terminal display devices in a proper form, for query by an individual, a group, an enterprise, or an institution. The following describes examples of application scenarios to which the machine learning model provided in the embodiments of the present disclosure can be applied. Certainly, the scenario examples provided below are not intended to be limiting. Scenario 1) The machine learning model is implemented as a binary classification warning model: Features including more than 1400 dimensions are constructed in the machine learning model based on basic types of features of a moral risk, income performance, a strained money chain, a game preference, malicious usage, and the like. On this basis, whether a user is a high-quality customer is predicted by using the binary classification warning model, to provide data support for further improving risk control performance of banks for credit users and formulating an effective policy. First.
Prepare Sample Data, and Construct a Training Set Based on main types of features of samples such as a moral risk, income performance, a strained money chain, a game preference, malicious usage, and the like, the main types of features are further classified into subtypes of communication (6), special number (11), label (29), account information consistency (20), location-based service (56), device (39), message (28), communication time segment (42), game (142), shared friend (76), login behavior (172), adding a friend (384), and payment (432) in 13 dimensions (the number in brackets indicates the quantity of features that may be used for modeling in each subtype; some features are primitive feature indexes, and some features are feature indexes derived from primitive indexes). Features of multiple samples in the foregoing dimensions, and the target variable (that is, a grade or a confidence level of a sample that is a high-quality customer) form the training set. Second. Weight Allocation of a Sample The prior first weight and the prior second weight are uniformly allocated to each sample in the training set, values of the first weights w1 of the samples are the same, and values of the second weights w_xgb1 of the samples are the same. Third. Iterative Training Stage The second weight of each sample in the training set, and the feature and the target variable (that is, a grade or a confidence level of a sample that is a high-quality customer) of each sample are inputted to a binary classification warning model for training. Assuming that a binary classification warning model uses the linear system model shown in formula (2), that is, classifiers in the binary classification warning model are combined based on a fusion coefficient, each iterative training process of the binary classification warning model is a process of adjusting the fusion coefficient according to a relationship between the feature and the target variable of the sample.
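Since the scenario combines classifiers through fusion coefficients solved by minimizing a weighted error function, an AdaBoost-style coefficient is one concrete possibility. The 0.5*log((1-err)/err) closed form and the toy classifiers below are assumptions, not the disclosure's formulas (2), (4) or (5).

```python
import math

def fusion_coefficient(weighted_error):
    # AdaBoost-style solution of the weighted-error minimization;
    # this exact closed form is an assumption.
    return 0.5 * math.log((1.0 - weighted_error) / weighted_error)

def combine(classifiers, alphas):
    # Linear combination of classifiers in the spirit of formula (2):
    # the sign of the fusion-weighted vote decides the binary class.
    def model(x):
        score = sum(a * clf(x) for clf, a in zip(classifiers, alphas))
        return 1 if score >= 0 else -1
    return model

# Two toy weak classifiers on a scalar feature (hypothetical).
classifiers = [lambda x: 1 if x > 0 else -1,
               lambda x: 1 if x > 2 else -1]
alphas = [fusion_coefficient(0.3), fusion_coefficient(0.4)]
model = combine(classifiers, alphas)
```

A lower weighted error yields a larger fusion coefficient, so the more accurate classifier carries more of the final vote.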
After each iterative training of the binary classification warning model, the predicted loss of each sample in the training set is determined based on the loss function of the binary classification warning model, and the first sample set gt in which the target variable is incorrectly predicted and the second sample set le in which the target variable is correctly predicted are determined according to the predicted loss. The predicted loss of each sample in the first sample set is calculated according to formula (14), the overall predicted loss of the first sample set gt is determined based on formula (15) with reference to the first weight of the sample, and the first weight and the second weight of each sample in the first sample set are increased according to the overall predicted loss by using formulas (16) and (17) or formulas (18) and (19). Assuming that the binary classification warning model includes multiple classifiers marked as y1(x) to ym(x), the first sample set of the training set and the second weight thereof, and the second sample set and the second weight thereof are inputted into each classifier in the binary classification warning model, a fusion coefficient αm of each classifier is solved by minimizing a weight error function of the classifier, and classifiers are combined based on formula (2) and the fusion coefficient αm of each classifier, to obtain a new binary classification warning model after iterative training. Iterative training ends after reaching a preset quantity of iterative training times. Alternatively, the target function shown in formulas (4) and (5) is constructed for the binary classification warning model. Based on whether a value of the target function is less than a pre-determined value, it is determined whether an ideal fusion coefficient is obtained after each iterative training. When the value is not less than the pre-determined value, iterative training continues.
When the value is less than the pre-determined value, the fusion coefficient is outputted, and the classifiers are combined according to the fusion coefficient, to obtain the trained binary classification warning model. When the machine learning model uses an XGBoost model as a classifier, the XGBoost model supports parallel training of samples at a granularity of features. For example, one or more threads are allocated to samples having a same feature, and a multi-threaded processor is used to perform training in a hardware implementation. In this way, samples having different classes of features may be used in parallel to train the machine learning model, thereby significantly reducing a training time of the machine learning model, and improving machine learning model training efficiency. Feature data of a to-be-predicted user is collected, and a grade (or a confidence level) of a high-quality customer is predicted according to the trained binary classification warning model. When the grade exceeds a grade threshold (or a confidence level threshold), it is determined that the user is a high-quality customer. Scenario 2) The machine learning model is implemented as a user loss warning model: Behavior data of known users (including a user lost and a user not lost) is analyzed in the user loss warning model based on features such as a basic user attribute, activeness, login states, and message states. Behavior data prediction and modeling are performed by using the present disclosure, a potential user to be lost is accurately predicted, and advertising activities are performed for a user that may be lost, thereby improving overall user activeness. First.
Prepare Sample Data, and Construct a Training Set The training set is formed based on features of multiple samples in different dimensions (basic user attributes, activeness, login states, message states, and the like), and the target variable (that is, a grade or a confidence level of a sample that is a lost user). Second. Weight Allocation of a Sample The prior first weight and the prior second weight are uniformly allocated to each sample in the training set, values of the first weights w1 of the samples are the same, and values of the second weights w_xgb1 of the samples are the same. Third. Iterative Training Stage The second weight of each sample in the training set, and the feature and the target variable of each sample (that is, a grade or a confidence level of a sample that is a lost user) are inputted to a user loss warning model for training. Assuming that a user loss warning model uses the linear system model shown in formula (2), that is, classifiers in the user loss warning model are combined based on a fusion coefficient, each iterative training process of the user loss warning model is a process of adjusting the fusion coefficient according to a relationship between the feature and the target variable of the sample. After each iterative training of the user loss warning model, the predicted loss of each sample in the training set is determined based on the loss function of the user loss warning model, and the first sample set gt in which the target variable is incorrectly predicted and the second sample set le in which the target variable is correctly predicted are determined according to the predicted loss.
The predicted loss of each sample in the first sample set is calculated according to formula (14), the overall predicted loss of the first sample set gt is determined based on formula (15) with reference to the first weight of the sample, and the first weight and the second weight of each sample in the first sample set are increased according to the overall predicted loss by using formulas (16) and (17) or formulas (18) and (19). Assuming that the user loss warning model includes multiple classifiers marked as y1(x) to ym(x), the first sample set of the training set and the second weight thereof, and the second sample set and the second weight thereof are inputted into each classifier in the user loss warning model, a fusion coefficient αm of each classifier is solved by minimizing a weight error function of the classifier, and classifiers are combined based on formula (2) and the fusion coefficient αm of each classifier, to obtain a new user loss warning model after iterative training. Iterative training ends after reaching a preset quantity of iterative training times. Alternatively, the target function shown in formulas (4) and (5) is constructed for the user loss warning model. Based on whether a value of the target function is less than a pre-determined value, it is determined whether an ideal fusion coefficient is obtained after each iterative training. When the value is not less than the pre-determined value, iterative training continues. When the value is less than the pre-determined value, the fusion coefficient is outputted, and the classifiers are combined according to the fusion coefficient, to obtain the trained user loss warning model. When the machine learning model uses an XGBoost model as a classifier, the XGBoost model supports parallel training of samples at a granularity of features.
For example, one or more threads are allocated to samples having a same feature, and a multi-threaded processor is used to perform training in a hardware implementation. In this way, samples having different classes of features may be used in parallel to train the machine learning model, thereby significantly reducing a training time of the machine learning model, and improving machine learning model training efficiency. Feature data of a to-be-predicted user is collected, and a grade (or a confidence level) of a lost customer is predicted according to the trained user loss warning model. When the grade exceeds a grade threshold (or a confidence level threshold), it is determined that the user is a potential customer to be lost. The embodiments of the present disclosure have the following beneficial effects:
1) The machine learning model is trained when samples are distributed based on the prior second weight, a sample (the first sample set) that is incorrectly predicted by the machine learning model is found, and a corresponding weight is increased. In this way, by using updated distribution of samples, in subsequent training, a classifier in the machine learning model pays more attention to the sample that is incorrectly predicted, and prediction precision of the incorrect sample is improved.
2) The machine learning model is trained in parallel at the granularity of the feature, a training process can be quickly completed by a multithreaded processor easily, and training efficiency of the machine learning model is improved.
3) To resolve a problem that the fusion coefficient of the machine learning model is not optimal, the optimal fusion coefficient of the classifier is solved by using the quadratic sum of the predicted losses of the samples, to ensure precision of the trained machine learning model.
The foregoing descriptions are merely specific embodiments of the present disclosure, but are not intended to limit the protection scope of the present disclosure.
Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
11861479
DETAILED DESCRIPTION As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention. Human beings tend to develop a first impression of a person based on voice. In other words, when users listen to speech, they quickly form an opinion about the speaker. This may occur for speech from a digital assistant as well. When a digital assistant is talking to the user to perform tasks or answer queries, the user may also form an opinion of the digital assistant. This may be particularly relevant to digital assistants, as the digital assistant may be perceived like an employee of a company representing the company's brand. It would be beneficial for the user to connect with the digital assistant personality. That connection may automatically affect the ongoing utility and trust that the user places in the company overall. A digital assistant persona may make a good impression on the user through demonstration by the digital assistant of traits other than expertise and knowledge. These traits may include personality traits such as reliability, helpfulness, kindness, friendliness and resourcefulness. A goal of a virtual assistant is to establish user trust, engagement, and satisfaction with a brand. Thus, for brand awareness, it is beneficial for the virtual assistant to exude a digital assistant persona that is pleasant to the user. Elements such as intensity, frequency, pace, intonation, quality, melody, harmony, rhythm, etc.
may influence the way users perceive the brand. Digital assistant personality includes the choice of words, characteristic traits, and tone of voice. A combination of these may be used to create a standard vocabulary for a digital assistant. This standard vocabulary then helps create the prompts a digital assistant would say for the command. Creating the standard vocabulary for a digital assistant and writing voice prompts is a tedious effort and requires specialized skill sets. This task is further complicated by a desire to provide prompts that help the user to connect with the personality of the digital assistant. By automating the process of personality development through artificial intelligence and machine learning techniques, a standard vocabulary may be generated which will then help to develop voice prompts for common tasks. This automated approach to digital assistant design saves significant time and effort, as well as aids in producing a personality that is engaging to the user. As discussed in detail herein, four recommendation engines may work together to automatically generate the standard vocabulary and voice prompts. These engines include (i) a personality type recommendation engine, (ii) a standard vocabulary recommendation engine, (iii) a brand tone and voice engine, and (iv) an analytics engine. Basic information may be input in the personality studio interface about the desired digital assistant persona. Based on the input, the personality recommendation engine automatically performs pattern matching between the inputs and the output of the recommendation engine and generates a personality type (e.g., from the 16 Myers and Briggs personality types), as well as a standard vocabulary based on the learning data that is collected and trained. Regarding the personality type recommendation engine, a combination of machine learning techniques may be used to develop the personality type.
The personality type recommendation engine may generate base information according to personality type in combination with collected web data for each personality type, which may include data with respect to speaking style, choice of words, and tone of voice. The brand tone and voice engine may receive inputs from the corporate branding guidelines on tone of voice, choice of words, and overall corporate value and personality. The brand tone and voice engine may also receive the digital assistant personality type generated using personality studio to further refine the standard vocabulary. The standard vocabulary recommendation engine may receive inputs from the personality type recommendation engine, the brand tone and voice engine and any analytics data captured (prompt edits, user testing inputs, etc.) to develop a standard vocabulary recommendation. This will be used to generate a standard vocabulary for the digital assistant. When the user creates a skill and inputs the intent or user utterances, the standard vocabulary may be used to autogenerate the voice prompts. Further details are discussed in detail herein. FIG.1 illustrates an example diagram of a system100 configured to provide digital assistant services to a vehicle102. The vehicle102 may include various types of passenger vehicle, such as crossover utility vehicle (CUV), sport utility vehicle (SUV), truck, recreational vehicle (RV), boat, plane or other mobile machine for transporting people or goods. Telematics services may include, as some non-limiting possibilities, navigation, turn-by-turn directions, vehicle health reports, local business search, accident reporting, and hands-free calling. In an example, the system100 may include the SYNC system manufactured by The Ford Motor Company of Dearborn, MI. It should be noted that the illustrated system100 is merely an example, and more, fewer, and/or differently located elements may be used.
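The skill flow described above (an intent comes in, and the standard vocabulary autogenerates the voice prompt) can be sketched as follows. The vocabulary entries, template strings, and intent names are all hypothetical illustrations, not the engines' actual output.

```python
# Hypothetical standard vocabulary as the recommendation engines might
# emit it: brand-consistent word choices keyed by slot name.
STANDARD_VOCABULARY = {
    "greeting": "Happy to help!",
    "confirm": "You got it",
    "subject": "your vehicle",
}

def autogenerate_prompt(intent, vocabulary):
    # Each skill intent maps to a template filled from the standard
    # vocabulary; both the mapping and the templates are assumptions.
    templates = {
        "start_engine": "{confirm}. Starting {subject} now.",
        "check_fuel": "{greeting} Let me check the fuel level of {subject}.",
    }
    return templates[intent].format(**vocabulary)

prompt = autogenerate_prompt("start_engine", STANDARD_VOCABULARY)
# prompt == "You got it. Starting your vehicle now."
```

Because the persona lives entirely in the vocabulary, swapping in a different brand's word choices re-skins every prompt without touching the templates.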
The infotainment system104 may include one or more processors106 configured to perform instructions, commands and other routines in support of the processes described herein. For instance, the infotainment system104 may be configured to execute instructions of vehicle applications110 to provide features such as navigation, accident reporting, satellite radio decoding, and hands-free calling. Such instructions and other data may be maintained in a non-volatile manner using a variety of types of computer-readable storage medium112. The computer-readable medium112 (also referred to as a processor-readable medium or storage) includes any non-transitory medium (e.g., a tangible medium) that participates in providing instructions or other data that may be read by the processor106 of the infotainment system104. Computer-executable instructions may be compiled or interpreted from computer programs created using a variety of programming languages and/or technologies, including, without limitation, and either alone or in combination, Java, C, C++, C#, Objective C, Fortran, Pascal, JavaScript, Python, Perl, and PL/SQL. The infotainment system104 may be provided with various features allowing the vehicle occupants to interface with the infotainment system104. For example, the infotainment system104 may include an audio input114 configured to receive spoken commands from vehicle occupants through a connected microphone116, and auxiliary audio input118 configured to receive audio signals from connected devices. The auxiliary audio input118 may be a physical connection, such as an electrical wire or a fiber optic cable, or a wireless input, such as a BLUETOOTH audio connection. In some examples, the audio input114 may be configured to provide audio processing capabilities, such as pre-amplification of low-level signals, and conversion of analog inputs into digital data for processing by the processor106.
The infotainment system104 may also provide one or more audio outputs120 to an input of an audio module122 having audio playback functionality. In other examples, the infotainment system104 may provide the audio output to an occupant through use of one or more dedicated speakers (not illustrated). The audio module122 may include an input selector124 configured to provide audio content from a selected audio source126 to an audio amplifier128 for playback through vehicle speakers130 or headphones (not illustrated). The audio sources126 may include, as some examples, decoded amplitude modulated (AM) or frequency modulated (FM) radio signals, and audio signals from compact disc (CD) or digital versatile disk (DVD) audio playback. The audio sources126 may also include audio received from the infotainment system104, such as audio content generated by the infotainment system104, audio content decoded from flash memory drives connected to a universal serial bus (USB) subsystem132 of the infotainment system104, and audio content passed through the infotainment system104 from the auxiliary audio input118. The infotainment system104 may utilize a voice interface134 to provide a hands-free interface to the infotainment system104. The voice interface134 may support speech recognition from audio received via the microphone116 according to grammar associated with available commands, and voice prompt generation for output via the audio module122. The voice interface134 may utilize probabilistic voice recognition techniques using the grammar in comparison to the input speech. In many cases, the voice interface134 may include a standard user profile tuning for use by the voice recognition functions to allow the voice recognition to be tuned to provide good results on average, resulting in positive experiences for the maximum number of initial users.
In some cases, the system may be configured to temporarily mute or otherwise override the audio source specified by the input selector124 when an audio prompt is ready for presentation by the infotainment system104 and another audio source126 is selected for playback. The infotainment system104 may also receive input from human-machine interface (HMI) controls136 configured to provide for occupant interaction with the vehicle102. For instance, the infotainment system104 may interface with one or more buttons or other HMI controls configured to invoke functions on the infotainment system104 (e.g., steering wheel audio buttons, a push-to-talk button, instrument panel controls, etc.). The infotainment system104 may also drive or otherwise communicate with one or more displays138 configured to provide visual output to vehicle occupants by way of a video controller140. In some cases, the display138 may be a touch screen further configured to receive user touch input via the video controller140, while in other cases the display138 may be a display only, without touch input capabilities. The infotainment system104 may be further configured to communicate with other components of the vehicle102 via one or more in-vehicle networks142. The in-vehicle networks142 may include one or more of a vehicle controller area network (CAN), an Ethernet network, and a media oriented systems transport (MOST) network, as some examples. The in-vehicle networks142 may allow the infotainment system104 to communicate with other vehicle102 systems, such as a vehicle modem144 (which may not be present in some configurations), a global positioning system (GPS) module146 configured to provide current vehicle102 location and heading information, and various vehicle ECUs148 configured to cooperate with the infotainment system104.
As some non-limiting possibilities, the vehicle ECUs148may include a powertrain control module configured to provide control of engine operating components (e.g., idle control components, fuel delivery components, emissions control components, etc.) and monitoring of engine operating components (e.g., status of engine diagnostic codes); a body control module configured to manage various power control functions such as exterior lighting, interior lighting, keyless entry, remote start, and point of access status verification (e.g., closure status of the hood, doors and/or trunk of the vehicle102); a radio transceiver module configured to communicate with key fobs or other local vehicle102devices; and a climate control management module configured to provide control and monitoring of heating and cooling system components (e.g., compressor clutch and blower fan control, temperature sensor information, etc.). As shown, the audio module122and the HMI controls136may communicate with the infotainment system104over a first in-vehicle network142-A, and the vehicle modem144, GPS module146, and vehicle ECUs148may communicate with the infotainment system104over a second in-vehicle network142-B. In other examples, the infotainment system104may be connected to more or fewer in-vehicle networks142. Additionally or alternately, one or more HMI controls136or other components may be connected to the infotainment system104via different in-vehicle networks142than shown, or directly without connection to an in-vehicle network142. The infotainment system104may also be configured to communicate with mobile devices152of the vehicle occupants. The mobile devices152may be any of various types of portable computing devices, such as cellular phones, tablet computers, smart watches, laptop computers, portable music players, or other devices capable of communication with the infotainment system104.
In many examples, the infotainment system104may include a wireless transceiver150(e.g., a BLUETOOTH module, a ZIGBEE transceiver, a Wi-Fi transceiver, an IrDA transceiver, an RFID transceiver, etc.) configured to communicate with a compatible wireless transceiver154of the mobile device152. Additionally or alternately, the infotainment system104may communicate with the mobile device152over a wired connection, such as via a USB connection between the mobile device152and the USB subsystem132. In some examples the mobile device152may be battery powered, while in other cases the mobile device152may receive at least a portion of its power from the vehicle102via the wired connection. The communications network156may provide communications services, such as packet-switched network services (e.g., Internet access, VoIP communication services), to devices connected to the communications network156. An example of a communications network156may include a cellular telephone network. Mobile devices152may provide network connectivity to the communications network156via a device modem158of the mobile device152. To facilitate the communications over the communications network156, mobile devices152may be associated with unique device identifiers (e.g., mobile device numbers (MDNs), Internet protocol (IP) addresses, etc.) to identify the communications of the mobile devices152over the communications network156. In some cases, occupants of the vehicle102or devices having permission to connect to the infotainment system104may be identified by the infotainment system104according to paired device data160maintained in the storage medium112. 
The paired device data160may indicate, for example, the unique device identifiers of mobile devices152previously paired with the infotainment system104of the vehicle102, secret information shared between the paired device and the infotainment system104such as link keys, and/or personal identification numbers (PINs), and most recently used or device priority information, such that the infotainment system104may automatically reconnect to the mobile devices152matching data in the paired device data160without user intervention. When a mobile device152that supports network connectivity is connected to the infotainment system104, the mobile device152may allow the infotainment system104to use the network connectivity of the device modem158to communicate over the communications network156with the remote telematics server162or other remote computing device. In one example, the infotainment system104may utilize a data-over-voice plan or data plan of the mobile device152to communicate information between the infotainment system104and the communications network156. Additionally or alternately, the infotainment system104may utilize the vehicle modem144to communicate information between the infotainment system104and the communications network156, without use of the communications facilities of the mobile device152. Similar to the infotainment system104, the mobile device152may include one or more processors164configured to execute instructions of mobile applications loaded to a memory166of the mobile device152from storage medium168of the mobile device152. In some examples, the mobile applications may be configured to communicate with the infotainment system104via the wireless transceiver154and with the remote telematics server162or other network services via the device modem158. 
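The automatic-reconnect behavior keyed off the paired device data160 can be sketched as a priority-ordered lookup over previously paired devices. The field names (`device_id`, `last_used`, `priority`) are assumptions introduced for illustration, not the patent's actual data layout.

```python
# Hypothetical sketch: pick which previously paired device the
# infotainment system should try to reconnect first, preferring the
# most recently used entry and breaking ties with a stored priority
# rank (lower rank = higher priority). Field names are illustrative.
def pick_reconnect_candidate(paired_device_data):
    if not paired_device_data:
        return None
    # Higher last_used timestamp wins; ties broken by lower priority rank.
    return max(paired_device_data,
               key=lambda d: (d["last_used"], -d["priority"]))["device_id"]

paired = [
    {"device_id": "phone-a", "last_used": 100, "priority": 2},
    {"device_id": "phone-b", "last_used": 250, "priority": 1},
]
print(pick_reconnect_candidate(paired))  # most recently used device
```

A reconnect routine following this sketch would then use the stored link key or PIN for the selected entry to re-establish the pairing without user intervention.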
The infotainment system104may also include a device link interface172to facilitate the integration of functionality of the mobile applications into the grammar of commands available via the voice interface134. The device link interface172may also provide the mobile applications with access to vehicle functions and information available to the infotainment system104via the in-vehicle networks142. An example of a device link interface172may be the SYNC APPLINK component of the SYNC system provided by The Ford Motor Company of Dearborn, MI. FIG.2illustrates an example system200for the application of a customized digital assistant persona208to the vehicle102. As shown, a user may utilize the mobile device152(or another network-enabled computing device) to provide user input206to a personality studio portal204over the communications network156to a digital assistant configuration server202. The digital assistant configuration server202may generate the customized digital assistant persona208responsive to the user input206and may provide the digital assistant persona208to the vehicle102for use by the infotainment system104. Similar to as discussed above with respect to the remote telematics server162, the digital assistant configuration server202may include various types of computing apparatus including a memory on which computer-executable instructions may be maintained, where the instructions may be executable by one or more processors of the device. The personality studio portal204may be an application or library included on the storage of or otherwise accessible by the digital assistant configuration server202. The personality studio portal204may provide, to a user of the mobile device152, a user interface provided by the digital assistant configuration server202. To do so, the digital assistant configuration server202may be configured to maintain the personality studio portal204accessible to the mobile device152(or other devices) over the communications network156. 
In an example, the digital assistant configuration server202may be configured to provide the personality studio portal204by using a web server application. As another possibility, the digital assistant configuration server202may execute a dedicated server application that may be accessed by a dedicated client application of a connecting device to provide the personality studio portal204. As explained in detail herein, the personality studio portal204may be configured to allow the user to access, view, and update aspects of the digital assistant persona208. The user input206may include the input of various information to the mobile device152in support of the controlled operation of the personality studio portal204. The user input206may be provided by answering questions indicated by the personality studio portal204, e.g., through the selections of options and/or textual input. This input may take various forms, such as touch input to a screen of the mobile device152, and/or audio input received to a microphone of mobile device152and transcribed into text. The digital assistant persona208may include various information to allow a digital assistant to provide a digital assistant having a predefined personality. Referring toFIG.3, this persona-defining information may include data regarding personality type, character traits, tone of voice, and/or speaking style. The digital assistant persona208may be used as an aid in defining a standard vocabulary302. The standard vocabulary302includes various elements that make up a spoken or displayed prompt from a digital assistant to a user. These elements may include, as some examples, introductions, acknowledgments, confirmations, error handling, discourse markers, unsuccessful task indications, apologies, empathy, tapering questions, and confirmations. The standard vocabulary302may be useful for the generation of content304. Skills may refer to individual bots or applications that are focused on specific types of tasks. 
Some skills require access to a knowledge base. The standard vocabulary302of the digital assistant persona208helps select content304that follows the persona of the digital assistant across different skills. These skills may include, as some examples, customer support, frequently asked questions (FAQs), notifications, Easter eggs, onboarding, etc. The digital assistant persona208may also be used as an aid in defining a prompt tune306for the digital assistant. As compared to the content304, which relates to the substance of what information is provided by the digital assistant, the prompt tune306instead relates to how the information is spoken. As some examples, the prompt tune306may include information indicative of aspects of the speech of the digital assistant such as tone, pitch, speed, and pronunciation. FIG.4illustrates an example personality studio user interface400in accordance with an embodiment of the disclosure. The personality studio user interface400may be provided to a user accessing the personality studio portal204, as mentioned above. In general, the personality studio user interface400is configured to present an approach that allows the user to define aspects of the digital assistant persona208. The personality studio user interface400may be a cloud-based solution which leverages artificial intelligence and machine learning techniques to auto-generate prompts based on the user request type. The personality studio user interface400may include a plurality of categories of configurable information. In the illustrated example, these categories include general information402, a backstory404, a personality register406, a personality type408, character traits410, service tasks412, persuasive tasks414, tone of voice416, speaking style418, and sample dialog420. The general information402may include basic details about the digital assistant, such as name, age, and gender.
The general information402may also include one or more images of the digital assistant, which may be used in user interfaces, for example, to identify who is communicating. The backstory404may include background information about the character of the digital assistant. This may include, for example, a fictitious background relating to where the persona of the digital assistant was born, raised, attended school, siblings, and so on. The personality register406may include information indicative of personality aspects of the digital assistant. For instance, the personality register406may allow for selection of aspects such as whether the persona of the digital assistant is dominant or submissive, and whether the persona is friendly or averse. Personality traits describe one's public, external behavior, while character traits describe one's private, internal compass. The personality type408may allow for the selection of various personality traits that the persona of the digital assistant may be chosen to have. The personality traits refer to other skills besides expertise and knowledge that may be used by the digital assistant to gain user trust. Some examples of such traits are to be reliable, helpful, kind, friendly, and/or resourceful. The character traits410may similarly allow for the selection of various character traits that the persona of the digital assistant may be chosen to have. Some example character traits may be to be honest, brave, compassionate, a leader, courageous, unselfish, and/or loyal. The service tasks412may include a listing of one or more service tasks that the digital assistant may be defined to perform. Similarly, the persuasive tasks414may include a listing of one or more persuasive tasks that the digital assistant may be defined to perform. The tone of voice416may include information indicative of the tone of voice that may be used by the digital assistant. 
The tone of voice may incorporate elements such as intensity, frequency, pace, intonation, quality, melody, harmony, and rhythm. These elements may influence the way a user perceives a brand based on how it sounds. The speaking style418refers to the intonation and speaking style of the digital assistant. For example, a male voice or a female voice may be chosen, or a voice with an English accent or a Southern accent may be chosen. What voice to choose may depend on the tasks being performed as well as the expectations of the user. The speaking style418may also include choice of words, which relates to whether one word or phrase or another word or phrase with similar meaning is used. The choice of words also provides clues to personality, as users often perceive a speaker differently based on how the speaker talks. The sample dialog420may include example conversations between the digital assistant and users. These sample dialogs420may be useful in establishing rapport with the users, and may be based on the other personality information described via the personality studio user interface400. The user may utilize the personality studio user interface400to access, view, and update these aspects of the digital assistant persona208. If the user makes any additions, changes, or updates, the user may select a save control422to apply those changes to the system. If the user wishes to discard any changes, the user may select a cancel control424. Responsive to receiving input that is saved via the save control422, the personality studio portal204creates a personality type for the user. In one example, this personality type is based on the Myers and Briggs sixteen personality types. In Myers and Briggs, these sixteen personality types are based on four independent factors that are used to categorize personalities: (i) introversion vs. extraversion; (ii) sensing vs. intuition; (iii) thinking vs. feeling; and (iv) judging vs. perceiving. The introversion vs.
extraversion factor describes how a person manages their energy. Introverts spend quiet time alone or with a small group, and are reserved and thoughtful. Extraverts spend time with people and in busy, active surroundings, and tend to be more expressive and outspoken. The sensing vs. intuition factor describes how an individual processes information. Sensors focus on their five senses and tend to be hands-on learners and are often described as practical. Intuitives focus on a more abstract level of thinking, are more interested in theories, patterns, and explanations, and are often more concerned with the future than the present and are often described as creative. The thinking vs. feeling factor describes how people make decisions. Thinkers are interested in finding the most logical, reasonable choice, while feelers tend to make decisions with their hearts. The judging vs. perceiving factor describes how people approach structure in their lives. Judgers appreciate structure and order and dislike last-minute changes, while perceivers appreciate flexibility and spontaneity and keep their minds open to change. The sum of a person's four preferred styles may be referred to as their personality type. FIG.5illustrates an example500of a personality type generated based on user input206to the personality studio user interface400. As shown in the generated text502, the personality type includes information about the personality and characteristics of the potential digital assistant persona208. Some of the information in the generated personality type may relate to which of the sixteen personality types is indicated by the user input206, while other content may relate to specifics provided by the user, e.g., backstory. The user may customize the generated text502by pressing the edit control504.
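The four independent factors described above combine directly into one of the sixteen Myers and Briggs type codes. A minimal sketch follows; the boolean encoding of each factor is an assumption for illustration, using the conventional one-letter abbreviations.

```python
# Combine the four Myers and Briggs factors into one of the sixteen
# four-letter personality type codes. Each factor is a binary choice:
# E/I (extraversion), N/S (intuition), T/F (thinking), J/P (judging).
def personality_type(extravert, intuitive, thinking, judging):
    return ("E" if extravert else "I") + \
           ("N" if intuitive else "S") + \
           ("T" if thinking else "F") + \
           ("J" if judging else "P")

# Sixteen combinations in total: 2 ** 4
print(personality_type(extravert=False, intuitive=True,
                       thinking=False, judging=True))  # INFJ
```

Enumerating all four boolean choices yields exactly the sixteen types that the personality studio portal204 maps user input onto.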
Once the user is satisfied with the generated text502, the user may select the generate standard vocabulary control506to cause the generation of the standard vocabulary302to be performed by the personality studio portal204. FIG.6illustrates an example600of a standard vocabulary302generated based on the personality type and other user input206. As noted above, the standard vocabulary302includes prompts for various categories such as introductions, acknowledgments, confirmations, error handling, discourse markers, unsuccessful task indications, apologies, empathy, tapering questions, and confirmations. A portion of the introductions of a standard vocabulary302are illustrated in the example600. These elements of the standard vocabulary302as shown may be editable by the user, for instance in edit controls602if the user chooses to tune the prompts at604. It can also be seen that the user may select from a selector control606to view other categories, such as apologies, explicit confirmations, implicit confirmations, greetings, acknowledgements, etc. Once the user is satisfied with the standard vocabulary302, the standard vocabulary302may be exported to a project responsive to selection of the export control608. FIG.7illustrates an example700including an export screen702provided responsive to selection to export the standard vocabulary302. As shown, the export screen702allows the user to select projects704into which the standard vocabulary302and personality information can be exported. If the user elects to continue with the export, the user may select the save control706. Or, if the user wishes to abandon the export, the user may select the cancel control708, reverting back to the user interface of the example600. FIG.8illustrates an example800of auto-generated content304for a skill. As shown, the skill in the example800is a FAQs skill.
The example800accordingly illustrates a set of digital assistant responses802that are autogenerated by the personality studio portal204.FIG.9illustrates an alternate example900of auto-generated content304for a skill. As shown, the skill in the example900is a small talk skill. The example900accordingly illustrates a set of digital assistant responses902that are autogenerated by the personality studio portal204for the small talk skill. FIG.10illustrates an example process1000for the use of the personality studio portal204in the creation of a digital assistant persona208. In an example, the process1000may be performed by the digital assistant configuration server202in communication with a user accessing the personality studio portal204using a mobile device152. Further aspects of these operations are described in detail herein. At operation1002, the digital assistant configuration server202receives digital assistant persona208details. In an example, these details may be entered in the tool using an interface such as the personality studio user interface400. At operation1004, the digital assistant configuration server202generates a personality type. In an example, the personality type is one of the Myers and Briggs personality types, generated based on the persona information provided to the personality studio user interface400. At operation1006, the digital assistant configuration server202auto-generates the standard vocabulary302and prompts for each skill or domain. In an example, this auto-generation of the persona is performed with the help of the personality type. At operation1008, the digital assistant configuration server202exports the standard vocabulary for use in a project. In an example, this information is exported by use of the export screen702. At operation1010, the digital assistant configuration server202autogenerates prompts for a skill based on the question types. Example autogenerated prompts are illustrated in the examples800and900.
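The operations of the process described above can be sketched as a simple pipeline from persona details to autogenerated skill prompts. Every stage function below is a placeholder stub standing in for the corresponding server-side operation; the stage logic, field names, and example outputs are assumptions for illustration, not the digital assistant configuration server's actual implementation.

```python
# Hypothetical sketch of the persona-to-prompts pipeline: persona
# details in, autogenerated skill prompts out. Each function is a stub
# standing in for one operation of the process described above.
def derive_personality_type(persona_details):
    # Stand-in for personality type generation: a trivial lookup keyed
    # on a single (assumed) trait flag.
    return "ENFP" if persona_details.get("friendly") else "ISTJ"

def generate_standard_vocabulary(personality_type):
    # Stand-in for standard vocabulary generation, keyed by category.
    return {"greeting": f"Hi there! ({personality_type})",
            "apology": f"Sorry about that. ({personality_type})"}

def autogenerate_prompts(vocabulary, skill_questions):
    # Stand-in for per-skill prompt autogeneration: pair each question
    # with a vocabulary-consistent reply.
    return {q: vocabulary["greeting"] for q in skill_questions}

details = {"name": "Ava", "friendly": True}
ptype = derive_personality_type(details)
vocab = generate_standard_vocabulary(ptype)
prompts = autogenerate_prompts(vocab, ["How are you?"])
print(prompts)
```

The point of the sketch is the data flow: the personality type is derived once, shapes the standard vocabulary, and the vocabulary in turn constrains every skill's prompts so that all skills speak with the same persona.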
FIG.11illustrates an example1100hierarchical diagram of aspects of the operation of the digital assistant configuration server202. This hierarchy includes four aspects: a personality type recommendation engine1102, a standard vocabulary recommendation engine1104, a brand tone and voice engine1106, and an analytics engine1108. Each of these four recommendation engines1102,1104,1106,1108are utilized together to help derive aspects of the digital assistant persona208described above, including the standard vocabulary302, the content304, and the prompt tune306. To support the operation of these engines1102,1104,1106,1108, the digital assistant configuration server202operates utilizing multiple artificial intelligence algorithms and machine learning techniques/methods. FIG.12illustrates an example1200of an un-supervised machine learning approach. As shown, machine learning1202is initiated, in which un-supervised learning1204operations are performed. The un-supervised learning1204involves the grouping and interpretation of the input data, without regard to any outputs. As indicated, clustering1206may be utilized. Clustering1206refers to a common un-supervised learning technique in which exploratory data analysis is performed to find hidden patterns or groupings in the input data. FIG.13illustrates an example1300application of the un-supervised machine learning approach to the personality type recommendation engine1102. As shown, the personality type recommendation engine1102may receive input data1302. This input data may include the personality type generated based on the user input206to the personality studio portal204. For instance, with reference toFIG.14, an example1400is shown of a lookup table1402being used by the personality studio portal204to find a closest matching personality type for the data input to the personality studio user interface400.
Based on the persona information fed into the personality studio user interface400, the personality studio portal204may do a quick lookup of the personality type table to find the right personality type from the available personality types. For each personality type, the lookup table1402may include detailed information in various categories, such as characteristics of that personality type, communications styles of that personality type, strengths of that personality type, cognitive functions of that personality type, hobbies and interests common to those of that personality type, and famous people sharing that personality type.FIG.15illustrates an example of details of the lookup table1402for a specific personality type. Referring back toFIG.13, the personality type recommendation engine1102may also receive training data indexed according to personality type. This may include, as one example, audio recordings, video recordings, and/or articles of various famous celebrities and/or personalities as well as the corresponding personality type of those famous celebrities and/or personalities. This additional information may be used to supplement a corpus of content for use in generating the standard vocabulary302. FIG.16illustrates an example1600of training data that may be used as the corpus of content. As shown, some example content about a celebrity to be mined may include social media posts of the celebrity, audio/podcasts of the celebrity, interview videos of the celebrity, books about the celebrity or autobiographies by the celebrity, public speeches by the celebrity, or blogs, articles, or other publications authored by the celebrity. Referring back toFIG.13, the personality type recommendation engine1102may also receive data to use for pattern matching of the different personality types.
This pattern matching information may allow the personality type recommendation engine1102to feed the correct information from the training data into the standard vocabulary302information for the personality type. In an example, the pattern matching information may include tagging of the training data for specific personality factors that may be present for the indicated personality type. The personality type recommendation engine1102may utilize this input data1302to generate un-supervised results1304. These un-supervised results1304may include references to the information in the training data that can be used to build the standard vocabulary302. FIG.17illustrates an example of data collection for the training performed by the personality type recommendation engine1102. As shown, using the training data sources shown inFIG.16that conform to each personality type, the personality type recommendation engine1102may collect celebrity speaking style patterns, such as choice of words, tone, pitch, etc. FIG.18illustrates an example of un-supervised learning performed by the personality type recommendation engine1102. As shown, using the data collection illustrated inFIG.17, the personality type recommendation engine1102performs the un-supervised learning via a neural network based on pattern recognition (e.g., clustering, etc.) with respect to commonalities in speaking style patterns, such as choice of words, tone, pitch, etc., across celebrities having a given personality type. This information may be used to augment the prompt tune306information keyed off the digital assistant persona208created by the user with the common information for individuals having that personality type. FIG.19illustrates an example1900of a supervised machine learning approach. As shown, machine learning1902is initiated, in which supervised learning1904operations are performed.
The supervised learning1904involves developing a predictive model based on both the input data and the generated outputs. As indicated, classification1906may be utilized. Classification1906refers to techniques that may be used to predict discrete responses by training a classifier to sort input data into categories. FIG.20illustrates an example2000of an application of the supervised machine learning approach to the brand tone and voice engine1106. As shown, the brand tone and voice engine1106may receive input data2002. This input data2002may include corporate brand guidelines, such as personality traits that a brand owner wishes for a digital assistant persona to possess. The input data2002may further include pre-created corporate dialog for digital assistant personas that embody the personality traits desired by the brand owner. This corporate dialog may be created by brand specialists, but may be taken advantage of not only as base content, but also as data indicative of the personality traits desired by the brand, separate from the words of the content itself. FIG.21illustrates an example2100of the brand tone and voice engine1106utilizing the corporate brand guidelines from a brand as well as digital assistant persona information entered via the personality studio user interface400. The brand tone and voice engine1106may utilize this information to classify the input data2002to aid in classifying new dialog to identify content that is consistent with the corporate brand guidelines. FIG.22illustrates an example2200of an application of the supervised machine learning approach to the standard vocabulary recommendation engine1104. As shown, the standard vocabulary recommendation engine1104may receive inputs2202such as user-input sample dialog that may feed the user's words into the machine learning algorithm.
The input data2202may also include the outputs2004from the brand tone and voice engine1106, which may include dialog that fits with the tone and voice desired by the brand. The input data2202may also include prompt analytics data, which may include prompt edits, positive or negative feedback on prompts based on user testing, or other information that may be used to help gauge the likeability of dialog to a user. The result of this information may be autogenerated prompts2204. These autogenerated prompts2204may be exported into a project, and used as the basis for dialog for a skill, where the dialog conforms with the user's personality type as well as with the brand's desired tone and voice. Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the application is capable of modification and variation. All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as "a," "the," "said," etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The abstract of the disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention. Additionally, the features of various implementing embodiments may be combined to form further embodiments of the invention.
11861480 | DETAILED DESCRIPTION Example embodiments of the invention relate to, among other things, systems, methods, computer-readable media, techniques, and methodologies for determining the orientation of a target object in an image and iteratively reorienting the target object until an orientation of the target object is within an acceptable threshold of a target orientation. Example embodiments of the invention also relate to, among other things, systems, methods, computer-readable media, techniques, and methodologies for verifying that an image contains a target object. In example embodiments, the target object may be an insulator such as an overhead line insulator. Images of insulators may be captured, for example, using aerial surveillance from a drone, helicopter, or the like. Because the images may be captured from any number of angles/perspectives, the insulators in the images may be oriented in multiple different orientations. More specifically, across multiple images of insulators, the insulators may be oriented at any angle with respect to the horizontal or the vertical. Example embodiments of the invention are capable of detecting an orientation of an insulator in an image and generating a rectified image in which the insulator is reoriented to be substantially horizontal or substantially vertical. Reorienting an insulator in this manner makes it easier to identify potential issues with the insulator such as damage to the insulator (e.g., cracks in the insulator). In example embodiments, a set of captured images of insulators having known orientations may be used as training data to train a deep neural network to classify insulator images based on orientation. This set of captured insulator images may be augmented in certain example embodiments, and the augmented set of images may constitute the training data. 
Augmenting the set of captured insulator images may include, for example, generating multiple additional images from any given insulator image, where each additional image includes the insulator rotated to an orientation corresponding to one of multiple possible orientations. Each possible orientation may correspond to a respective classification bin. In example embodiments, the classification bins may be equally spaced. In example embodiments, each successive classification bin may correspond to an orientation of an insulator with respect to a horizontal or a vertical that differs by x degrees from respective orientations corresponding to each of the neighboring bins of the classification bin. For instance, in example embodiments, a first classification bin may correspond to a zero degree orientation representing the horizontal, a second classification bin may correspond to a 10 degree orientation with respect to the horizontal, a third classification bin may correspond to a 20 degree orientation with respect to the horizontal, and so forth. In example embodiments, because insulators are symmetric objects, 18 classification bins may be used, where each classification bin represents a respective multiple of 10 degrees orientation with respect to the horizontal or vertical. In example embodiments, additional classification bins corresponding to a 180 degree orientation, a 190 degree orientation, a 200 degree orientation, and so forth may not be required because these orientations may be indistinguishable from a 0 degree orientation, a 10 degree orientation, a 20 degree orientation, and so forth, respectively, due to the symmetric nature of the insulators. After training of the deep neural network using the augmented set of images, an image of an insulator having an unknown orientation may be provided as input to the trained deep neural network. 
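The binning scheme described above can be sketched in a few lines. This is an illustrative sketch rather than code from the disclosure; the function names are invented, and the 10 degree spacing and 18-bin count follow the symmetric-insulator example in the text:

```python
# Map an arbitrary orientation (degrees from horizontal) to one of the
# 18 classification bins described above. Because the insulator is
# assumed symmetric, orientations are folded modulo 180 degrees, so
# 180, 190, 200, ... collapse onto 0, 10, 20, ... respectively.

BIN_SPACING_DEG = 10                # spacing between successive bins
NUM_BINS = 180 // BIN_SPACING_DEG   # 18 bins for a symmetric object

def orientation_to_bin(angle_deg: float) -> int:
    """Return the index of the classification bin nearest to angle_deg."""
    folded = angle_deg % 180.0      # exploit the 180-degree symmetry
    return round(folded / BIN_SPACING_DEG) % NUM_BINS

def bin_to_orientation(bin_index: int) -> float:
    """Return the representative orientation for a bin (0, 10, ..., 170)."""
    return (bin_index % NUM_BINS) * BIN_SPACING_DEG
```

Folding modulo 180 degrees is what makes the 180, 190, and 200 degree bins unnecessary: they land on the same indices as 0, 10, and 20 degrees.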
The insulator image may be a segmented image in which an insulator previously detected to have been present in the image is represented by, for example, a bounding box indicative of a location of the detected insulator in the image. In certain example embodiments, an image may include multiple insulators, in which case, the corresponding segmented image may include multiple bounding boxes indicative of the detected positions of the multiple insulators in the image. In example embodiments, the deep neural network may determine an initial orientation prediction for an insulator in the segmented image. More specifically, the deep neural network may generate a classification probability distribution indicative of a respective predicted likelihood for each of the classification bins that the orientation of the insulator falls within that classification bin. In example embodiments, a classification bin that receives the highest classification score (e.g., the largest probability) may be indicative of the initial predicted orientation of the insulator. In example embodiments, the initial orientation prediction may be compared to a desired target orientation to determine how the difference between the two compares to a threshold value. In particular, in example embodiments, if the difference between the initial orientation prediction and the desired target orientation exceeds a threshold allowable deviation, an aligned image may be generated by aligning the segmented image to the target orientation based at least in part on the initial orientation prediction. 
For instance, if i) the initial prediction is that the insulator is oriented at 20 degrees from the horizontal (e.g., the classification bin corresponding to 20 degrees received the highest classification score), ii) the target orientation is 0 degrees (representing the horizontal), and iii) the threshold allowable deviation is 5 degrees, the aligned image may be generated by rotating the insulator in the segmented image (or more specifically the bounding box representative of the insulator) by 20 degrees. In certain example embodiments, the angle by which the insulator is rotated may be more or less than the difference between an orientation prediction and a target orientation depending on classification scores associated with classification bins that neighbor the classification bin corresponding to the predicted orientation. These example embodiments will be described in more detail later in this disclosure in reference to the illustrative method 300 of FIG. 3. In example embodiments, the aligned image may be provided as input to the deep neural network, which may then generate a refined orientation prediction based on the aligned image. In example embodiments, the refined orientation prediction may result in a new classification bin receiving the highest classification score. The refined orientation prediction may then be compared to the target orientation, and the process described earlier may continue iteratively until an orientation prediction is obtained that is within the threshold allowable deviation from the target orientation, in which case, the aligned image corresponding to such an orientation prediction may be output as a rectified image in which the insulator is substantially oriented in the target orientation. In example embodiments, defects or damage to an insulator may be more easily identified from the rectified image in which the insulator is substantially oriented in the target orientation than from the original image.
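The threshold test that decides whether another alignment pass is needed can be expressed as a small predicate. The wrap-around handling (so that, e.g., 177 degrees counts as only 3 degrees from the horizontal) is an assumption consistent with the symmetric 18-bin scheme, and the function name is hypothetical:

```python
def needs_realignment(predicted_deg: float, target_deg: float,
                      threshold_deg: float = 5.0) -> bool:
    """True if the predicted orientation deviates from the target by
    more than the allowable threshold, accounting for the assumed
    180-degree symmetry of the insulator."""
    diff = abs(predicted_deg - target_deg) % 180.0
    diff = min(diff, 180.0 - diff)   # e.g. 170 vs 0 is only 10 degrees apart
    return diff > threshold_deg
```

With the example values from the text (prediction 20 degrees, target 0 degrees, threshold 5 degrees), the predicate is true, so an aligned image would be generated.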
In certain example embodiments, the trained deep neural network may be used to verify the presence of an insulator in the segmented image in addition to performing orientation detection and correction. In particular, a deep neural network trained to perform orientation classification may be used in conjunction with one or more additional layers that receive the classification output of the deep neural network and learn to detect the presence or absence of an insulator in a segmented image using ground-truth training data that includes the training data used to train the deep neural network as well as images known to not contain any insulators. In this manner, the deep neural network may be trained to output orientation prediction and insulator verification together in a single forward pass. While example embodiments may be described herein in connection with orientation prediction and correction for images of insulators, it should be appreciated that the object whose orientation is being predicted and corrected can be any suitable target object. Further, while in example embodiments, an insulator is assumed to be symmetric and the number of classification bins depends on this assumed symmetry, in other example embodiments, the target object may be asymmetric and any suitable number of classification bins may be used. In addition, the term deep neural network is not intended to be limiting with respect to the type of neural network or machine learning technique that may be used to perform the multi-stage classification described herein. Illustrative methods in accordance with example embodiments of the invention will now be described. It should be noted that any given operation of any of the methods 200-400 may be performed by one or more of the program modules or the like depicted in FIG. 1 and/or in FIG. 5, whose operation will be described in more detail later in this disclosure.
These program modules may be implemented in any combination of hardware, software, and/or firmware. In certain example embodiments, one or more of these program modules may be implemented, at least in part, as software and/or firmware modules that include computer-executable instructions that when executed by a processing circuit cause one or more operations to be performed. A system or device described herein as being configured to implement example embodiments may include one or more processing circuits, each of which may include one or more processing units or nodes. Computer-executable instructions may include computer-executable program code that when executed by a processing unit may cause input data contained in or referenced by the computer-executable program code to be accessed and processed to yield output data. FIG. 1 is a schematic hybrid block/data flow diagram illustrating orientation detection and correction of a target object in a segmented image in accordance with example embodiments. FIG. 2 is a process flow diagram of an illustrative method 200 for orientation detection and correction of a target object in a segmented image in accordance with example embodiments. FIG. 3 is a process flow diagram of an illustrative method 300 for generating an aligned image from the segmented image to refine an orientation prediction in accordance with example embodiments. Each of FIGS. 2 and 3 will be described in conjunction with FIG. 1 hereinafter. Referring now to FIG. 2 in conjunction with FIG. 1, at block 202 of the method 200, each image in a set of segmented images 104 may be annotated with a respective ground-truth orientation of a target object in the image. In example embodiments, the target object may be an overhead line insulator. More specifically, in example embodiments, each image in the set of segmented images 104 may be labeled with a known orientation of a target object (or multiple known orientations of multiple target objects) in the image.
The annotated segmented images 104 may serve as at least a portion of training data for training a deep neural network 108 during a training phase 102A. At block 204 of the method 200, the annotated segmented images 104 may be augmented to yield a set of augmented images 106. The set of augmented images 106 may be an expanded set of images that includes the annotated segmented images 104 as well as additional images generated from each of the segmented images 104. More specifically, a given segmented image 104 may be augmented by rotating the target object in the segmented image 104 from its known orientation to each of multiple different orientations corresponding to different classification bins of the deep neural network 108. In example embodiments, each orientation of the target object in an augmented image 106 may correspond to a respective classification bin. In example embodiments, the classification bins may be equally spaced. For example, each successive classification bin may correspond to an orientation of the target object with respect to a horizontal or a vertical that differs by x degrees from respective orientations corresponding to neighboring bins of the classification bin. As a non-limiting example, a first classification bin may correspond to a zero degree orientation representing the horizontal, a second classification bin may correspond to a 10 degree orientation with respect to the horizontal, a third classification bin may correspond to a 20 degree orientation with respect to the horizontal, and so forth. In this example, the set of augmented images 106 for a given segmented image 104 may include an augmented image in which the target object in the segmented image 104 is rotated to the 0 degree orientation, an augmented image in which the target object is rotated to the 10 degree orientation, an augmented image in which the target object is rotated to the 20 degree orientation, and so forth.
It should be appreciated that the target object in a segmented image 104 may be at any orientation, and in particular, at an orientation that does not correspond to one of the classification bins (e.g., a 12 degree orientation). Notwithstanding this, the target object in a segmented image 104 may be rotated to respective orientations corresponding to the classification bins to generate the set of augmented images 106 for that segmented image 104. In example embodiments, if the target object is symmetric (e.g., an insulator), 18 classification bins may be used, where each classification bin represents a respective multiple of 10 degrees orientation with respect to the horizontal or vertical. In such example embodiments, additional classification bins corresponding to a 180 degree orientation, a 190 degree orientation, a 200 degree orientation, and so forth may not be required because these orientations may be indistinguishable from a 0 degree orientation, a 10 degree orientation, a 20 degree orientation, and so forth, respectively. At block 206 of the method 200, the deep neural network 108 may be trained using the set of augmented images 106 during the training phase 102A. Specifically, the deep neural network 108 may be trained to perform orientation classification using the set of augmented images 106. As previously noted, the deep neural network 108 may be any suitable type of neural network (e.g., a convolutional neural network) or other machine learning technique/construct. After training of the deep neural network 108 using the augmented set of images 106, a trained deep neural network 112 may be obtained. Then, as part of a testing phase 102B of the trained deep neural network 112, a segmented image 110 of a target object having an unknown orientation may be provided as input to the trained deep neural network 112 at block 208 of the method 200.
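The augmentation step at block 204 can be summarized by computing, for each annotated image, the rotations that place the target object at each bin orientation. The pixel-level rotation itself (which would use an image library) is deliberately omitted, and the names and structure below are illustrative:

```python
def augmentation_plan(known_orientation_deg: float):
    """For one annotated image, compute the rotation needed to place
    the target object at each of the 18 bin orientations, yielding
    (rotation_deg, bin_label) pairs. Rotations are taken modulo 180
    degrees under the symmetric-object assumption."""
    plan = []
    for label in range(18):
        bin_orientation = label * 10
        rotation = (bin_orientation - known_orientation_deg) % 180.0
        plan.append((rotation, label))
    return plan
```

For an image annotated at 30 degrees, the plan contains 18 entries; the entry for bin 3 (the 30 degree bin) requires no rotation at all, while the entry for bin 0 requires a 150 degree rotation (equivalently, 30 degrees in the opposite direction under the symmetry assumption).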
The segmented image 110 may be an image in which a target object previously detected to have been present in the image is represented by, for example, a bounding box indicative of a location of the detected target object in the image. In certain example embodiments, the segmented image 110 may include multiple bounding boxes or the like representing the positions of multiple target objects detected in the original image. At block 210 of the method 200, computer-executable instructions of one or more orientation prediction modules 114 of the deep neural network 112 may be executed to determine an initial orientation prediction for a target object in the segmented image 110. More specifically, the deep neural network 112 may generate a classification probability distribution indicative of a respective predicted likelihood for each of the classification bins that the orientation of the target object in the segmented image 110 falls within that classification bin. In example embodiments, a classification bin that receives the highest classification score (e.g., the largest probability) may be indicative of the initial predicted orientation of the target object. In example embodiments, the initial orientation prediction may be compared to a desired target orientation to determine how the difference between the two compares to a threshold value. In particular, in example embodiments, if the difference between the initial orientation prediction and the desired target orientation exceeds a threshold allowable deviation, computer-executable instructions of one or more orientation correction modules 116 may be executed at block 212 of the method 200 to generate an aligned image 118 from the segmented image 110 (the illustrative method 200 assumes that the initial orientation prediction deviates from the target orientation by more than the threshold allowable deviation).
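Selecting the initial orientation prediction from the network's output distribution is a simple argmax over the per-bin probabilities; a hypothetical helper might look like:

```python
def predict_orientation(class_probs):
    """Given a per-bin probability distribution (length 18, one entry
    per 10-degree bin), return (predicted_orientation_deg, bin_index,
    probability) for the highest-scoring bin."""
    bin_index = max(range(len(class_probs)), key=class_probs.__getitem__)
    return bin_index * 10, bin_index, class_probs[bin_index]
```

If bin 2 carries the largest probability, the initial orientation prediction is 20 degrees from the horizontal.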
In example embodiments, the aligned image 118 may be generated by aligning the segmented image 110 to the target orientation based at least in part on the initial orientation prediction. As a non-limiting example, if i) the initial prediction is that the target object is oriented at 10 degrees from the horizontal (e.g., the classification bin corresponding to 10 degrees received the highest classification score), ii) the target orientation is 0 degrees (representing the horizontal), and iii) the threshold allowable deviation is 5 degrees, the aligned image 118 may be generated by rotating the target object in the segmented image 110 (or more specifically the bounding box representative of the target object) by 10 degrees. In certain example embodiments, the angle by which the target object is rotated may be more or less than the difference between the initial orientation prediction and the target orientation depending on classification scores associated with classification bins that neighbor the classification bin corresponding to the predicted orientation. FIG. 3 is a process flow diagram that depicts an illustrative method 300 for utilizing classification scores of neighboring classification bins to determine the angle of rotation of the target object to generate the aligned image 118. The method 300 may be performed in connection with generating the aligned image 118 from the segmented image 110 or in connection with generating an updated aligned image from an aligned image of a previous iteration of the method 200. In example embodiments, operations of the method 300 may be performed responsive to execution of computer-executable instructions of the orientation correction module(s) 116. Referring now to FIG. 3, at block 302 of the method 300, a classification bin having a highest classification score in connection with an orientation prediction (e.g., the initial orientation prediction) may be determined.
As a non-limiting example, if the neural network 112 predicts that a target object in the segmented image 110 is oriented at 10 degrees from the horizontal, the classification bin having the highest classification score would be the classification bin that corresponds to 10 degrees. At block 304 of the method 300, neighboring bins of the classification bin having the highest classification score may be determined. Referring again to the non-limiting example from above, the neighboring bins may be the classification bin corresponding to 0 degrees and the classification bin corresponding to 20 degrees. At block 306 of the method 300, a difference between the orientation prediction and a target orientation of the target object may be determined. Referring again to the non-limiting example from above, if the initial orientation prediction for the segmented image 110 is 10 degrees and the target orientation is 0 degrees, the difference therebetween would be 10 degrees. At block 308 of the method 300, a particular neighboring bin having a highest classification score among the neighboring bins may be determined. Referring again to the non-limiting example from above, the neighboring bin (i.e., either the 0 degrees bin or the 20 degrees bin) having the higher classification score may be determined. More specifically, in example embodiments, the classification bin corresponding to the orientation prediction may have the highest overall classification score among all classification bins, while one of the neighboring bins may have the second highest overall classification score among all classification bins and a larger classification score than the other neighboring bin. At block 310 of the method 300, a determination may be made as to whether an orientation corresponding to the particular neighboring bin with the higher classification score between the two neighboring bins is closer to the target orientation than an orientation corresponding to the other neighboring bin.
In response to a positive determination at block 310, the method 300 may proceed to block 312 where the aligned image 118 may be generated by rotating the target object in the segmented image 110 by a rotation angle that is less than the difference between the orientation prediction and the target orientation. On the other hand, in response to a negative determination at block 310, the method 300 may proceed to block 314 where the aligned image 118 may be generated by rotating the target object in the segmented image 110 by a rotation angle that is greater than the difference between the orientation prediction and the target orientation. Referring again to the non-limiting example from above, assuming that the orientation prediction corresponds to the 10 degrees classification bin and the 0 degrees neighboring classification bin has a higher classification score than the 20 degrees neighboring classification bin, then the rotation angle would be less than the difference between the orientation prediction and the target orientation (i.e., 10 degrees - 0 degrees = 10 degrees). For instance, as a non-limiting example, the rotation angle may be 8 degrees. The rotation angle is reduced from the 10 degrees difference between the orientation prediction and the target orientation because the deep neural network 112 has assigned a higher classification probability to the 0 degrees neighboring bin than the 20 degrees neighboring bin, and thus, has effectively predicted that the actual orientation of the target object in the segmented image 110 is more likely to be closer to the target orientation than what is indicated by the predicted orientation alone.
On the other hand, if we assume that the 20 degrees neighboring classification bin has a higher classification score than the 0 degrees neighboring classification bin, the rotation angle may be increased from the 10 degrees difference between the orientation prediction and the target orientation because the deep neural network 112 has effectively predicted that the actual orientation of the target object in the segmented image 110 is more likely to be farther away from the target orientation than what is indicated by the predicted orientation alone. In this example scenario, the rotation angle may be greater than the difference between the orientation prediction and the target orientation (e.g., 10 degrees). For example, the rotation angle may be 12 degrees. In certain example embodiments, regardless of whether the rotation angle is increased to be above the difference between the orientation prediction and the target orientation or decreased to be below the difference between the orientation prediction and the target orientation, the amount of the increase or decrease may be less than half the difference between successive classification bins (assuming that the classification bins are equally spaced). Referring again to the non-limiting example from above, the rotation angle may be increased or decreased by less than 5 degrees, or in other words, less than half of the degree interval between successive classification bins (e.g., 10 degrees). This may be the case because the classification scores of the neighboring bins, while being greater than the classification scores of other classification bins, may generally be less than the classification score of the classification bin corresponding to the orientation prediction. In those example embodiments in which the classification scores of two successive classification bins are equal or substantially equal, the rotation angle may be increased or decreased by an amount that is half the difference between the classification bins.
Further, example embodiments in which the orientation prediction corresponds to a classification bin representing the target orientation may constitute a special case. For instance, if the orientation prediction corresponds to the 0 degrees classification bin, which is also the target orientation, then the neighboring bin with the higher classification score (e.g., 170 degrees or 10 degrees) may determine the direction of rotation (e.g., clockwise or counterclockwise) of the target object rather than the amount by which the rotation angle is modified. For example, if the 170 degrees neighboring bin has a higher classification score than the 10 degrees bin, then the target object may be rotated in a first direction (e.g., counterclockwise), whereas if the 10 degrees bin has a higher classification score than the 170 degrees bin, then the target object may be rotated in a second different direction (e.g., clockwise). It should be appreciated that the above example embodiments that utilize the classification scores of neighboring classification bins to determine rotation angles and/or rotation direction are merely illustrative and not exhaustive. Referring again to FIG. 2, at block 214 of the method 200, the aligned image 118 may be provided as input to the deep neural network 112. Then, at block 216 of the method 200, computer-executable instructions of the orientation prediction module(s) 114 may be executed to generate a refined orientation prediction based on the aligned image 118. In example embodiments, the refined orientation prediction may result in a new classification bin receiving the highest classification score. The new classification bin may correspond to an orientation that is closer to the target orientation than the initial orientation prediction. For example, if the initial orientation prediction corresponds to the 20 degrees classification bin, the refined orientation prediction may correspond to the 10 degrees classification bin.
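The neighbor-weighted refinement of method 300, including the special case at the target bin, can be sketched as follows. The fixed 2 degree adjustment is an invented value (the text only requires it to be less than half the bin spacing), and the equal-score case described above is omitted for brevity:

```python
def neighbor_adjusted_rotation(class_probs, target_deg=0.0,
                               spacing=10.0, adjust=2.0):
    """Return a signed rotation (degrees) for the aligned image, refined
    by the scores of the bins neighboring the predicted bin."""
    n = len(class_probs)
    best = max(range(n), key=class_probs.__getitem__)
    pred_deg = best * spacing
    left, right = (best - 1) % n, (best + 1) % n   # neighbors wrap mod 180
    stronger = left if class_probs[left] > class_probs[right] else right
    weaker = right if stronger == left else left

    diff = pred_deg - target_deg
    if diff == 0:
        # Special case: prediction already at the target bin; the stronger
        # neighbor selects only the direction of a small corrective rotation.
        return -adjust if stronger == left else adjust

    def dist(b):
        # angular distance of bin b's orientation from the target, mod 180
        d = abs(b * spacing - target_deg) % 180.0
        return min(d, 180.0 - d)

    if dist(stronger) < dist(weaker):
        return diff - adjust if diff > 0 else diff + adjust  # rotate less
    return diff + adjust if diff > 0 else diff - adjust      # rotate more
```

With the prediction at the 10 degree bin and the 0 degree neighbor stronger, this returns 8 degrees; with the 20 degree neighbor stronger, it returns 12 degrees, matching the worked example in the text.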
At block 218 of the method 200, the refined orientation prediction may be compared to the target orientation to determine whether the refined orientation prediction is within a threshold value, such as a threshold allowable deviation, from the target orientation. In response to a positive determination at block 218, the aligned image 118 may be output as a rectified image 120 in which the target object is substantially oriented in the target orientation. On the other hand, in response to a negative determination at block 218, the method may proceed iteratively from block 212, where a new aligned image is generated from the aligned image 118 and a new refined orientation prediction associated with the new aligned image is compared to the target orientation to determine if the difference therebetween is within the threshold allowable deviation. The method 200 may proceed iteratively in this fashion through as many iterations as may be needed to obtain convergence, or in other words, an aligned image having a refined orientation prediction that is within the threshold allowable deviation from the target orientation, in which case, the aligned image for which convergence is obtained is output as the rectified image 120. As previously noted, in certain example embodiments, the trained deep neural network 112 may be used to verify the presence of a target object in the segmented image 110 in addition to performing orientation detection and correction. FIG. 4 is a process flow diagram of an illustrative method 400 for verifying that a segmented image includes a target object in accordance with example embodiments. Operations of the method 400 may be performed responsive to execution of computer-executable instructions of one or more target object verification modules 524 (depicted in FIG. 5). At block 402 of the method 400, one or more additional layers may be added to the trained deep neural network 112.
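The iterative predict-rotate loop of method 200 can be sketched with a stand-in predictor in place of the trained network; `predict_fn`, the degree bookkeeping, and the iteration cap below are illustrative assumptions, not the disclosed implementation:

```python
def rectify(true_orientation_deg, predict_fn, target_deg=0.0,
            threshold_deg=5.0, max_iters=10):
    """Iteratively predict the orientation of a (simulated) image and
    rotate it toward the target until the prediction is within the
    allowable deviation. Returns the total rotation applied, i.e. the
    rotation that produces the rectified image."""
    total_rotation = 0.0
    for _ in range(max_iters):
        current = (true_orientation_deg - total_rotation) % 180.0
        predicted = predict_fn(current)             # stand-in for the network
        dev = abs(predicted - target_deg) % 180.0
        dev = min(dev, 180.0 - dev)
        if dev <= threshold_deg:
            return total_rotation                   # converged
        total_rotation += predicted - target_deg    # refine the alignment
    return total_rotation
```

With a predictor that quantizes to the nearest 10 degree bin, an object truly at 23 degrees is rotated by 20 degrees on the first pass, after which the residual 3 degrees falls within the 5 degree threshold and the loop converges.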
In particular, the neural network 112 that is trained as an orientation classifier can be extended to support target object verification by reusing, for example, the last and second-to-last fully-connected layers of the neural network 112 and adding the additional layers at block 402. At block 404 of the method 400, a set of images used to train the neural network 112 as well as a set of images known to not contain target objects may be provided as ground-truth training data for training the one or more additional layers. During training of the one or more additional layers, the functional layers of the neural network 112 trained for target object orientation detection may be fixed such that only the additional layer(s) are learning. At block 406 of the method 400, a segmented image (e.g., the segmented image 110) may be received as input to the trained neural network 112. At block 408 of the method 400, an output of at least a last layer (e.g., a last fully-connected layer) of the neural network 112 may be received as input at the additional layer(s). At block 410 of the method 400, a determination may be made as to whether the segmented image includes the target object based at least in part on the output of the additional layer(s). In this manner, the deep neural network 112 may be trained to output orientation prediction and target object verification together in a single forward pass. Example embodiments described herein provide a number of technical effects and technical benefits over conventional solutions. In particular, example embodiments define a new data structure, specifically a new type of neural network implementation that is capable of performing orientation detection and correction of a target object in an image to generate a rectified image in which the target object is oriented at a desired target orientation.
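The single-forward-pass idea of method 400 (frozen orientation layers feeding an added verification layer) can be illustrated with toy linear layers. The sizes, weights, and function name below are invented stand-ins for the real fully-connected layers, not the disclosed architecture:

```python
import math

def forward_with_verification(features, class_weights,
                              verify_weights, verify_bias):
    """One forward pass producing both outputs: the frozen orientation
    head yields per-bin probabilities, and an added logistic layer
    consumes that output to decide whether a target object is present."""
    # Frozen orientation head: linear scores + softmax over the bins.
    scores = [sum(w * f for w, f in zip(row, features))
              for row in class_weights]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    probs = [e / sum(exps) for e in exps]
    # Added verification layer: a logistic unit over the orientation output.
    z = sum(w * p for w, p in zip(verify_weights, probs)) + verify_bias
    present = 1.0 / (1.0 + math.exp(-z))
    best = max(range(len(probs)), key=probs.__getitem__)
    return best * 10, present   # (predicted orientation, presence prob.)
```

During training of such an extension, only the verification weights would be updated while the orientation layers stay fixed, mirroring the freezing described at block 404.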
The rectified image enables more efficient analysis of the target object such as an assessment of any failure conditions that may be present with the target object. Thus, example embodiments that utilize a trained orientation classifier to generate a rectified image yield a technical effect over conventional solutions that are not capable of producing such a rectified image through machine learning techniques. Thus, a trained neural network for orientation classification in accordance with example embodiments constitutes an improvement to neural network computer-based technology. One or more illustrative embodiments of the disclosure are described herein. Such embodiments are merely illustrative of the scope of this disclosure and are not intended to be limiting in any way. Accordingly, variations, modifications, and equivalents of embodiments disclosed herein are also within the scope of this disclosure. FIG. 5 is a schematic diagram of an illustrative computing configuration for implementing one or more example embodiments of the invention. In particular, FIG. 5 depicts one or more orientation classification and verification servers 502 configured to implement one or more example embodiments. While the orientation classification and verification server(s) 502 may be described herein in the singular, it should be appreciated that multiple servers 502 may be provided, and functionality described herein may be distributed across multiple such servers 502.
In an illustrative configuration, the orientation classification and verification server 502 may include one or more processors (processor(s)) 504, one or more memory devices 506 (generically referred to herein as memory 506), one or more input/output ("I/O") interface(s) 508, one or more network interfaces 510, and data storage 514. The orientation classification and verification server 502 may further include one or more buses 512 that functionally couple various components of the orientation classification and verification server 502. The bus(es) 512 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit the exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the orientation classification and verification server 502. The bus(es) 512 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The bus(es) 512 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth. The memory 506 may include volatile memory (memory that maintains its state when supplied with power) such as random access memory (RAM) and/or non-volatile memory (memory that maintains its state even when not supplied with power) such as read-only memory (ROM), flash memory, ferroelectric RAM (FRAM), and so forth. Persistent data storage, as that term is used herein, may include non-volatile memory.
In certain example embodiments, volatile memory may enable faster read/write access than non-volatile memory. However, in certain other example embodiments, certain types of non-volatile memory (e.g., FRAM) may enable faster read/write access than certain types of volatile memory. In various implementations, the memory 506 may include multiple different types of memory such as various types of static random access memory (SRAM), various types of dynamic random access memory (DRAM), various types of unalterable ROM, and/or writeable variants of ROM such as electrically erasable programmable read-only memory (EEPROM), flash memory, and so forth. The memory 506 may include main memory as well as various forms of cache memory such as instruction cache(s), data cache(s), translation lookaside buffer(s) (TLBs), and so forth. Further, cache memory such as a data cache may be a multi-level cache organized as a hierarchy of one or more cache levels (L1, L2, etc.). The data storage 514 may include removable storage and/or non-removable storage including, but not limited to, magnetic storage, optical disk storage, and/or tape storage. The data storage 514 may provide non-volatile storage of computer-executable instructions and other data. The memory 506 and the data storage 514, removable and/or non-removable, are examples of computer-readable storage media (CRSM) as that term is used herein. The data storage 514 may store computer-executable code, instructions, or the like that may be loadable into the memory 506 and executable by the processor(s) 504 to cause the processor(s) 504 to perform or initiate various operations. The data storage 514 may additionally store data that may be copied to memory 506 for use by the processor(s) 504 during the execution of the computer-executable instructions.
Moreover, output data generated as a result of execution of the computer-executable instructions by the processor(s)504may be stored initially in memory506and may ultimately be copied to data storage514for non-volatile storage. More specifically, the data storage514may store one or more operating systems (O/S)516; one or more database management systems (DBMS)518configured to access the memory506and/or one or more datastores526; and one or more program modules, applications, engines, managers, computer-executable code, scripts, or the like such as, for example, one or more orientation prediction modules520, one or more orientation correction modules522, and one or more target object verification modules524. Any of the components depicted as being stored in data storage514may include any combination of software, firmware, and/or hardware. The software and/or firmware may include computer-executable instructions (e.g., computer-executable program code) that may be loaded into the memory506for execution by one or more of the processor(s)504to perform any of the operations described earlier. Although not depicted inFIG.5, the data storage514may further store various types of data utilized by components of the orientation classification and verification server502(e.g., data stored in the datastore(s)526). Any data stored in the data storage514may be loaded into the memory506for use by the processor(s)504in executing computer-executable instructions. In addition, any data stored in the data storage514may potentially be stored in the external datastore(s)526and may be accessed via the DBMS518and loaded into the memory506for use by the processor(s)504in executing computer-executable instructions. The processor(s)504may be configured to access the memory506and execute computer-executable instructions loaded therein.
For example, the processor(s)504may be configured to execute computer-executable instructions of the various program modules, applications, engines, managers, or the like of the orientation classification and verification server502to cause or facilitate various operations to be performed in accordance with one or more embodiments of the disclosure. The processor(s)504may include any suitable processing unit capable of accepting data as input, processing the input data in accordance with stored computer-executable instructions, and generating output data. The processor(s)504may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s)504may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The microarchitecture design of the processor(s)504may be capable of supporting any of a variety of instruction sets. Referring now to other illustrative components depicted as being stored in the data storage514, the O/S516may be loaded from the data storage514into the memory506and may provide an interface between other application software executing on the orientation classification and verification server502and hardware resources of the orientation classification and verification server502. 
More specifically, the O/S516may include a set of computer-executable instructions for managing hardware resources of the orientation classification and verification server502and for providing common services to other application programs. In certain example embodiments, the O/S516may include or otherwise control the execution of one or more of the program modules, engines, managers, or the like depicted as being stored in the data storage514. The O/S516may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system. The DBMS518may be loaded into the memory506and may support functionality for accessing, retrieving, storing, and/or manipulating data stored in the memory506, data stored in the data storage514, and/or data stored in external datastore(s)526. The DBMS518may use any of a variety of database models (e.g., relational model, object model, etc.) and may support any of a variety of query languages. The DBMS518may access data represented in one or more data schemas and stored in any suitable data repository. As such, data stored in the datastore(s)526may include, for example, training images528, rectified images530, and intermediate data532generated, for example, by a neural network disclosed herein. External datastore(s)526that may be accessible by the orientation classification and verification server502via the DBMS518may include, but are not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed datastores in which data is stored on more than one node of a computer network, peer-to-peer network datastores, or the like. 
Referring now to other illustrative components of the orientation classification and verification server502, the input/output (I/O) interface(s)508may facilitate the receipt of input information by the orientation classification and verification server502from one or more I/O devices as well as the output of information from the orientation classification and verification server502to the one or more I/O devices. The I/O devices may include any of a variety of components such as a display or display screen having a touch surface or touchscreen; an audio output device for producing sound, such as a speaker; an audio capture device, such as a microphone; an image and/or video capture device, such as a camera; a haptic unit; and so forth. Any of these components may be integrated into the orientation classification and verification server502or may be separate. The I/O devices may further include, for example, any number of peripheral devices such as data storage devices, printing devices, and so forth. The I/O interface(s)508may also include an interface for an external peripheral device connection such as universal serial bus (USB), FireWire, Thunderbolt, Ethernet port or other connection protocol that may connect to one or more networks. The I/O interface(s)508may also include a connection to one or more antennas to connect to one or more networks via a wireless local area network (WLAN) (such as Wi-Fi) radio, Bluetooth, and/or a wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, etc. The orientation classification and verification server502may further include one or more network interfaces510via which the orientation classification and verification server502may communicate with one or more other devices or systems via one or more networks.
Such network(s) may include, but are not limited to, any one or more different types of communications networks such as, for example, cable networks, public networks (e.g., the Internet), private networks (e.g., frame-relay networks), wireless networks, cellular networks, telephone networks (e.g., a public switched telephone network), or any other suitable private or public packet-switched or circuit-switched networks. Such network(s) may have any suitable communication range associated therewith and may include, for example, global networks (e.g., the Internet), metropolitan area networks (MANs), wide area networks (WANs), local area networks (LANs), or personal area networks (PANs). In addition, such network(s) may include communication links and associated networking devices (e.g., link-layer switches, routers, etc.) for transmitting network traffic over any suitable type of medium including, but not limited to, coaxial cable, twisted-pair wire (e.g., twisted-pair copper wire), optical fiber, a hybrid fiber-coaxial (HFC) medium, a microwave medium, a radio frequency communication medium, a satellite communication medium, or any combination thereof. It should be appreciated that the program modules/engines depicted inFIG.5as being stored in the data storage514are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules, engines, or the like, or performed by a different module, engine, or the like. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the orientation classification and verification server502and/or other computing devices accessible via one or more networks, may be provided to support functionality provided by the modules depicted inFIG.5and/or additional or alternate functionality. 
Further, functionality may be modularized in any suitable manner such that processing described as being performed by a particular module may be performed by a collection of any number of program modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module. In addition, program modules that support the functionality described herein may be executable across any number of cluster members in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. In addition, any of the functionality described as being supported by any of the modules depicted inFIG.5may be implemented, at least partially, in hardware and/or firmware across any number of devices. It should further be appreciated that the orientation classification and verification server502may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the orientation classification and verification server502are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative modules have been depicted and described as software modules stored in data storage514, it should be appreciated that functionality described as being supported by the modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality. 
This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional program modules and/or engines not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. One or more operations of any of the methods200-400may be performed by an orientation classification and verification server502having the illustrative configuration depicted inFIG.5, or more specifically, by one or more program modules, engines, applications, or the like executable on such a device. It should be appreciated, however, that such operations may be implemented in connection with numerous other device configurations. The operations described and depicted in the illustrative methods ofFIGS.2-4may be carried out or performed in any suitable order as desired in various example embodiments of the disclosure. Additionally, in certain example embodiments, at least a portion of the operations may be carried out in parallel. Furthermore, in certain example embodiments, fewer, more, or different operations than those depicted inFIGS.2-4may be performed. Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular system, system component, device, or device component may be performed by any other system, device, or component.
Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. In addition, it should be appreciated that any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like may be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.” The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. 
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. 
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. 
In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. | 56,369 |
11861481

Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

This specification describes how a training system can use embeddings to effectively search a sensor data repository. FIG.1is a diagram of an example system100. The system100includes a training system110and an on-board system120. The on-board system120is physically located on-board a vehicle122. The vehicle122inFIG.1is illustrated as an automobile, but the on-board system120can be located on-board any appropriate vehicle type. The vehicle122can be a fully autonomous vehicle that determines and executes fully-autonomous driving decisions in order to navigate through an environment. The vehicle122can also be a semi-autonomous vehicle that uses predictions to aid a human driver. For example, the vehicle122can autonomously apply the brakes if a prediction indicates that a human driver is about to collide with another vehicle. The on-board system120includes one or more sensor subsystems132. The sensor subsystems132include a combination of components that receive reflections of electromagnetic radiation, e.g., lidar systems that detect reflections of laser light, radar systems that detect reflections of radio waves, and camera systems that detect reflections of visible light. The sensor data generated by a given sensor generally indicates a distance, a direction, and an intensity of reflected radiation. For example, a sensor can transmit one or more pulses of electromagnetic radiation in a particular direction and can measure the intensity of any reflections as well as the time that the reflection was received. A distance can be computed by determining how long it took between a pulse and its corresponding reflection. The sensor can continually sweep a particular space in angle, azimuth, or both. Sweeping in azimuth, for example, can allow a sensor to detect multiple objects along the same line of sight.
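The time-of-flight distance computation described above can be sketched in a few lines; the constant and function name below are illustrative, not taken from the patent:

```python
# Speed of light in vacuum, in meters per second.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def range_from_time_of_flight(round_trip_time_s: float) -> float:
    """Distance to a reflecting surface from a pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half the
    total distance covered at the speed of light.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A reflection received 1 microsecond after the pulse was emitted
# corresponds to a surface roughly 150 m away.
print(round(range_from_time_of_flight(1e-6), 1))
```

In practice the medium's refractive index and sensor latencies would also enter the calculation; this sketch keeps only the core relation.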
The sensor subsystems132or other components of the vehicle122can also combine groups of one or more raw sensor measurements from one or more sensors as being measures of the same region in the environment. A group of sensor measurements can be represented in any of a variety of ways, depending on the kinds of sensor measurements that are being captured. Each group of raw laser sensor measurements, for example, can be represented as a three-dimensional point cloud, with each point having an intensity and a position. In some implementations, the position is represented as a range and elevation pair. Each group of camera sensor measurements can be represented as an image patch, e.g., an RGB image patch. Once a group of one or more raw sensor measurements has been classified as being a measure of a particular region in the environment, the sensor subsystems132or the other components of the vehicle122generate a sensor sample155from the sensor measurements that measure the region. For example, the sensor sample can include one or more of: a patch of an image captured by the camera sensor of the region of the environment, point cloud data generated by one or more of the laser sensors that corresponds to the region of the environment, or portions of one or more projections, e.g., a projection from a top-down view or a perspective view, of sensor data captured by one or more of the laser sensors that correspond to the region of the environment. The sensor subsystems132or the other components provide the sensor sample155to an on-board prediction subsystem134. The on-board prediction subsystem134uses some or all of the data in the sensor sample155to generate one or more predictions165. For example, the on-board prediction subsystem134can implement one or more machine learning models that each use the sensor sample155to make a prediction that is relevant to the operation of the vehicle122.
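A minimal sketch of how such a sensor sample might be represented as a data structure; the field names and types are hypothetical, since the patent does not prescribe a concrete layout:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaserPoint:
    """One point of a three-dimensional point cloud."""
    intensity: float
    position: Tuple[float, float]  # (range, elevation) pair

@dataclass
class SensorSample:
    """Sensor measurements for one region of the environment."""
    image_patch: List[List[Tuple[int, int, int]]]          # RGB pixels
    point_cloud: List[LaserPoint] = field(default_factory=list)
    projections: List[str] = field(default_factory=list)   # e.g. "top-down"

sample = SensorSample(
    image_patch=[[(120, 80, 40)]],
    point_cloud=[LaserPoint(intensity=0.8, position=(12.5, 0.02))],
    projections=["top-down"],
)
```

A production system would likely use tensors or serialized protocol messages rather than Python lists, but the grouping of camera, laser, and projection data per region is the same.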
As a particular example, one or more machine learning models can be classification machine learning models that classify an object located in the region characterized by the sensor sample. As another particular example, one or more machine learning models can be behavior prediction machine learning models that predict a future trajectory of the object located in the region characterized by the sensor sample. The on-board prediction subsystem134can provide the predictions165to a planning subsystem136, a user interface subsystem138, or both. When a planning subsystem136receives the predictions165, the planning subsystem136can use the predictions165to make fully-autonomous or semi-autonomous driving decisions. For example, if the predictions include a prediction indicating that a particular type of traffic sign is in the vicinity of the vehicle, the planning subsystem136can generate a fully-autonomous plan to adjust the trajectory of the vehicle122to conform to the requirements of the traffic sign, e.g., to apply the brakes when the traffic sign is a yield sign. As another example, the planning subsystem136can generate a semi-autonomous recommendation for a human driver to apply the brakes in order to conform with the requirements of the traffic sign. A user interface subsystem138can receive the predictions165and can generate a user interface presentation based on the predictions165, e.g., an alert for an operator of the vehicle122that the vehicle's speed exceeds the requirements of the traffic sign or a user interface presentation having image or video data containing a representation of the region of space that is occupied by another vehicle. An on-board display device can then display the user interface presentation for view by passengers of the vehicle122.
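As a toy illustration of the fully-autonomous versus semi-autonomous branching described above; the rule and names are invented for illustration, and a real planner would be far more involved:

```python
def plan(predictions, autonomous=True):
    """Map a set of prediction labels to a driving action or a recommendation."""
    if "yield_sign_ahead" in predictions:
        action = "apply_brakes"
    else:
        action = "maintain_trajectory"
    # In semi-autonomous mode the same logic yields a recommendation
    # for the human driver instead of a direct control command.
    return action if autonomous else "recommend: " + action

print(plan({"yield_sign_ahead"}))
print(plan({"clear_road"}, autonomous=False))
```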
The on-board prediction subsystem134can also use the sensor data155to generate log data127that is transmitted to the training system110, e.g., for use in training various machine learning models to make predictions. The on-board system120can provide the log data127to the training system110in offline batches or in an online fashion, e.g., continually whenever it is generated. The log data127includes sensor data samples that were generated during operation of the vehicle122. The training system110is typically hosted within a data center112, which can be a distributed computing system having hundreds or thousands of computers in one or more locations. When the training system110receives log data127from a vehicle, the training system110stores the log data127in a sensor data repository125. Generally, the sensor data repository125stores sensor data received from a large number of vehicles, i.e., the sensor data repository125stores sensor samples generated from sensor data captured during the operation of a large number of different vehicles. In some cases, the sensor data repository125can also include sensor data generated in simulation, i.e., generated as simulated versions of vehicles navigate through a software simulation of a real-world environment. The training system110includes a training subsystem114that trains various machine learning models to make predictions using training data generated from the sensor samples in the sensor data repository125. For example, the training system110can use the sensor samples stored in the sensor data repository125to generate training data that includes training examples123for training a machine learning model. Each training example123includes (i) data from a sensor sample and (ii) a label that indicates some ground truth output that should be generated by the machine learning model for the sensor sample.
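The pairing of sensor-sample data with a ground-truth label might be built along these lines; the repository contents and identifiers are made-up toy data:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class TrainingExample:
    """(i) data from a sensor sample and (ii) a ground-truth label."""
    sample_data: Any   # data drawn from a sensor sample in the repository
    label: str         # output the model should produce for this sample

# Hypothetical repository of sensor-sample features and their annotations.
repository = {"sample-001": (0.1, 0.9), "sample-002": (0.7, 0.2)}
labels = {"sample-001": "pedestrian", "sample-002": "vehicle"}

training_data = [
    TrainingExample(sample_data=repository[k], label=labels[k])
    for k in sorted(repository)
]
```

The essential point is that the label is stored alongside, not inside, the sensor data, so the same repository can feed many different labeling tasks.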
The training subsystem114can then train a machine learning model on the training data to determine trained values of the weights of the machine learning model. After training is complete, the training system110can provide a final set of model weight values to the on-board system120for use in making predictions165for fully autonomous or semi-autonomous driving decisions. The training system110can provide the final set of model weight values by a wired or wireless connection to the on-board system120. However, in some cases, it may be necessary to search the sensor data repository125for relevant sensor samples. To allow for searching the repository125, the training system110includes a sensor sample search engine190. To allow the sensor sample search engine190to more efficiently search the sensor data repository, the training system110implements one or more embedding neural networks180, i.e., includes hardware that implements the operations of the layers of the embedding neural networks180. Each embedding neural network180is a neural network that has been trained to receive as input sensor data, i.e., a portion of or all of the data in a sensor sample, and to generate as output an embedding of the sensor data. In some cases, the training system110implements a single embedding neural network180. In other cases, however, the training system110implements multiple embedding neural networks180that operate on different portions of the sensor sample, that generate embeddings that reflect different characteristics of the sensor sample, or both. As one example, the embedding neural networks180may include one or more neural networks that generate embeddings by operating on image patches and one or more neural networks that operate on laser sensor data. For example, the embedding neural networks180may include one or more deep convolutional neural networks that operate on image patches.
In a particular example, the embedding neural network180may be a portion of an object classification neural network that has been trained to classify objects in the environment into different object categories. For example, the embedding neural network180may include all but one or more final layers of the trained object classification neural network. Thus, the embeddings generated by this embedding neural network180represent the type of object depicted in the sensor sample. As another example, the embedding neural networks180may include one or more convolutional neural networks that operate on point cloud data generated by a laser sensor, e.g., a lidar sensor. Each of these convolutional neural networks can be the initial layers of a neural network that is trained to predict some type of annotation for the point cloud data, e.g., object type, object size, or object trajectory. An example of a convolutional neural network that operates on point cloud data is described in Yin Zhou et al., “End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds,” available at https://arxiv.org/abs/1910.06528. Another example of such a convolutional neural network is described in Y. Zhou and O. Tuzel, “VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4490-4499, June 2018. When multiple ones of these neural networks that have been trained to predict different types of annotations are included in the set of embedding neural networks180, the resulting set of embeddings for a given sensor sample includes multiple different embeddings that each reflect different characteristics of the sensor sample. In some cases, a single neural network can be trained to predict multiple different types of annotations for the same sensor sample.
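The idea of reusing all but the final layer(s) of a trained classifier as an embedding network can be sketched with a toy two-layer model; the weights, sizes, and class names here are invented, and a real embedding network would be a deep convolutional model:

```python
def relu(v):
    """Elementwise rectified linear unit."""
    return [max(0.0, x) for x in v]

def dense(weights, bias):
    """Build a fully connected layer from a weight matrix and bias vector."""
    def layer(v):
        return [sum(w * x for w, x in zip(row, v)) + b
                for row, b in zip(weights, bias)]
    return layer

class Classifier:
    """Toy classifier whose embedding is the activation before the final layer."""
    def __init__(self, hidden_layers, final_layer):
        self.hidden_layers = hidden_layers  # layers kept for embedding
        self.final_layer = final_layer      # layer dropped for embedding

    def embed(self, x):
        for layer in self.hidden_layers:
            x = relu(layer(x))
        return x

    def classify(self, x):
        return self.final_layer(self.embed(x))

hidden = [dense([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0])]
final = dense([[1.0, 1.0]], [0.0])
net = Classifier(hidden, final)
print(net.embed([2.0, 1.0]))  # the penultimate activation is the embedding
```

Because the embedding is taken just before the classification head, nearby embeddings tend to correspond to samples the classifier treats as the same object type.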
In these cases, the embedding generated by that neural network will reflect multiple different characteristics, e.g., color and object type or speed and heading. Thus, generally, the embeddings generated by the various embedding neural networks180for a given sensor sample can each represent a certain property of the sample. An example set of properties includes one or more of: high-level object type, fine-grained object type, object speed, and object color of the sample. Accordingly, in one example, the set of embeddings for a sensor sample will include one embedding that represents a fine-grained object type and another that represents object speed. When new sensor samples are received, the search engine190processes the new sensor samples using the embedding neural networks180to generate embeddings of each new sensor sample. The search engine190then stores data associating each sensor sample in the repository with the corresponding embeddings of the sensor sample. When a request to search the repository is received, the search engine190generates query embeddings of the query sensor sample specified in the request using the embedding neural networks180and uses the query embeddings to search the embeddings in the repository, i.e., instead of directly searching the high-dimensional sensor samples. In particular, the search engine190can maintain a search index that associates each sensor sample with the embeddings for the sensor sample. In some implementations, the search engine190slices the index for each of the different types of embeddings. In other words, for a given type of embedding, i.e., for embeddings generated by a given embedding neural network180, the search engine190generates multiple slices of embeddings, with each embedding of that type belonging to exactly one of the generated slices. 
In these implementations, to search the index for a given query embedding of a given type, the search engine190first identifies the slice of the embeddings of that type that matches the query embedding, and then searches the embeddings within the slice. As a particular example, the search engine190can slice the embeddings of a given embedding type into k slices using k-means clustering or another unsupervised clustering technique, with each of the k slices corresponding to one of the k clusters and each slice being represented by the "prototype" or mean for the corresponding cluster as generated by the clustering technique. To identify the embeddings of the given type that are most similar to a given query embedding, the search engine190first identifies the slice that matches the query embedding by identifying the prototype that is closest to the query embedding. The search engine190can then search within the slice to identify the closest embeddings to the query embedding. FIG.2is a flow chart of an example process200for adding a sensor sample to a sensor data repository. The process will be described as being performed by an appropriately programmed computer system. For convenience, the process200will be described as being performed by a system of one or more computers located in one or more locations. For example, a training system, e.g., the training system110ofFIG.1, appropriately programmed in accordance with this specification, can perform the process200. The system receives a sensor sample generated from sensor data collected during operation of an autonomous vehicle (step210). The system processes the sensor sample using each of the one or more embedding neural networks to generate one or more embeddings of the sensor sample (step220). As described above, when the set of embedding neural networks includes multiple neural networks, the different embeddings generally represent different properties or characteristics of the sensor sample.
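The two-stage sliced search described above (nearest prototype first, then nearest neighbors within that slice only) can be sketched in Python; the prototypes, slice contents, and sample identifiers are made up for illustration:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_prototype(query, prototypes):
    """Pick the slice whose cluster prototype (mean) is closest to the query."""
    return min(range(len(prototypes)), key=lambda i: euclidean(query, prototypes[i]))

def search_slice(query, slices, prototypes, top_k=2):
    """Stage 1: find the matching slice. Stage 2: rank only the embeddings
    within that slice by distance to the query."""
    s = nearest_prototype(query, prototypes)
    candidates = slices[s]  # list of (sample_id, embedding) pairs
    ranked = sorted(candidates, key=lambda item: euclidean(query, item[1]))
    return [sample_id for sample_id, _ in ranked[:top_k]]

# Illustrative prototypes and slices (e.g. produced by k-means with k=2).
prototypes = [[0.0, 0.0], [10.0, 10.0]]
slices = [
    [("a", [0.1, 0.2]), ("b", [1.0, 1.0])],
    [("c", [9.5, 9.5]), ("d", [12.0, 11.0])],
]

print(search_slice([9.0, 9.0], slices, prototypes, top_k=1))  # ['c']
```

The point of the design is that stage 2 touches only one slice's embeddings rather than the whole repository, which is what keeps latency low as the repository grows.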
The system adds the sensor sample to the repository and associates the sensor sample with the embeddings of the sensor sample (step230). For example, the system can add an entry to the search index maintained by the system that associates the sensor sample with the embeddings for the sensor sample. In some cases, the system updates the slices of the index to account for the newly added embeddings. FIG.3is a flow chart of an example process300for searching a repository storing a collection of sensor samples. The process will be described as being performed by an appropriately programmed computer system. For convenience, the process300will be described as being performed by a system of one or more computers located in one or more locations. For example, a training system, e.g., the training system110ofFIG.1, appropriately programmed in accordance with this specification, can perform the process300. The system maintains a sensor data repository (step310). As described above, the sensor data repository includes a collection of sensor samples generated during the operation of autonomous vehicles, e.g., samples generated from sensor data collected by vehicles as the vehicles drive through environments. Each sensor sample characterizes a particular region of an environment in the vicinity of an autonomous vehicle and includes data from one or more sensors of the autonomous vehicle. For example, each sensor sample can include data that represents measurements from one or more of the sensors of an autonomous vehicle, with the measurements from each sensor characterizing the same region at the same time. As a particular example, the sensor sample can represent measurements from a camera sensor and one or more laser sensors of the autonomous vehicle. As another particular example, the sensor sample can represent measurements from a camera sensor, a radar sensor, and a laser sensor.
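The indexing steps of process200(receive a sample, embed it with each embedding network, store it and associate it with its embeddings) can be sketched as follows; the embedding functions here are trivial placeholders for the embedding neural networks, and the field names are made up:

```python
# Sketch of process 200: add a sensor sample to the repository and index it
# under one embedding per embedding network.

def add_sample(repository, index, sample_id, sample, embedders):
    repository[sample_id] = sample                     # steps 210/230: store the sample
    index[sample_id] = {                               # step 220: one embedding per network
        name: embed(sample) for name, embed in embedders.items()
    }

# Placeholder "embedding networks", one per property of interest.
embedders = {
    "object_type": lambda s: [float(len(s["image_patch"]))],
    "speed":       lambda s: [s["speed"]],
}

repository, index = {}, {}
add_sample(repository, index, "s1",
           {"image_patch": [0.2, 0.4, 0.6], "speed": 3.5}, embedders)

print(index["s1"])  # {'object_type': [3.0], 'speed': [3.5]}
```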
For example, the sensor sample can include a patch of an image captured by the camera sensor of the region of the environment, point cloud data generated by one or more of the laser sensors, and, optionally, portions of one or more projections, e.g., a projection from a top-down view or a perspective view, of sensor data captured by one or more of the laser sensors that correspond to the region of the environment. As described above, each sensor sample in the repository is also associated with one or more embeddings of the sensor sample. Each embedding of the sensor sample is an embedding generated by an embedding neural network by processing the sensor sample. More specifically, each embedding that is generated by a given embedding neural network is generated in accordance with the same, trained parameter values of the embedding neural network. That is, the system or another system trains each embedding neural network and then fixes the parameter values to the trained values. The system identifies a query sensor sample that characterizes a region of interest (step320). In particular, the system can receive a user input or other request specifying a region of interest in the environment and can identify or generate a sensor sample characterizing the region of interest. For example, the system can provide, for presentation in a user interface, an image of the environment surrounding a vehicle as generated by the camera sensor of the vehicle (or a visual representation of other sensor data captured by other sensors) and the user can submit an input specifying the region of the image that is of interest. The system processes the query sensor sample using one or more embedding neural networks to generate one or more query embeddings of the query sensor sample (step330). In some cases, the system processes the query sensor sample using each of the embedding neural networks in the set of embedding neural networks that are maintained by the system. 
In other cases, the user may specify a particular property that is of interest, and the system can process the sensor sample using only the embedding neural network(s) from the set that generate embeddings that reflect that property. As a particular example, if the query specifies that the property of interest is object type, the system may not process the sensor sample using an embedding neural network that generates embeddings that reflect object speed. The system searches the sensor data repository using the query embeddings to identify relevant sensor samples (step340). In particular, for each of the query embeddings, the system can identify, from the embeddings in the sensor data repository that were generated by the same embedding neural network, a predetermined number of embeddings that are closest to the query embedding in the embedding space or can identify, from the embeddings in the sensor data repository, each embedding that is closer than a threshold distance to the query embedding in the embedding space. The system can then identify the sensor samples associated with the identified embeddings as relevant sensor samples. The system can measure how close one embedding is to another embedding using a distance measure, i.e., a function that receives two embeddings as input and returns a score that represents how close the two embeddings are. Examples of distance measures that can be used include cosine similarity and Euclidean distance. When the index is sliced, for each query embedding, the system can first identify the slice that the query embedding belongs to, and then search within the slice as described above to determine the closest embeddings. For example, the system can identify the slice that the query embedding belongs to by identifying the closest prototype to the query embedding. 
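The two distance measures named above, and the threshold-based retrieval rule, can be sketched in Python (the stored sample identifiers and embeddings are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Higher means closer; 1.0 for identical directions."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def euclidean(a, b):
    """Lower means closer; 0.0 for identical embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def within_threshold(query, stored, threshold):
    """Retrieval rule: keep every stored embedding closer than the threshold."""
    return [sid for sid, emb in stored.items() if euclidean(query, emb) < threshold]

stored = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.9, 0.1]}
print(within_threshold([1.0, 0.0], stored, 0.5))            # ['a', 'c']
print(round(cosine_similarity([1.0, 0.0], [1.0, 0.0]), 3))  # 1.0
```

The alternative rule from the text, taking a predetermined number of closest embeddings, is the `top_k` ranking shown in the sliced-search sketch earlier.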
By first identifying the slice and only performing the search for the closest embedding within the identified slice, the system can reduce the latency and amount of computational resources necessary to identify the closest embeddings even when the number of sensor samples stored in the repository is very large. In some implementations, when there are multiple query embeddings, the system separately identifies the relevant sensor samples for each query embedding, i.e., for each of the desired properties or characteristics. For example, the system can identify relevant sensor samples for each query embedding as described above and can then associate each relevant sensor sample with the property or properties reflected by the query embedding used to identify the sensor sample. In some other implementations, when there are multiple query embeddings, the system joins the result set so that only sensor samples that have embeddings that are relevant to all of the query embeddings are returned. Thus, only sensor samples that match the query sensor sample along all of the desired properties or characteristics are returned. In some implementations, the system leverages labels associated with sensor samples in the repository in identifying relevant sensor samples. For example, the sensor samples in the collection of sensor samples can each be associated with a high-level classification that identifies an object type of an object located in the environment region characterized by the sensor sample, e.g., vehicle, pedestrian, road sign, and so on. In these cases, the request may specify a query high-level classification. For example, the query sensor sample may characterize a type of road sign that a user is unfamiliar with and the user may wish to surface other instances when an autonomous vehicle encountered a similar road sign.
The system can then search only the sensor samples in the collection that are associated with the same query high-level classification, e.g., by discarding any identified relevant sensor samples that do not have a high-level classification that matches the query high-level classification. Once the relevant sensor samples have been identified, the system can use the relevant sensor samples in any of a variety of ways. For example, when the region of interest characterizes an object of a particular class, the system can use the relevant sensor samples to generate training data for training a machine learning model to classify whether or not input sensor samples characterize objects belonging to the particular class. This can be useful, for example, when it is discovered that the autonomous vehicles would benefit from being able to accurately classify objects of a particular class, but an insufficient number of samples are labeled as depicting objects belonging to that particular class. As another example, when the region of interest is a region where an event of interest is occurring, the system can generate, for each of the identified sensor samples, a visual representation of the sensor sample, and optionally, other sensor samples captured within a particular time window of the sensor sample and provide the visual representation for presentation on a user device. As a particular example, the system can, for each relevant sensor sample, identify other sensor samples that were captured within a specified time window of the relevant sensor sample and generate a video representation of the other sensor samples and the relevant sensor sample, e.g., a video showing the camera image patches from the other sensor samples and the relevant sensor samples arranged in chronological order. 
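The join across multiple query embeddings and the high-level-classification filter described above can both be sketched in Python (the sample identifiers and labels are illustrative):

```python
def join_results(per_embedding_results):
    """Keep only samples relevant to every query embedding, i.e. samples
    that match the query along all of the desired properties."""
    sets = [set(r) for r in per_embedding_results]
    return sorted(set.intersection(*sets))

def filter_by_classification(sample_ids, labels, query_label):
    """Discard samples whose high-level classification does not match
    the query high-level classification."""
    return [sid for sid in sample_ids if labels.get(sid) == query_label]

# Relevant samples found separately for each query embedding (property).
results_by_property = {
    "object_type": ["s1", "s2", "s3"],
    "speed":       ["s2", "s3", "s4"],
}
labels = {"s1": "road_sign", "s2": "vehicle", "s3": "road_sign", "s4": "vehicle"}

joined = join_results(results_by_property.values())
print(joined)                                                 # ['s2', 's3']
print(filter_by_classification(joined, labels, "road_sign"))  # ['s3']
```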
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, off-the-shelf or custom-made parallel processing subsystems, e.g., a GPU or another kind of special-purpose processing subsystem. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). 
The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A computer program (which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network. As used in this specification, an "engine," or "software engine," refers to a software implemented input/output system that provides an output that is different from the input. An engine can be an encoded block of functionality, such as a library, a platform, a software development kit ("SDK"), or an object. Each engine can be implemented on any appropriate type of computing device, e.g., servers, mobile phones, tablet computers, notebook computers, music players, e-book readers, laptop or desktop computers, PDAs, smart phones, or other stationary or portable devices, that includes one or more processors and computer readable media.
Additionally, two or more of the engines may be implemented on the same computing device, or on different computing devices. The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers. Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few. 
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and pointing device, e.g., a mouse, trackball, or a presence sensitive display or other surface by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser. Also, a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone, running a messaging application, and receiving responsive messages from the user in return. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. 
Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
11861482

DETAILED DESCRIPTION

The foregoing and other features will become more apparent upon reading of the following non-restrictive description of illustrative embodiments thereof, given by way of example only with reference to the accompanying drawings. Various aspects of the present disclosure generally address one or more of the problems related to environment control systems for buildings. More particularly, the present disclosure aims at providing solutions for generating and improving a predictive model of a neural network used by a plurality of environment controllers. The generation and improvement is performed through the use of a training server interacting with the plurality of environment controllers and performing reinforcement learning. The following terminology is used throughout the present specification:
- Environment: condition(s) (temperature, pressure, oxygen level, light level, security, etc.) prevailing in a controlled area or place, such as for example in a building.
- Environment control system: a set of components which collaborate for monitoring and controlling an environment.
- Environmental data: any data (e.g. information, commands) related to an environment that may be exchanged between components of an environment control system.
- Environment control device (ECD): generic name for a component of an environment control system. An ECD may consist of an environment controller, a sensor, a controlled appliance, etc.
- Environment controller: device capable of receiving information related to an environment and sending commands based on such information.
- Environmental characteristic: measurable, quantifiable or verifiable property of an environment (a building). The environmental characteristic comprises any of the following: temperature, pressure, humidity, lighting, CO2, flow, radiation, water level, speed, sound; a variation of at least one of the following: temperature, pressure, humidity, lighting, CO2 levels, flows, radiations, water levels, speed, sound levels, etc.; and/or a combination thereof.
- Environmental characteristic value: numerical, qualitative or verifiable representation of an environmental characteristic.
- Sensor: device that detects an environmental characteristic and provides a numerical, quantitative or verifiable representation thereof. The numerical, quantitative or verifiable representation may be sent to an environment controller.
- Controlled appliance: device that receives a command and executes the command. The command may be received from an environment controller.
- Environmental state: a current condition of an environment based on an environmental characteristic; each environmental state may comprise a range of values or verifiable representation for the corresponding environmental characteristic.
- VAV appliance: a Variable Air Volume appliance is a type of heating, ventilating, and/or air-conditioning (HVAC) system. By contrast to a Constant Air Volume (CAV) appliance, which supplies a constant airflow at a variable temperature, a VAV appliance varies the airflow at a constant temperature.
- Area of a building: the expression 'area of a building' is used throughout the present specification to refer to the interior of a whole building or a portion of the interior of the building such as, without limitation: a floor, a room, an aisle, etc.

Referring now toFIGS.1and2, an environment control system where an environment controller100exchanges data with other environment control devices (ECDs) is illustrated. The environment controller100is responsible for controlling the environment of an area of a building.
The environment controller100receives from sensors (e.g.200,210,220and230) environmental characteristic values measured by the sensors. The environment controller100generates commands based on the received environmental characteristic values. The generated commands are transmitted to controlled appliances300(to control the operations of the controlled appliances300). Although a single controlled appliance300is represented inFIG.1for simplification purposes, the environment controller100may be interacting with a plurality of controlled appliances300. The area under the control of the environment controller100is not represented in the Figures for simplification purposes. As mentioned previously, the area may consist of a room, a floor, an aisle, etc. However, any type of area located inside any type of building is considered to be within the scope of the present disclosure. The sensors (200,210,220and230) and the controlled appliances300are generally located in the area under control (e.g. a room). The environment controller100may or may not be located in the area under control. For example, the environment controller100may remotely control the environment of the area under control, which includes controlling the controlled appliances300based on the inputs of the sensors200,210,220and230. 
Examples of sensors include: a temperature sensor200for measuring a temperature in the area and transmitting the measured temperature to the environment controller100, a humidity sensor210for measuring a humidity level in the area and transmitting the measured humidity level to the environment controller100, a CO2 sensor220for measuring a CO2 level in the area and transmitting the measured CO2 level to the environment controller100, an occupancy sensor230for generating occupancy data for the area and transmitting the generated occupancy data to the environment controller100, a lighting sensor (not represented in the Figures) for measuring a light level in the area and transmitting the measured light level to the environment controller100, etc. Each environmental characteristic value measured by a sensor may consist of either a single value (e.g. the current CO2 level measured by the CO2 sensor220is 405 parts per million), or a range of values (e.g. the current CO2 level measured by the CO2 sensor220is in the range of 400 to 410 parts per million). In a first implementation, a single sensor (e.g. CO2 sensor220) measures a given type of environmental characteristic value (e.g. CO2 level) for the whole area. In a second implementation, the area is divided into a plurality of zones, and a plurality of sensors (e.g. temperature sensors200) measures the given type of environmental characteristic value (e.g. temperature) in the corresponding plurality of zones. In the second implementation, the environment controller100calculates an average environmental characteristic value in the area (e.g. an average temperature in the area) based on the environmental characteristic values transmitted by the plurality of sensors (e.g. temperature sensors200) respectively located in the plurality of zones of the area. Additional sensor(s) may be deployed outside of the area and report their measurement(s) to the environment controller100. For example, the area is a room of a building.
An external temperature sensor measures an external temperature outside the building and transmits the measured external temperature to the environment controller100. Similarly, an external humidity sensor measures an external humidity level outside the building and transmits the measured external humidity level to the environment controller100. The aforementioned examples of sensors are for illustration purposes only. A person skilled in the art would readily understand that other types of sensors could be used in the context of the environment control system managed by the environment controller100. Each controlled appliance300comprises at least one actuation module, to control the operations of the controlled appliance300based on the commands received from the environment controller100. The actuation module can be of one of the following types: mechanical, pneumatic, hydraulic, electrical, electronic, a combination thereof, etc. The commands control operations of the at least one actuation module. An example of a controlled appliance300consists of a VAV appliance. Examples of commands transmitted to the VAV appliance include commands directed to one of the following: an actuation module controlling the speed of a fan, an actuation module controlling the pressure generated by a compressor, an actuation module controlling a valve defining the rate of an airflow, etc. This example is for illustration purposes only. Other types of controlled appliances300could be used in the context of an environment control system managed by the environment controller100. Details of the environment controller100, sensors (200,210,220and230) and controlled appliance300will now be provided. The environment controller100comprises a processing unit110, memory120, and a communication interface130. The environment controller100may comprise additional components, such as another communication interface130, a user interface140, a display150, etc.
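Returning to the multi-zone implementation described earlier, the averaging of per-zone environmental characteristic values performed by the environment controller100can be sketched as follows (the zone names and readings are illustrative):

```python
# Sketch: combine per-zone sensor readings into one area-wide value,
# e.g. an average temperature across the zones of a room.

def average_characteristic(zone_readings):
    values = list(zone_readings.values())
    return sum(values) / len(values)

zone_temperatures = {"zone_1": 21.0, "zone_2": 22.5, "zone_3": 21.9}
print(round(average_characteristic(zone_temperatures), 2))  # 21.8
```

The same averaging applies to any measured characteristic (humidity, CO2 level, etc.) reported by a plurality of sensors of the same type.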
The processing unit110comprises one or more processors (not represented in the Figures) capable of executing instructions of a computer program. Each processor may further comprise one or several cores. The processing unit110executes a neural network inference engine112and a control module114, as will be detailed later in the description. The memory120stores instructions of computer program(s) executed by the processing unit110, data generated by the execution of the computer program(s), data received via the communication interface130(or another communication interface), etc. Only a single memory120is represented inFIG.1, but the environment controller100may comprise several types of memories, including volatile memory (such as a volatile Random Access Memory (RAM), etc.) and non-volatile memory (such as a hard drive, electrically-erasable programmable read-only memory (EEPROM), flash, etc.). The communication interface130allows the environment controller100to exchange data with remote devices (e.g. the sensors (200,210,220and230), the controlled appliance300, etc.) over a communication network (not represented inFIG.1for simplification purposes). For example, the communication network is a wired communication network, such as an Ethernet network. The communication interface130is adapted to support communication protocols used to exchange data over the Ethernet network. Other types of wired communication networks may also be supported by the communication interface130. In another example, the communication network is a wireless communication network, such as a Wi-Fi network. The communication interface130is adapted to support communication protocols used to exchange data over the Wi-Fi network. Other types of wireless communication networks may also be supported by the communication interface130, such as a wireless mesh network, Bluetooth®, Bluetooth® Low Energy (BLE), etc.
In still another example, the environment controller100comprises two communication interfaces130. The environment controller100communicates with the sensors (200,210,220and230) and the controlled appliance300via a first communication interface130(e.g. a Wi-Fi interface); and communicates with other devices (e.g. a training server400) via a second communication interface130(e.g. an Ethernet interface). Each communication interface130usually comprises a combination of hardware and software executed by the hardware, for implementing the communication functionalities of the communication interface130. A detailed representation of the components of the sensors (e.g. temperature sensor200) is not provided inFIG.1for simplification purposes. The sensor comprises at least one sensing module for detecting an environmental characteristic (e.g. temperature). The sensor further comprises a communication interface for transmitting to the environment controller100an environmental characteristic value (e.g. value of the temperature) corresponding to the detected environmental characteristic. The environmental characteristic value is transmitted over a communication network and received via the communication interface130of the environment controller100. The sensor may also comprise a processing unit for generating the environmental characteristic value based on the detected environmental characteristic. Alternatively, the environmental characteristic value is directly generated by the sensing module. The other types of sensors mentioned previously (e.g. humidity sensor210and CO2 sensor220) generally include the same types of components as those mentioned for the temperature sensor200. Temperature, humidity and CO2 sensors are well known in the art and are easy to implement. With respect to the occupancy sensor, its implementation may be more or less complex, based on its capabilities. For example, a basic occupancy sensor (e.g.
based on ultrasonic or infrared technology) is only capable of determining if the area is occupied or not. A more sophisticated occupancy sensor is capable of determining the number of persons present in the area, and may use a combination of camera(s) and pattern recognition software for this purpose. Alternatively, the occupancy sensor is not capable of determining the number of persons present in the area, but is capable of determining the number of persons entering or leaving the area (e.g. an infrared beam sensor using infrared rays to detect people entering or leaving the area). A detailed representation of the components of the controlled appliance300is not provided inFIG.1for simplification purposes. As mentioned previously, the controlled appliance300comprises at least one actuation module. The controlled appliance300further comprises a communication interface for receiving commands from the environment controller100. The commands control operations of the at least one actuation module. The commands are transmitted over a communication network via the communication interface130of the environment controller100. The controlled appliance300may also comprise a processing unit for controlling the operations of the at least one actuation module based on the received commands. A detailed representation of the components of the training server400is not provided inFIG.1as it will be detailed later. The training server400comprises a processing unit, memory and a communication interface. The processing unit of the training server400executes a neural network training engine411. The execution of the neural network training engine411generates a predictive model, which is transmitted to the environment controller100via the communication interface of the training server400. The predictive model is transmitted over a communication network and received via the communication interface130of the environment controller100. Also represented inFIG.1is a user10. 
The user10provides at least one set point to the environment controller100. Examples of set points include target environmental characteristic values, such as a target temperature, a target humidity level, a target CO2 level, a combination thereof, etc. The at least one set point is related to the area where the sensors (200,210,220and230) and the controlled appliance300are located. Alternatively, the controlled appliance300is not located in the area, but the operations of the controlled appliance300under the supervision of the environment controller100aim at reaching the at least one set point in the area. The user10enters the at least one set point via the user interface140of the environment controller100. Alternatively, the user10enters the at least one set point via a user interface of a computing device (e.g. a smartphone, a tablet, etc.) not represented inFIG.1for simplification purposes; and the at least one set point is transmitted over a communication network and received via the communication interface130of the environment controller100. The previous examples of set points are for illustration purposes only, and a person skilled in the art would readily understand that other types of set points could be used in the context of an environment control system managed by the environment controller100. Furthermore, each set point may consist of either a single value (e.g. target temperature of 25 degrees Celsius), or a range of values (e.g. target temperature between 25 and 26 degrees Celsius). Optionally, the control module114executed by the processing unit110of the environment controller100also determines at least one characteristic of the area. The characteristic(s) of the area include one or more geometric characteristics of the area (e.g. a room in a building). Examples of geometric characteristics include a volume of the area, a surface of the area, a height of the area, a length of the area, a width of the area, etc.
Instead of a given value, the geometric characteristics may be identified as ranges of values. For example, the volume of the area is defined by the following ranges of values: 0 to 50 cubic meters, 50 to 200 cubic meters, and more than 200 cubic meters. Similarly, the height of the area is defined by the following ranges of values: less than 3 meters and more than 3 meters. Alternatively or complementarily, the characteristic(s) of the area include an area type identifier of the current area. A plurality of area type identifiers is defined, each area type identifier corresponding to areas having one or more geometric characteristics in common. For example, each area type identifier is an alphanumerical value. The area type identifier of the current area is selected among the plurality of pre-defined area type identifiers based on geometric characteristics of the current area. For instance, the area type identifier R1is allocated to areas having a volume lower than 50 cubic meters; the area type identifier R2is allocated to areas having a volume between 50 and 200 cubic meters, and a height lower than 3 meters; the area type identifier R3is allocated to areas having a volume between 50 and 200 cubic meters, and a height higher than 3 meters; and the area type identifier R4is allocated to areas having a volume higher than 200 cubic meters. Alternatively or complementarily, the characteristic(s) of the area include a human activity in the area. For example, the human activity in the area comprises periods of time when the room is occupied by humans (e.g. during the day or during the night, in the morning or in the afternoon, during the week or the week-end, etc.). Alternatively or complementarily, the human activity in the area defines the type of activity performed by the persons occupying the area; for instance, the area is an office room, a room in a store, a storage room, a workshop room, a room in a house or an apartment, etc.
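The illustrative allocation of the area type identifiers R1to R4based on geometric characteristics could be sketched as follows (the function name and the exact handling of the boundary values are assumptions, not part of the disclosure):

```python
def area_type_identifier(volume_m3, height_m):
    """Select an area type identifier from geometric characteristics,
    following the illustrative R1-R4 allocation rules described above."""
    if volume_m3 < 50:
        return "R1"            # volume lower than 50 cubic meters
    if volume_m3 <= 200:
        # volume between 50 and 200 cubic meters: split on height
        return "R2" if height_m < 3 else "R3"
    return "R4"                # volume higher than 200 cubic meters
```

For example, a 100 cubic meter room with a 2.5 meter ceiling would be allocated R2under these rules.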
The aforementioned area type identifier of the area can also be based on the human activity in the area. Furthermore, a person skilled in the art would readily understand that other types of area characteristics could be used in the context of an environment control system managed by the environment controller100. FIG.2illustrates examples of the determination of the characteristic(s) of the area by the processing unit110of the environment controller100. The determination of the characteristic(s) of the area comprises receiving the characteristic(s) of the area from a computing device20via the communication interface130, and storing the characteristic(s) of the area in the memory120of the environment controller100. Alternatively or complementarily, the determination of the characteristic(s) of the area comprises receiving the characteristic(s) of the area from the user10via the user interface140of the environment controller100, and storing the characteristic(s) of the area in the memory120. Alternatively or complementarily, the determination of the characteristic(s) of the area comprises receiving the characteristic(s) of the area from a sensor240via the communication interface130, and storing the characteristic(s) of the area in the memory120of the environment controller100. The sensor240is capable of automatically determining characteristic(s) of the area. For example, the sensor240combines one or more cameras, and a processing unit, capable of automatically determining geometric characteristics of the area. In another example, the sensor240combines one or more cameras (or sound sensor, motion detector, etc.), and a processing unit, capable of automatically determining a human activity in the area. Alternatively, the sensor240only transmits collected data (e.g. images of the area) to the processing unit110of the environment controller100, and the processing unit110determines the characteristic(s) of the area based on the data transmitted by the sensor240. 
The characteristic(s) of the area usually do not change over time. Thus, the determination occurs only once, and the characteristics of the area are permanently stored in the memory120for being used by the neural network inference engine112, as will be illustrated later in the description. Reference is now made concurrently toFIGS.1,2,3A,3B,3C and3D; whereFIGS.3A,3B,3C and3Drepresent a method500. At least some of the steps of the method500are implemented by the environment controller100. The method500aims at improving a predictive model of a neural network used by the environment controller100(more specifically by the neural network inference engine112). The present disclosure is not limited to the method500being implemented by the environment controller100, but is applicable to any type of computing device capable of implementing the steps of the method500. A dedicated computer program has instructions for implementing at least some of the steps of the method500. The instructions are comprised in a non-transitory computer program product (e.g. the memory120) of the environment controller100. The instructions provide for improving a predictive model of a neural network used by the environment controller100(more specifically by the neural network inference engine112), when executed by the processing unit110of the environment controller100. The instructions are deliverable to the environment controller100via an electronically-readable media such as a storage media (e.g. CD-ROM, USB key, etc.), or via communication links (e.g. via a communication network through the communication interface130). The instructions of the dedicated computer program executed by the processing unit110implement the neural network inference engine112and the control module114. The neural network inference engine112provides functionalities of a neural network, allowing output(s) to be inferred from inputs using the predictive model, as is well known in the art.
The control module114provides functionalities allowing the environment controller100to interact with and control other devices (e.g. the sensors (200,210,220and230) and the controlled appliance300). The method500comprises the step505of storing a predictive model in the memory120. Step505is performed by the processing unit110. The predictive model comprises weights of a neural network implemented by the neural network inference engine112. The method500comprises the step510of determining at least one environmental characteristic value in the area. Step510is performed by the control module114executed by the processing unit110. The at least one environmental characteristic value includes one or more of the following: a current temperature in the area, a current humidity level in the area, a current CO2 level in the area, and a current occupancy of the area. However, other types of environmental characteristic value may be determined at step510. In the case of the current temperature, the measurement of the current temperature is performed by the temperature sensor200(located in the area) and transmitted to the environment controller100. Thus, step510includes receiving the current temperature from the temperature sensor200via the communication interface130. Alternatively, functionalities of a temperature sensor are integrated to the environment controller100. In this case, step510includes receiving the current temperature from a temperature sensing module (not represented inFIG.1) integrated to the environment controller100. In still another implementation, step510includes calculating the current temperature in the area based on temperature measurements respectively received from a plurality of temperature sensors200located in the area (e.g. calculating the average of the temperature measurements received from the plurality of temperature sensors200). 
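The calculation of a single current environmental characteristic value from a plurality of sensors, as just described for the temperature measurements, might be sketched as follows (a minimal averaging sketch; the function name is hypothetical, and averaging is only one possible implementation of step510):

```python
def current_value(measurements):
    """Compute a single current environmental characteristic value
    from measurements received from a plurality of sensors located
    in the area, by calculating their average."""
    if not measurements:
        raise ValueError("no sensor measurements received")
    return sum(measurements) / len(measurements)
```

The same helper applies unchanged to humidity level and CO2 level measurements.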
In the case of the current humidity level, the measurement of the current humidity level is performed by the humidity sensor210(located in the area) and transmitted to the environment controller100. Thus, step510includes receiving the current humidity level from the humidity sensor210via the communication interface130. Alternatively, functionalities of a humidity sensor are integrated to the environment controller100. In this case, step510includes receiving the current humidity level from a humidity sensing module (not represented inFIG.1) integrated to the environment controller100. In still another implementation, step510includes calculating the current humidity level in the area based on humidity level measurements respectively received from a plurality of humidity sensors210located in the area (e.g. calculating the average of the humidity level measurements received from the plurality of humidity sensors210). In the case of the current CO2 level, the measurement of the current CO2 level is performed by the CO2 sensor220(located in the area) and transmitted to the environment controller100. Thus, step510includes receiving the current CO2 level from the CO2 sensor220via the communication interface130. Alternatively, functionalities of a CO2 sensor are integrated to the environment controller100. In this case, step510includes receiving the current CO2 level from a CO2 sensing module (not represented inFIG.1) integrated to the environment controller100. In still another implementation, step510includes calculating the current CO2 level in the area based on CO2 level measurements respectively received from a plurality of CO2 sensors220located in the area (e.g. calculating the average of the CO2 level measurements received from the plurality of CO2 sensors220). In the case of the current occupancy of the area, the measurement of occupancy data is performed by the occupancy sensor230(located in the area) and transmitted to the environment controller100. 
In a first implementation, the current occupancy of the area directly consists of the occupancy data. Thus, step510includes directly receiving the current occupancy of the area from the occupancy sensor230via the communication interface130. For example, an ultrasonic or infrared sensor determines if the area is occupied or not, and transmits the current occupancy status of the area (occupied or not) to the environment controller100. In a second implementation, the current occupancy of the area is determined by processing the occupancy data. Thus, step510includes receiving the occupancy data from the occupancy sensor230via the communication interface130, and further processing the occupancy data to generate the current occupancy of the area. For example, a visible or thermal camera transmits picture(s) of the area to the environment controller100, and a detection software implemented by the environment controller100analyses the picture(s) to determine the number of persons present in the area. Alternatively, functionalities of an occupancy sensor are integrated to the environment controller100. In this case, step510includes receiving the occupancy data from an occupancy sensing module (not represented inFIG.1) integrated to the environment controller100. Ultimately, the current occupancy of the area determined at step510comprises one of the following: an indication of the area being occupied or not, a number of persons present in the area, a number of persons entering or leaving the area. A person skilled in the art would readily understand that other types of occupancy sensors230may be used in the context of the present disclosure, to determine the aforementioned types of current occupancy of the area, or other types of current occupancy of the area. The method500comprises the step515of receiving at least one set point. Step515is performed by the control module114executed by the processing unit110. 
As mentioned previously, the at least one set point includes one or more of the following: a target temperature, a target humidity level, and a target CO2 level. However, other types of set point may be received at step515. A set point is received from the user10via the user interface140(as illustrated inFIGS.1and3A). Alternatively, a set point is received from a remote computing device via the communication interface130(this use case is not represented in the Figures for simplification purposes). For example, the user10enters the set point via a user interface of the remote computing device (e.g. a smartphone) and the set point is transmitted to the environment controller100. The order in which steps510and515are performed may vary. The order represented inFIG.3Ais for illustration purposes only. The method500comprises the step520of executing the neural network inference engine112using the predictive model (stored at step505) for generating one or more output based on inputs. The execution of the neural network inference engine112is performed by the processing unit110. The neural network inference engine112implements a neural network using the weights of the predictive model. This step will be further detailed later in the description. The inputs comprise the at least one environmental characteristic value in the area determined at step510, and the at least one set point received at step515. The inputs used by the neural network inference engine112at step520may include additional parameter(s). For example, the method500comprises the optional step507of determining at least one characteristic of the area. Optional step507is performed by the control module114executed by the processing unit110. The determination of characteristic(s) of the area has been detailed previously in relation toFIG.2.
The at least one characteristic of the area includes one or more of the following: an area type identifier selected among a plurality of area type identifiers, one or more geometric characteristics of the area, and a human activity in the area. The inputs used at step520further include the characteristic(s) of the area. Another example of additional parameter(s) for the inputs includes an external temperature measured outside the building (where the area is located) and/or an external humidity level measured outside the building. The one or more output comprises one or more command for controlling the controlled appliance300. As mentioned previously, an example of controlled appliance300is a VAV appliance. Examples of commands for controlling the VAV appliance300include commands directed to one of the following actuation modules of the VAV appliance300: an actuation module controlling the speed of a fan, an actuation module controlling the pressure generated by a compressor, an actuation module controlling a valve defining the rate of an airflow, etc. Although the present disclosure focuses on generating command(s) for controlling appliance(s) at step520, other types of output may be generated in addition to the command(s) at step520. The method500comprises the step525of modifying the one or more command generated at step520. Step525is performed by the control module114executed by the processing unit110. Different algorithms may be implemented at step525. Following are examples of algorithms for modifying the one or more command. However, a person skilled in the art would readily understand that other algorithms may be used in the context of the present disclosure. In a first implementation, the modification to a command is random. Furthermore, the random modification may be limited to a pre-defined range of modifications. For example, the command consists of adjusting the speed of a fan, and the predefined range of modifications is between −10% and +10%.
If the speed generated at step520is 20 revolutions per second, then a random value between 18 and 22 revolutions per second is generated at step525. In a second implementation, the modification to a command is selected among a set of one or more pre-defined modification. For example, the command consists of adjusting the speed of a fan, and the predefined modifications consist of +5%, +10%, −5% and −10%. If the speed generated at step520is 20 revolutions per second, then a value among 18, 19, 21 and 22 revolutions per second is selected at step525. The sub-algorithm for selecting one among a plurality of pre-defined modifications is outside the scope of the present disclosure. In the case where the one or more command generated at step520includes two or more commands, the modification may affect any combination of the commands (e.g. all the commands are modified or only some of the commands are modified). For example, if the one or more command includes one command for adjusting the speed of a fan and one command for adjusting the pressure generated by a compressor, the modification at step525includes one of the following: only adjust the speed of the fan, only adjust the pressure generated by the compressor, or simultaneously adjust the speed of the fan and the pressure generated by the compressor. Furthermore, the selection of which commands are modified may vary each time step525is performed, using a random algorithm or a pre-defined modification schedule. In an exemplary implementation, the type of modification(s) to be applied at step525is received via the communication interface130. For example, the training server400sends a configuration message to the environment controller100. The configuration message defines the type of modification(s) to be applied at step525.
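The two exemplary modification algorithms of step525(a random modification limited to a pre-defined range, and a modification selected among a set of pre-defined modifications) could be sketched as follows; the function name, the default values and the mode flag are illustrative assumptions:

```python
import random

def modify_command(value, mode="random", max_fraction=0.10,
                   predefined=(-0.10, -0.05, 0.05, 0.10)):
    """Apply an exploratory modification to a command value (step 525).

    mode="random": random modification within +/- max_fraction.
    mode="predefined": modification selected among pre-defined fractions.
    """
    if mode == "random":
        fraction = random.uniform(-max_fraction, max_fraction)
    else:
        fraction = random.choice(predefined)
    return value * (1.0 + fraction)
```

For a fan speed of 20 revolutions per second, the random mode yields a value between 18 and 22, and the predefined mode yields one of 18, 19, 21 or 22, matching the two examples above.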
As will be illustrated later in the description, this mechanism allows the training server400to control a plurality of environment controllers100via configuration messages defining various types of modification(s) to be applied at step525. Thus, the training server400drives a fleet of environment controllers100respectively applying modifications at step525. Each environment controller100has its own range of modifications, allowing a wide range of exploratory modifications for the purpose of improving the predictive model. The configuration data (type of modification(s) to be applied) included in the configuration message are stored in the memory120and used each time step525is performed. Each environment controller100can also be reconfigured by the training server400via a new configuration message defining a new set of modification(s) to be applied at step525. The method500comprises the step530of transmitting the one or more modified command (generated at step520and modified at step525) to the controlled appliance300via the communication interface130. Step530is performed by the control module114executed by the processing unit110. The method500comprises the step535of receiving the one or more modified command at the controlled appliance300, via the communication interface of the controlled appliance300. Step535is performed by the processing unit of the controlled appliance300. The method500comprises the step540of executing the one or more modified command at the controlled appliance300. Step540is performed by the processing unit of the controlled appliance300. Executing the one or more modified command consists in controlling one or more actuation module of the controlled appliance300based on the received one or more modified command. As mentioned previously, a single command or a plurality of commands is generated at step520and transmitted at step530(after modification at step525) to the same controlled appliance300. 
Alternatively, the same command is generated at step520and transmitted at step530to a plurality of controlled appliances300. In yet another alternative, a plurality of commands is generated at step520and transmitted at step530to a plurality of controlled appliances300. The method500comprises the step545of generating at least one metric representative of the execution (at step540) of the one or more modified command by the controlled appliance300. Step545is performed by the control module114executed by the processing unit110. The role of the one or more metric is to provide a quantified evaluation of the efficiency of the execution of the modified command(s) (at step540). More specifically, since the one or more modified command aims at reaching the set point(s) received at step515, the one or more metric evaluates the efficiency of execution of the one or more modified command for the purpose of reaching the set point(s). The efficiency may be measured according to various criteria, including the time required for reaching an environmental state corresponding to the set point(s), the adequacy of the reached environmental state with respect to the set point(s), the impact on the comfort of the users present in the area, etc. Examples of metrics include the determination of one or more updated environmental characteristic value in the area following the transmission of the modified command(s), the measurement of one or more time required for reaching one or more corresponding environmental state in the area (e.g. reaching one or more set point) following the transmission of the modified command(s), the measurement of an energy consumption by the execution of the modified command(s), etc. For illustration purposes, we consider the use case where a target temperature is included in the set point(s). 
A first example of metric consists of an updated temperature measured by the temperature sensor200and transmitted to the environment controller100after a given amount of time (e.g. 5 minutes), following the transmission of the modified command(s) at step530. A second example of metric consists of several updated temperatures measured by the temperature sensor200and transmitted to the environment controller100at various intervals of time (e.g. 5 minutes and 10 minutes, respectively), following the transmission of the modified command(s) at step530. This second example allows an evaluation of the trajectory of the variation of temperature in the area from the current temperature (determined at step510) to the target temperature (received at step515). A third example of metric consists of a measurement of the time required for reaching the target temperature, following the transmission of the modified command(s) at step530. In this third example, the environment controller100starts a timer following the transmission of the modified command(s) at step530. The environment controller100receives updated temperatures measured by the temperature sensor200and transmitted to the environment controller100. Upon reception of an updated temperature substantially equal to the target temperature, the environment controller100stops the timer. The measurement of the required time is the difference between the times at which the timer was respectively stopped and started. A fourth example of metric consists of several measurements of the time required for reaching milestones on the trajectory from the current temperature towards the target temperature, following the transmission of the modified command(s) at step530. For example, a first milestone corresponds to a temperature halfway between the current temperature and the target temperature, and a second milestone corresponds to the target temperature. The previous exemplary metrics are for illustration purposes only.
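The third example of metric (the time required for reaching the target temperature) might be implemented along these lines; the class name and the tolerance used to decide that an updated temperature is substantially equal to the target are assumptions:

```python
import time

class TimeToTargetMetric:
    """Measure the time required to reach a target temperature,
    a sketch of the third example of metric for step 545."""

    def __init__(self, target, tolerance=0.5):
        self.target = target
        self.tolerance = tolerance
        self.start_time = None
        self.elapsed = None

    def start(self):
        # Timer started when the modified command(s) are transmitted (step 530).
        self.start_time = time.monotonic()

    def on_updated_temperature(self, temperature):
        # Timer stopped upon reception of an updated temperature
        # substantially equal to the target temperature.
        if self.elapsed is None and abs(temperature - self.target) <= self.tolerance:
            self.elapsed = time.monotonic() - self.start_time
        return self.elapsed  # None until the target is reached
```

The fourth example (milestones) would follow the same pattern, with one timer stop per milestone.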
A person skilled in the art would be capable of implementing other metrics particularly adapted to the specific inputs and outputs used by the neural network inference engine112at step520. The method500comprises the step550of transmitting the inputs used by the neural network inference engine112(at step520), the one or more output generated by the neural network inference engine112(at step520), and the at least one metric (generated at step545) to the training server400via the communication interface130. Step550is performed by the control module114executed by the processing unit110. All the data transmitted at step550are referred to as training data inFIG.1. A new set of training data is transmitted to the training server400as soon as it is available (after each execution of steps520-525-530-545). Alternatively, the transmission of a new set of training data to the training server400is delayed until a certain amount of training data has been collected (the transmission of all the collected training data occurs after several executions of steps520-525-530-545). The method500comprises the step555of receiving the inputs, the one or more output and the at least one metric (transmitted at step550) at the training server400, via the communication interface of the training server400. Step555is performed by the processing unit of the training server400. The predictive model stored by the environment controller100is also stored by the training server400. The method500comprises the step560of generating an update of the predictive model. Step560is performed by the processing unit of the training server400. The update of the predictive model comprises an update of the weights of the neural network. The update is performed based on the inputs, the one or more output and the at least one metric received at step555.
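The delayed transmission of training data at step550(accumulating sets of training data until a certain amount has been collected) could be sketched as a simple buffer; the class name and batch size are illustrative:

```python
class TrainingDataBuffer:
    """Collect training data sets and release them in batches,
    one possible implementation of the delayed transmission at step 550."""

    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self._sets = []

    def add(self, inputs, outputs, metrics):
        # One set of training data per execution of steps 520-525-530-545.
        self._sets.append((inputs, outputs, metrics))
        if len(self._sets) >= self.batch_size:
            batch, self._sets = self._sets, []
            return batch  # ready to transmit to the training server
        return None       # keep collecting
```

Transmitting each set as soon as it is available corresponds to a batch size of 1.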
The method500comprises the step565of transmitting the update of the predictive model (comprising the updated weights) to the environment controller100, via the communication interface of the training server400. Step565is performed by the processing unit of the training server400. Steps555,560and565will be detailed later, when providing a detailed description of the functionalities of the training server400. The method500comprises the step570of receiving the update of the predictive model (comprising the updated weights) from the training server400via the communication interface130. Step570is performed by the control module114executed by the processing unit110. Reference is now made more particularly toFIG.3C. During a training phase, the method500is used for generating an operational predictive model based on an initial predictive model. Steps510to550are repeated systematically. The initial predictive model is stored at step505. Then, the repetition of steps510to550provides data to the training server400for improving the initial predictive model. At some point, the training server400determines that an operational version of the predictive model is ready, and transmits the operational version to the environment controller100. The operational version is received at step570and stored at step505. Reference is now made more particularly toFIG.3D. During an operational phase, the method500can be used to improve/fine-tune the current predictive model. Steps525,545and550are not performed systematically, but only once in a while (for example, once every ten occurrences of step520). The rest of the time, the command(s) generated at step520are not modified. The execution of steps525,545and550provides data to the training server400for improving the current predictive model. At some point, the training server400determines that an improved version of the predictive model is ready, and transmits the improved version to the environment controller100. 
The improved version is received at step570and stored at step505. The steps of the method500involving the reception or the transmission of data by the environment controller100may use the same communication interface130or different communication interfaces130. For example, steps510, optionally515, and530use a first communication interface130of the Wi-Fi type; while steps550and570use a second communication interface130of the Ethernet type. In another example, steps510, optionally515,530,550and570use the same communication interface130of the Wi-Fi type. In an alternative implementation, for each environmental characteristic value considered at step510, a plurality of consecutive measurements of the environmental characteristic value is determined at step510(instead of a single current environmental characteristic value). For example, the inputs used by the neural network inference engine112at step520include a plurality of consecutive temperature measurements in the area (instead of a single current temperature in the area), and/or a plurality of consecutive humidity level measurements in the area (instead of a single current humidity level in the area), and/or a plurality of consecutive CO2 level measurements in the area (instead of a single current CO2 level in the area). For instance, a measurement is determined (e.g. received from a corresponding sensor) every minute and the last five consecutive measurements (the current one, one minute before, two minutes before, three minutes before, and four minutes before) are stored in the memory120. At step520, the inputs include the last five consecutive measurements stored in the memory120(e.g. the last five consecutive temperature measurements and the last five consecutive humidity measurements). FIG.4is a schematic representation of the neural network inference engine112illustrating the inputs and the outputs used by the neural network inference engine112when performing step520.
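The storage of the last five consecutive measurements described above can be sketched with a fixed-size rolling buffer; the class name and size parameter are illustrative assumptions.

```python
# Illustrative rolling buffer holding the last N consecutive measurements
# (N=5 in the example above: the current one plus the four previous minutes).
from collections import deque

class MeasurementBuffer:
    def __init__(self, size=5):
        self.values = deque(maxlen=size)   # oldest measurement drops automatically

    def add(self, measurement):
        self.values.append(measurement)

    def as_inputs(self):
        # Consecutive measurements used as inputs of the inference engine.
        return list(self.values)
```

One such buffer would be kept per environmental characteristic (temperature, humidity, CO2 level), each fed by its corresponding sensor.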
FIG.5is a detailed representation of an exemplary neural network implemented by the neural network inference engine112. The neural network includes an input layer with four neurons for receiving four input parameters (the current temperature in the area, the current humidity level in the area, the number of persons present in the area, and the target temperature). The neural network includes an output layer with two neurons for outputting two output values (the inferred adjustment of the speed of a fan and the inferred adjustment of the pressure generated by a compressor). The neural network includes three intermediate hidden layers between the input layer and the output layer. All the layers are fully connected. The number and type of inputs (four inFIG.5) and outputs (two inFIG.5) of the neural network are for illustration purposes only. Any combination of inputs and outputs supported by the present description can be applied to the neural network illustrated inFIG.5. The number of intermediate hidden layers is an integer greater than or equal to 1 (FIG.5represents three intermediate hidden layers for illustration purposes only). The number of neurons in each intermediate hidden layer may vary. During the training phase of the neural network, the number of intermediate hidden layers and the number of neurons for each intermediate hidden layer are selected, and may be adapted experimentally. The generation of the outputs based on the inputs using weights allocated to the neurons of the neural network is well known in the art. The architecture of the neural network, where each neuron of a layer (except for the first layer) is connected to all the neurons of the previous layer is also well known in the art. Reference is now made concurrently toFIGS.1,3A-D and6, whereFIG.6illustrates the usage of the method500in a large environment control system. A plurality of environment controllers100implementing the method500are deployed at different locations.
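A minimal sketch of the fully connected network of FIG.5 (four inputs, three intermediate hidden layers, two outputs) is given below. The hidden layer width of 8 and the tanh activation are assumptions chosen for illustration; the disclosure only fixes the number of input and output neurons.

```python
# Fully connected network: each neuron of a layer (except the first) is
# connected to all the neurons of the previous layer.
import math
import random

random.seed(0)
LAYER_SIZES = [4, 8, 8, 8, 2]   # input, three hidden layers, output (FIG.5)

weights = [[[random.uniform(-0.5, 0.5) for _ in range(n_in)]
            for _ in range(n_out)]
           for n_in, n_out in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]

def infer(inputs):
    # inputs: [current temperature, humidity level, number of persons, target temperature]
    a = list(inputs)
    for depth, layer in enumerate(weights):
        z = [sum(w * x for w, x in zip(neuron, a)) for neuron in layer]
        # tanh on hidden layers, linear output layer (an assumption)
        a = z if depth == len(weights) - 1 else [math.tanh(v) for v in z]
    return a   # [fan speed adjustment, compressor pressure adjustment]
```

Changing `LAYER_SIZES` illustrates how the number of hidden layers and neurons per hidden layer can be adapted experimentally during training.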
Only two environment controllers100are represented inFIG.6for illustration purposes, but any number of environment controllers100may be deployed. Each environment controller100represented inFIG.6corresponds to the environment controller100represented inFIG.1. Each environment controller100interacts with the same entities as represented inFIG.1, such as the controlled appliance300(the sensors illustrated inFIG.1are not represented inFIG.6for simplification purposes). In an exemplary configuration, the different locations are within a building, and the environment controllers100are deployed at different floors of the building, different rooms of the building, etc. The training server400is also deployed in the building. Alternatively, the training server400is deployed at a remote location from the building, for example in a remote cloud infrastructure. In another configuration, the environment controllers100are deployed at different buildings. The training server400is deployed in one of the buildings, or at a remote location from the buildings. Each environment controller100receives an initial predictive model from the centralized training server400. The same initial predictive model is used for all the environment controllers100. Each environment controller100generates training data when using the initial predictive model, and the training data are transmitted to the training server400. The training server400uses the training data from all the environment controllers100to improve the initial predictive model. At some point, an improved predictive model generated by the training server400is transmitted to the environment controllers100, and used by all the environment controllers100in place of the initial predictive model. 
Several iterations of this process can be performed, where the environment controllers100use a current version of the predictive model to generate training data, and the training data are used by the training server400to generate a new version of the predictive model. The environment controllers100control environments having substantially similar characteristics, so that the same predictive model is adapted to all the environment controllers100. For example, the environment controllers100control the environment of rooms having substantially similar geometric characteristics, and/or substantially the same type of human activity in the rooms, etc. Details of the components of the training server400are also represented inFIG.6. The training server400comprises a processing unit410, memory420, and a communication interface430. The training server400may comprise additional components, such as another communication interface430, a user interface440, a display450, etc. The characteristics of the processing unit410of the training server400are similar to the previously described characteristics of the processing unit110of the environment controller100. The processing unit410executes the neural network training engine411and a control module414. The characteristics of the memory420of the training server400are similar to the previously described characteristics of the memory120of the environment controller100. The characteristics of the communication interface430of the training server400are similar to the previously described characteristics of the communication interface130of the environment controller100. Reference is now made concurrently toFIGS.1,3A-D,6and7.FIG.7represents a method600for improving a predictive model of a neural network used by the environment controllers100(more specifically by the neural network inference engines112) through reinforcement learning. At least some of the steps of the method600represented inFIG.7are implemented by the training server400. 
The present disclosure is not limited to the method600being implemented by the training server400, but is applicable to any type of computing device capable of implementing the steps of the method600. A dedicated computer program has instructions for implementing at least some of the steps of the method600. The instructions are comprised in a non-transitory computer program product (e.g. the memory420) of the training server400. The instructions provide for improving the predictive model of the neural network used by the environment controllers100(more specifically by the neural network inference engines112) through reinforcement learning, when executed by the processing unit410of the training server400. The instructions are deliverable to the training server400via an electronically-readable media such as a storage media (e.g. CD-ROM, USB key, etc.), or via communication links (e.g. via a communication network through the communication interface430). The instructions of the dedicated computer program executed by the processing unit410implement the neural network training engine411and the control module414. The neural network training engine411provides functionalities for training a neural network, making it possible to improve a predictive model (more specifically to optimize weights of the neural network), as is well known in the art. The control module414provides functionalities allowing the training server400to gather data used for the training of the neural network. An initial predictive model is generated by the processing unit410of the training server400and transmitted to the plurality of environment controllers100via the communication interface430of the training server400. Alternatively, the initial predictive model is generated by and received from another computing device (via the communication interface430of the training server400). The initial predictive model is also transmitted by the other computing device to the plurality of environment controllers100.
The generation of the initial predictive model is out of the scope of the present disclosure. Generating the initial predictive model comprises defining a number of layers of the neural network, a number of neurons per layer, the initial value for the weights of the neural network, etc. The definition of the number of layers and the number of neurons per layer is performed by a person highly skilled in the art of neural networks. Different algorithms (well documented in the art) can be used for allocating an initial value to the weights of the neural network. For example, each weight is allocated a random value within a given interval (e.g. a real number between −0.5 and +0.5), which can be adjusted if the random value is too close to a minimum value (e.g. −0.5) or too close to a maximum value (e.g. +0.5). The execution of the method600by the training server400and the execution of the method500by the environment controllers100provide for improving the initial predictive model (more specifically to optimize the weights of the predictive model). At the end of the training phase, an improved predictive model is ready to be used by the neural network inference engines112of the plurality of environment controllers100. Optionally, the improved predictive model can be used as a new initial predictive model, which can be further improved by implementing the aforementioned procedure again. The method600comprises the step605of storing the initial predictive model in the memory420. Step605is performed by the processing unit410. The initial predictive model comprises the weights of the neural network implemented by the neural network training engine411. The method600comprises the step610of receiving a plurality of training data sets via the communication interface430. Step610is performed by the control module414executed by the processing unit410. The training data sets are received from the plurality of environment controllers100. 
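The weight initialization described above (a random value within a given interval, adjusted when too close to the interval bounds) might be coded as follows; the margin of 0.05 is an illustrative assumption, as the disclosure does not specify how close is "too close".

```python
# Random initial weight in [low, high], nudged away from the bounds when
# the drawn value is too close to the minimum or maximum value.
import random

def init_weight(low=-0.5, high=0.5, margin=0.05):
    w = random.uniform(low, high)
    if w < low + margin:        # too close to the minimum value (e.g. -0.5)
        w = low + margin
    elif w > high - margin:     # too close to the maximum value (e.g. +0.5)
        w = high - margin
    return w
```

Each weight of the neural network would be allocated its own value with such a function before the training phase begins.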
Step610corresponds to step550of the method500executed by the environment controllers100. Each training data set comprises inputs of the neural network implemented by the neural network training engine411, one or more output of the neural network implemented by the neural network training engine411, and at least one metric. The inputs comprise at least one environmental characteristic value in the area under the control of the corresponding environment controller100(determined at step510of the method500) and at least one set point (received at step515of the method500). The one or more output comprises one or more command for controlling the controlled appliance300(generated at step525by modifying the command generated at step520of the method500). The at least one metric (generated at step545of the method500) is representative of an execution of the one or more command by the controlled appliance300. As mentioned previously, the at least one environmental characteristic value includes one or more of the following: a current temperature in the area, a current humidity level in the area, a current CO2 level in the area, and a current occupancy of the area. Alternatively, the at least one environmental characteristic value includes one or more of the following: a plurality of consecutive temperature measurements in an area, a plurality of consecutive humidity level measurements in the area, a plurality of consecutive carbon dioxide (CO2) level measurements in the area, and a plurality of consecutive determinations of an occupancy of the area. The at least one set point includes one or more of the following: a target temperature, a target humidity level, and a target CO2 level. Examples of the one or more command have also been described previously. Optionally, the inputs include additional parameters used at step520of the method500. For example, the inputs further include at least one characteristic of the area (determined at optional step507of the method500). 
As mentioned previously, the at least one characteristic of the area includes one or more of the following: an area type identifier selected among a plurality of area type identifiers, one or more geometric characteristics of the area, and a human activity in the area. Optionally, the outputs include additional parameters different from command(s). As illustrated inFIG.7, steps615and620of the method600are repeated for each training data set received at step610. The method600comprises the step615of determining a value of a reinforcement signal based on the at least one metric of a given training data set (among the plurality of training data sets received at step610). Step615is performed by the control module414executed by the processing unit410. The value of the reinforcement signal is one of positive reinforcement (also referred to as a positive reward) or negative reinforcement (also referred to as a negative reward). For example, the control module414implements a set of rules (stored in the memory420) to determine the value of the reinforcement signal. The set of rules is designed for evaluating the efficiency of the modified command(s) transmitted at step530of the method500for reaching the set point(s) received at step515of the method500. If the command(s) is evaluated as being efficient, the outcome is a positive reinforcement value for the reinforcement signal. If the command(s) is evaluated as not being efficient, the outcome is a negative reinforcement value for the reinforcement signal. The reinforcement signal takes only two Boolean values: positive reinforcement or negative reinforcement. Alternatively, the reinforcement signal is expressed as a percentage representing a relative efficiency. For example, positive reinforcement includes the values between 51 and 100%, while negative reinforcement includes the values between 0 and 49%. Alternatively, the reinforcement signal takes one among a pre-defined set of values (e.g. 
+1, +2, +3 for positive reinforcement and −1, −2, −3 for negative reinforcement). The neural network training engine411is adapted and configured to adapt the weights of the predictive model based on values chosen for implementing the reinforcement signals. A person skilled in the art would readily understand that the values of the reinforcement signal are not limited to the previous examples. The determination of the value of the reinforcement signal may further take into consideration the at least one set point included in the inputs received at step610. Alternatively or complementarily, the determination of the value of the reinforcement signal may further take into consideration the at least one environmental characteristic value in an area included in the inputs received at step610. Alternatively or complementarily, the determination of the value of the reinforcement signal may further take into consideration the characteristic(s) of the area included in the inputs received at step610(if optional step507of the method500is performed). Following are exemplary sets of rules for evaluating the efficiency of the command(s) transmitted at step530of the method500, based on a target temperature (received at step515of the method500) and a metric consisting of one or more updated temperature measurement (determined at step545of the method500). The target temperature and the one or more updated temperature measurement are comprised in the training data transmitted at step550of the method500and received at step610of the method600. A first exemplary set of rules uses a single updated temperature measurement. The reinforcement signal is positive if the absolute difference between the target temperature and the updated temperature measurement is lower than a threshold (e.g. 0.5 degree Celsius). The reinforcement signal is negative otherwise. A second exemplary set of rules uses several consecutive measurements of the updated temperature.
For instance, the reinforcement signal is positive if the absolute difference between the target temperature and a first measurement of the updated temperature determined 5 minutes after transmitting the commands (at step530of the method500) is lower than a first threshold (e.g. 2 degrees Celsius) AND the absolute difference between the target temperature and a second measurement of the updated temperature determined 10 minutes after transmitting the commands (at step530of the method500) is lower than a second threshold (e.g. 0.5 degree Celsius). The reinforcement signal is negative otherwise. A third exemplary set of rules further uses the volume of the area (determined at step507of the method500, transmitted at step550of the method500and received at step610of the method600). The reinforcement signal is positive if the absolute difference between the target temperature and the updated temperature measurement is lower than a first threshold (e.g. 0.5 degree Celsius) AND the volume of the area is lower than 150 cubic meters. The reinforcement signal is also positive if the absolute difference between the target temperature and the updated temperature measurement is lower than a second threshold (e.g. 1 degree Celsius) AND the volume of the area is higher than 150 cubic meters. The reinforcement signal is negative otherwise. A fourth exemplary set of rules further uses the human activity in the area, and more specifically the type of activity performed by humans occupying the area (determined at step507of the method500, transmitted at step550of the method500and received at step610of the method600). The reinforcement signal is positive if the absolute difference between the target temperature and the updated temperature measurement is lower than a first threshold (e.g. 1 degree Celsius) AND the area is an office room. 
The reinforcement signal is also positive if the absolute difference between the target temperature and the updated temperature measurement is lower than a second threshold (e.g. 2 degrees Celsius) AND the area is a storage room. The reinforcement signal is negative otherwise. A fifth exemplary set of rules also uses the human activity in the area, and more specifically periods of time when the area is occupied by humans (determined at step507of the method500, transmitted at step550of the method500and received at step610of the method600). The reinforcement signal is positive if the absolute difference between the target temperature and the updated temperature measurement is lower than a first threshold (e.g. 1 degree Celsius) AND the current time is within a period of occupation of the area (e.g. between 8 am and 6 pm from Monday to Saturday). The reinforcement signal is also positive if the absolute difference between the target temperature and the updated temperature measurement is lower than a second threshold (e.g. 2 degrees Celsius) AND the current time is within a period of inoccupation of the area (e.g. anytime except between 8 am and 6 pm from Monday to Saturday). The reinforcement signal is negative otherwise. Following is another exemplary set of rules for evaluating the efficiency of the command(s) transmitted at step530of the method500, based on metric(s) consisting of one or more measurement of the time required for reaching the target temperature (received at step515of the method500). The one or more measurement of the required time is determined at step545of the method500. The one or more measurement of the required time is comprised in the training data transmitted at step550of the method500and received at step610of the method600. A first exemplary set of rules uses a single measurement consisting of the time required for reaching the target temperature.
The reinforcement signal is positive if the measurement of the required time is lower than a threshold (e.g. 5 minutes). The reinforcement signal is negative otherwise. A second exemplary set of rules uses several consecutive measurements of the time required for reaching the target temperature. For instance, the reinforcement signal is positive if a first measurement of the required time for reaching a temperature halfway between the current temperature measurement (determined at step510of the method500) and the target temperature is lower than a first threshold (e.g. 2 minutes) AND a second measurement of the required time for reaching the target temperature is lower than a second threshold (e.g. 5 minutes). The reinforcement signal is negative otherwise. In addition to the one or more measurement of the time required for reaching the target temperature, other sets of rules may be defined, which further use the characteristics of the area determined at step507of the method500(e.g. volume of the area, human activity in the area, periods of time when the area is occupied by humans, etc.), as illustrated previously. The previous exemplary sets of rules are for illustration purposes only. A person skilled in the art would be capable of implementing other sets of rules particularly adapted to the specific inputs and outputs used by the neural network inference engine112at step520of the method500. The method600comprises the step620of executing the neural network training engine411to update the weights of the neural network based on the inputs (of the given training data set), the one or more output (of the given training data set), and the value of the reinforcement signal (determined at step615). The execution of the neural network training engine411is performed by the processing unit410. The neural network training engine411implements the neural network using the weights of the predictive model stored at step605.
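As a concrete illustration, the first and third exemplary temperature-based sets of rules described above might be coded as follows. The function names are assumptions; the thresholds (0.5 and 1 degree Celsius, 150 cubic meters) mirror the examples in the text, where a positive return value corresponds to positive reinforcement.

```python
# Illustrative coding of two of the exemplary sets of rules.

def rule_set_1(target_temp, updated_temp, threshold=0.5):
    # First set: positive reinforcement when the updated temperature is
    # within the threshold of the target temperature (e.g. 0.5 degree Celsius).
    return abs(target_temp - updated_temp) < threshold

def rule_set_3(target_temp, updated_temp, area_volume):
    # Third set: the threshold is relaxed (1 degree instead of 0.5 degree)
    # for areas larger than 150 cubic meters.
    if area_volume < 150:
        return abs(target_temp - updated_temp) < 0.5
    return abs(target_temp - updated_temp) < 1.0
```

The other rule sets (consecutive measurements, area type, occupation periods, time to reach the target) would be additional predicates of the same shape evaluated by the control module414at step615.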
The neural network implemented by the neural network training engine411corresponds to the neural network implemented by the neural network inference engine112(same number of layers, same number of neurons per layer). As mentioned previously,FIG.5is a detailed exemplary representation of such a neural network. Reinforcement learning is a technique well known in the art of artificial intelligence. Having a set of inputs and the corresponding output(s), the weights of the predictive model are updated to force the generation of the corresponding output(s) when presented with the inputs, if the value of the reinforcement signal is a positive reinforcement. Complementarily, having a set of inputs and the corresponding output(s), the weights of the predictive model are updated to prevent the generation of the corresponding output(s) when presented with the inputs, if the value of the reinforcement signal is a negative reinforcement. Thus, having a given set of inputs and a candidate set of corresponding output(s), the neural network training engine411learns through reinforcement learning which one(s) among the candidate set of corresponding output(s) is (are) the best fit for the given set of input(s). In the context of the present disclosure, the neural network training engine411learns (through reinforcement learning) which command(s) is/are the best fit for reaching the set point(s), when presented with the current environmental characteristic value(s), the set point(s) and optionally the characteristic(s) of the area. Additionally, during the training phase, the number of intermediate hidden layers of the neural network and the number of neurons per intermediate hidden layer can be adjusted to improve the accuracy of the predictive model. At the end of the training phase, the predictive model generated by the neural network training engine411includes the number of layers, the number of neurons per layer, and the weights. 
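The reinforcement update described above (weights nudged to force the generation of the outputs under positive reinforcement, and to prevent it under negative reinforcement) can be sketched on a single linear neuron. The neuron, the error-based rule, and the learning rate are illustrative assumptions, not the training engine411 itself.

```python
# Conceptual sketch: move the prediction toward the produced output on
# positive reinforcement (+1), away from it on negative reinforcement (-1).

def reinforce(weights, inputs, produced_output, reinforcement, lr=0.01):
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = produced_output - prediction
    return [w + reinforcement * lr * error * x for w, x in zip(weights, inputs)]
```

Repeated over many training data sets, such updates make the network favor the commands that earned positive rewards for a given set of inputs.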
However, the number of neurons for the input and output layers shall not be changed. Although not represented inFIG.7for simplification purposes, the modifications to the weights of the neural network performed at step620are stored in the memory420of the training server400. FIG.8is a schematic representation of the neural network training engine411illustrating the inputs, the one or more output and the value of the reinforcement signal used by the neural network training engine411when performing step620. Optionally, as illustrated inFIG.7, several iterations of steps610-615-620are repeated if a plurality of batches of training data sets are received at step610. The execution of steps610-615-620is implementation dependent. In a first exemplary implementation, as soon as the training server400receives training data set(s) from a given environment controller100at step610, steps615and620are immediately performed. In a second exemplary implementation, the training server400waits for the reception of a substantial amount of training data sets from environment controller(s)100at step610, before performing steps615and620. In this second implementation, the received training data sets are stored in the memory420before being used. Furthermore, some of the received training data sets may be discarded by the training server400(e.g. a training data set is redundant with another already received training data set, at least some of the data contained in the training data set are considered erroneous or non-usable, etc.). At the end of the training phase implemented by steps610-615-620, the neural network is considered to be properly trained, and an updated predictive model comprising a final version of the updated weights is transmitted to the environment controllers100, as illustrated inFIG.6. Various criteria may be used to determine when the neural network is considered to be properly trained, as is well known in the art of neural networks.
This determination and the associated criteria are out of the scope of the present disclosure. The method600comprises the step625of transmitting an update of the predictive model (originally stored at step605) comprising the updated weights (updated by the repetition of step620) to the plurality of environment controllers100via the communication interface430. Step625is performed by the control module414executed by the processing unit410. The update of the predictive model of the neural network generally only involves an update of the weights (the number of layers of the neural network and the number of neurons per layer are generally unchanged). Step625corresponds to step570of the method500executed by the environment controllers100. From this point on, the environment controllers100enter an operational mode, where the updated predictive model is used for managing the environment (generating command(s) for controlling the controlled appliances300) of the respective areas under the control of the environment controllers100. During the execution of the method600for improving the initial predictive model, only a few environment controllers100may be operating in a training mode, for the sole purpose of providing the training data sets used by the training server400when executing the method600. Once the updated predictive model is available at the end of the training phase, it can be distributed to a larger number of environment controllers100entering the operational mode. Additionally, the methods500and600can be used to further improve the updated predictive model used in the operational mode, as described previously. Although the present disclosure has been described hereinabove by way of non-restrictive, illustrative embodiments thereof, these embodiments may be modified at will within the scope of the appended claims without departing from the spirit and nature of the present disclosure. | 72,780 |
11861483 | DETAILED DESCRIPTION Hereinafter, embodiments of the inventive concept will be described clearly and in detail so that those skilled in the art may easily carry out the inventive concept. The inventive concept relates to a circuit implemented in a semiconductor device in order to perform an operation of a neural network. A neural network of the inventive concept may be an artificial neural network (ANN) capable of processing data or information in a similar manner to a biological neural network. The neural network may include multiple layers including artificial neurons similar to biological neurons, and synapses for connecting the multiple layers. Hereinafter, a spike neural network that processes a spike signal having a pulse form and toggling for a short period of time will be representatively described. However, a circuit according to an embodiment of the inventive concept is not limited to a spike neural network, and may be used to implement other neural networks. FIG.1is a block diagram exemplarily illustrating a spike neural network circuit according to an embodiment of the inventive concept. A spike neural network circuit100may include an axon circuit110, a synaptic circuit120, and a neuron circuit130. The axon circuit110may include axons generating input spike signals. Similar to an axon of a biological neural network, an axon of the axon circuit110may perform a function of outputting a signal to another neuron. For example, each of the axons of the axon circuit110may generate an input spike signal based on data or information input to the spike neural network circuit100from the outside. In another example, each of the axons of the axon circuit110may first receive output spike signals output from the neuron circuit130depending on input spike signals transmitted to the synaptic circuit120, and then generate a new input spike signal based on feedback output spike signals. 
The input spike signal may be a pulse signal toggling for a short period of time. The axon circuit110may generate input spike signals and transmit the input spike signals to the synaptic circuit120. The synaptic circuit120may connect the axon circuit110to the neuron circuit130. The synaptic circuit120may include synapses121determining (deciding) the connection and the connection strength of the axons of the axon circuit110and neurons of the neuron circuit130. Each of the synapses121may have a unique or a variable weight. Each of the synapses121may receive an input spike signal, and apply a weight to the input spike signal. The weight may be a numerical value representing the correlation between the axon and the neuron described above, the connection strength between the axons of the axon circuit110and the neurons of the neuron circuit130, or the correlation of a (subsequent) neuron of the neuron circuit130with respect to an input spike signal. Each of the synapses121may output a weight to the neuron circuit130depending on an input spike signal. Each of the synapses121generates an operation signal based on the input spike signal and the weight, and outputs the operation signal to the neuron circuit130. The spike neural network circuit100may include a plurality of layers each including multiple neurons. Some of the synapses121of the synaptic circuit120may represent the correlation between a first layer and a second layer, and the other synapses121of the synaptic circuit120may represent the correlation between a third layer and a fourth layer. That is, the synapses121of the synaptic circuit120may represent correlations between different layers. Referring toFIG.1, the synapses121are illustrated as being disposed on a two-dimensional array. Input spike signals may be transmitted in a first direction toward the synaptic circuit120from the axon circuit110.
An operation signal in which an input spike signal is applied with a weight (that is, an operation result) may be transmitted in a second direction toward the neuron circuit130from the synaptic circuit120. For example, the first direction and the second direction may be perpendicular to each other. However, unlike what is shown inFIG.1, the synapses121may be disposed on a three-dimensional array. Neurons131of the neuron circuit130may respectively receive operation signals in which input spike signals are applied with weights in the synaptic circuit120. Similar to a dendrite of a biological neural network, each of the neurons131may perform a function of receiving a signal output from a different neuron. Referring toFIG.1, each of the neurons131may be connected to the synapses121disposed along the second direction, and may receive operation signals output from the synapses121. In each of the neurons131, the operation signals of the synapses121disposed along the second direction may be accumulated. However, the number, arrangement, and the like of the synapses121connected to each of the neurons131are not limited to those shown inFIG.1. Each of the neurons131may compare a sum signal in which the operation signals of the synapses121are accumulated with a threshold signal (that is, a reference signal) and generate an output spike signal when the sum signal is greater than the threshold signal (that is, firing of a neuron). Output spike signals of the neuron circuit130may be provided back to the axon circuit110, may be output to the outside of the spike neural network circuit100, or may be output to another component of the spike neural network circuit100. FIG.2is a block diagram more specifically illustrating synapses of a synaptic circuit and neurons of a neuron circuit shown inFIG.1.FIG.2will be described with reference toFIG.1. A spike neural network circuit100_1may include first to third synapses121_1to121_3and a neuron131_1.
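The accumulate-and-fire flow described above (weighted input spikes accumulate into a sum signal, and a neuron fires and resets when the sum reaches a threshold) can be sketched behaviorally. The following Python model is purely illustrative; the weights, threshold, and spike pattern are assumptions, not values from the inventive concept.

```python
# Illustrative behavioral model of the spike flow: synapses apply
# weights to input spikes, the neuron accumulates the weighted
# operation signals, and a fire resets the accumulated sum.

def neuron_step(spikes, weights, membrane, threshold):
    """Accumulate weighted spikes; fire and reset on threshold crossing."""
    membrane += sum(w for s, w in zip(spikes, weights) if s)
    if membrane >= threshold:
        return 0.0, True    # reset the sum signal, emit an output spike
    return membrane, False

membrane = 0.0
fired = []
for _ in range(6):
    spikes = [1, 0, 1]      # assumed input spike pattern per step
    membrane, out = neuron_step(spikes, [0.5, 0.25, 0.5], membrane, 2.0)
    fired.append(out)
```

With these assumed values the neuron fires on every second step, mirroring the repeated charge-and-fire cycles shown later inFIG.6.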
The spike neural network circuit100_1is the spike neural network circuit100ofFIG.1. For convenience of description, the axon circuit110is not shown, and only some synapses121_1,121_2, and121_3of the synaptic circuit120are shown inFIG.2. In addition, only one neuron131_1of the neuron circuit130is shown inFIG.2. A first synapse121_1may include a transistor MP1and a current source CS1. The current source CS1receives a first weight (Weight 1), and may generate a current corresponding to the first weight. For example, the current source CS1may be a transistor connected between a power supply voltage VDD and the transistor MP1. A transistor of the current source CS1may receive a voltage representing the first weight through a gate terminal. A source terminal of the transistor of the current source CS1may be connected to the power supply voltage VDD. A drain terminal of the transistor of the current source CS1may be connected to a source terminal of the transistor MP1. The source terminal and the drain terminal of the transistor may be referred to as a first end or a second end, respectively. The current source CS1may output the current corresponding to the first weight to the transistor MP1. The transistor MP1may receive a first input spike signal (Input 1: for example, a negative pulse) through a gate terminal. A source terminal of the transistor MP1may be connected to the current source CS1. A drain terminal of the transistor MP1may be connected to a transmission line. The transistor MP1may be a switch which is turned on or turned off depending on the first input spike signal. When the transistor MP1is turned on depending on the first input spike signal, the transistor MP1may output a current which is output from the current source CS1depending on the first input spike signal, that is, an operation signal, to the transmission line. The first synapse121_1may generate a first operation signal (Operation1) based on the first input spike signal and the first weight. 
The first operation signal may be determined by the product of the first input spike signal and the first weight. In an embodiment, the transistor MP1is illustrated as being a p-channel metal-oxide semiconductor (PMOS). However, the embodiment of the inventive concept is not limited thereto. A PMOS, an n-channel metal-oxide semiconductor (NMOS), or a combination of the PMOS and the NMOS may be implemented as the switch. The transistor of the current source CS1may also be a PMOS, an NMOS, or a combination of the PMOS and the NMOS. In an embodiment, the first synapse121_1may further include a digital-to-analog converter (DAC). The DAC of the first synapse121_1may receive digital bits representing the first weight and output a voltage representing the first weight to the current source CS1. The first synapse121_1may further include a register, a memory cell (for example, a static random access memory (SRAM) cell, a dynamic random access memory (DRAM) cell, a latch, a NAND flash memory cell, a NOR flash memory cell, a resistive random access memory (RRAM) cell, a ferroelectric random access memory (FRAM) cell, a phase change random access memory (PRAM) cell, or a magnetic random access memory (MRAM) cell), and the like for storing digital bits. In an embodiment, as shown inFIG.2, the first synapse121_1may include only the current source CS1and the transistor MP1, and the above-described DAC and the registers or the memory cells for storing digital bits may be included in a semiconductor device in which the spike neural network circuit100is implemented, but may be separated from the synaptic circuit120. In this case, the DAC separated from the synaptic circuit120may transmit a voltage representing a weight to the synaptic circuit120, or the registers or the memory cells for storing digital bits may transmit the digital bits to the synaptic circuit120. In any case, the current source CS1of the first synapse121_1may receive the voltage representing the first weight.
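The weight path just described (digital bits, a DAC voltage, then a current source) can be sketched as follows. The 4-bit resolution, supply voltage, and linear current model are illustrative assumptions rather than details of the inventive concept.

```python
# Illustrative sketch of the weight path: stored digital bits are
# converted by a DAC into a gate voltage, which sets the current of
# the synapse's current source.

VDD = 1.2    # assumed supply voltage (volts)
BITS = 4     # assumed weight resolution (bits)

def dac(code):
    """Map a BITS-wide weight code to a voltage in [0, VDD)."""
    return VDD * code / (1 << BITS)

def weight_current(code, gm=1e-4):
    """Assumed linear model: synapse current proportional to DAC output."""
    return gm * dac(code)
```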
A second synapse121_2may be implemented in the same manner as the first synapse121_1. The second synapse121_2may receive a voltage representing a second weight (Weight 2), and may receive a second input spike signal (Input 2). The second synapse121_2may generate a second operation signal (Operation2) based on the second input spike signal and the second weight. A third synapse121_3may be implemented in the same manner as the first synapse121_1. The third synapse121_3may receive a voltage representing a third weight (Weight 3), and may receive a third input spike signal (Input 3). The third synapse121_3may generate a third operation signal (Operation3) based on the third input spike signal and the third weight. Here, the first to third weights may be the same or different from each other. The first to third input spike signals may also be the same or different from each other. The neuron131_1may include a comparator132_1which compares a membrane signal (a sum signal), in which the operation signals output from the first to third synapses121_1to121_3are combined, with a threshold signal. The membrane signal may be generated based on the operation signals. The comparator132_1may compare a voltage Vm of the membrane signal with a voltage Vth of the threshold signal. The neuron131_1may generate an output spike signal (an output) based on a comparison result of the comparator132_1. For example, the neuron131_1may output an output spike signal when the voltage Vm of the membrane signal becomes greater (higher) than the voltage Vth of the threshold signal or when the voltage Vm of the membrane signal reaches the voltage Vth of the threshold signal (fire). In another example, the neuron131_1may output an output spike signal when the voltage Vm of the membrane signal becomes smaller (lower) than the voltage Vth of the threshold signal or when the voltage Vm of the membrane signal reaches the voltage Vth of the threshold signal (fire). The neuron131_1may include a bias circuit133_1.
The bias circuit133_1may conditionally supply a bias current to the comparator132_1depending on the membrane signal. The comparator132_1may perform a comparison operation based on the bias current and may be operated by the bias current. The bias circuit133_1may be implemented separated from the comparator132_1, or may be included in the comparator132_1. Since the spike neural network circuit100is operated based on an input spike signal and an output spike signal, an interval (period, section, etc.) in which the voltage Vm of the membrane signal is greater than the voltage Vth of the threshold signal is shorter than an interval in which the voltage Vm of the membrane signal is smaller (less) than the voltage Vth of the threshold signal. The neuron131_1may be operated in most of the interval in which the voltage Vm of the membrane signal is smaller than the voltage Vth of the threshold signal, and the comparison operation of the neuron131_1is only required when the voltage Vm of the membrane signal is relatively high. Accordingly, the bias circuit133_1may not continuously supply (provide) the bias current. The bias circuit133_1may not supply the bias current to the comparator132_1when the voltage Vm of the membrane signal is relatively low, and may supply the bias current to the comparator132_1when the voltage Vm of the membrane signal is relatively high. As a result, a current and a voltage consumed in the comparator132_1may be reduced or minimized. Particularly, as the number of the neurons131of the neuron circuit130is increased, the current and voltage reduction described above is more effective. The bias current is conditionally supplied according to an operation condition (a voltage level of the membrane signal), and thus, may be referred to as a conditional bias current, and the bias circuit133_1may be referred to as a conditional bias circuit.
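The saving from the conditional bias can be illustrated with a short sketch; the gating threshold, bias current, and membrane samples below are assumptions chosen only to show the idea.

```python
# Illustrative sketch of the conditional bias: the comparator's bias
# current flows only while the membrane voltage is high enough to
# turn the gating transistor on, so static power is spent only near
# a potential fire.

VTH_GATE = 0.4   # assumed threshold voltage of the gating transistor
I_BIAS = 1e-6    # assumed bias current (amperes)

def bias_current(vm):
    """Bias current reaches the comparator only when the gate conducts."""
    return I_BIAS if vm > VTH_GATE else 0.0

# Membrane samples: mostly low, briefly high just before a fire.
vm_trace = [0.1, 0.2, 0.3, 0.5, 0.7, 0.2, 0.1, 0.1]
active = [bias_current(v) > 0 for v in vm_trace]
duty = sum(active) / len(active)   # fraction of time the bias is on
```

Here the bias is on for only a quarter of the samples, which is the effect the text describes: the interval above the threshold is much shorter than the interval below it.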
The spike neural network circuit100_1may include a capacitor Cm in which charges are accumulated (integrated) by the first to third operation signals (currents) output from the first to third synapses121_1to121_3. A first end of the capacitor Cm may be connected to the first to third synapses121_1to121_3, and a second end of the capacitor Cm may be connected to a power supply voltage (ground voltage) GND. The capacitor Cm may be charged by currents output from the first to third synapses121_1to121_3and corresponding to the first to third weights. The voltage Vm of the capacitor Cm is the voltage Vm of the membrane signal, and may be a value in which currents output from the first to third synapses121_1to121_3are accumulated. The voltage Vm of the capacitor Cm may be a value determined by the first to third weights output from the first to third synapses121_1to121_3to the first to third input spike signals. The voltage Vm of the capacitor Cm may be provided to the neuron131_1. The number of synapses connected to the capacitor Cm through the transmission line is illustrated as being 3 inFIG.2. However, the embodiment of the present invention is not limited thereto. The spike neural network circuit100may further include other capacitors in which charges are accumulated by currents output from other synapses. The capacitor Cm may be referred to as a membrane capacitor or a membrane. The spike neural network circuit100_1may include a transistor MN1which discharges the charges accumulated in the capacitor Cm depending on a leakage signal. The transistor MN1may receive the leakage signal through a gate terminal. The transistor MN1may be connected between the capacitor Cm and the power supply voltage GND. The transistor MN1may be connected to the capacitor Cm in parallel. The transistor MN1may control the rate (speed) at which operation signals output from the first to third synapses121_1to121_3are accumulated in the capacitor Cm. 
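The charge and discharge dynamics of the membrane node can be sketched in discrete time. The capacitance, time step, and fractional leak below are illustrative assumptions standing in for Cm, the synapse currents, and the leakage through MN1.

```python
# Illustrative discrete-time sketch of the membrane node: synapse
# currents charge Cm, and a fractional leak per step stands in for
# the discharge through the leakage transistor MN1.

CM = 1e-12    # assumed membrane capacitance (farads)
DT = 1e-9     # assumed time step (seconds)
LEAK = 0.1    # assumed fractional leak per step

def step_vm(vm, synapse_currents):
    vm += sum(synapse_currents) * DT / CM   # charge accumulation on Cm
    return vm * (1.0 - LEAK)                # leakage discharge via MN1

vm = 0.0
for _ in range(3):
    vm = step_vm(vm, [50e-6, 0.0, 30e-6])   # two of three synapses active
```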
A voltage of the leakage signal may be pre-defined. The transistor MN1is illustrated as being an NMOS inFIG.2, but may be implemented using a PMOS, an NMOS, or a combination of the PMOS and the NMOS. In an embodiment, unlike what is shown inFIG.2, the transistors MP1to MP3and the current sources CS1to CS3of the first to third synapses121_1to121_3may each be implemented using an n-channel metal-oxide semiconductor (NMOS) connected between a transmission line and the power supply voltage GND. In this case, the capacitor Cm may be connected between the transmission line and the power supply voltage VDD, and the transistor MN1may be implemented using a PMOS instead of an NMOS. FIG.3exemplarily illustrates a block diagram of a comparator ofFIG.2.FIG.3will be described with reference toFIG.2. A comparator132_1amay be the comparator132_1ofFIG.2, and a bias circuit133_1amay be included in the comparator132_1a, or may be the bias circuit133_1ofFIG.2. The bias circuit133_1amay include a transistor MN2which receives the voltage Vm of the membrane signal through a gate terminal and a transistor MN3which receives a voltage Vbias1of a first bias signal through a gate terminal. The transistor MN2may be turned on or turned off depending on the voltage Vm of the membrane signal. When the voltage Vm of the membrane signal is greater than a threshold voltage of the transistor MN2, the transistor MN2may be turned on. A drain terminal of the transistor MN2may be connected to a source terminal of the transistor MN3. A source terminal of the transistor MN2may be connected to the power supply voltage GND. The transistor MN3may generate a bias current based on the voltage Vbias1of the first bias signal. A drain terminal of the transistor MN3may be connected to a node n1(a common node). A source terminal of the transistor MN3may be connected to the drain terminal of the transistor MN2.
When the transistor MN2is turned on, the bias current of the transistor MN3may be supplied to the comparator132_1athrough the transistor MN2, and when the transistor MN2is turned off, the bias current of the transistor MN3may not be supplied to the comparator132_1athrough the transistor MN2. Only when the transistor MN2is turned on, the bias current of the transistor MN3may flow through the transistor MN2, and power may be consumed by the bias current and the power supply voltage VDD. Here, the power supply voltage VDD of the comparator132_1amay be the same as the power supply voltage VDD of the synapses121or may be different therefrom. Referring toFIG.3, the transistors MN2and MN3may be connected in series. Unlike what is shown inFIG.3, the transistor MN2may be connected between the node n1and the transistor MN3, and the transistor MN3may be connected between the transistor MN2and the power supply voltage GND. The transistors MN2and MN3may be implemented using an NMOS, a PMOS, or a combination of the NMOS and the PMOS. The comparator132_1amay include a transistor MN4which receives the threshold signal through a gate terminal and a transistor MN5which receives the membrane signal through a gate terminal. Source terminals of the transistors MN4and MN5may be commonly connected to the node n1. A drain terminal of the transistor MN4may be connected to a node n2. A drain terminal of the transistor MN5may be connected to a node n3. The transistor MN4may generate a current flowing between the nodes n1and n2depending on the voltage Vth of the threshold signal. The transistor MN5may generate a current flowing between the nodes n1and n3depending on the voltage Vm of the membrane signal. The transistors MN4and MN5may act as switches for performing a comparison operation for the threshold signal and the membrane signal.
The comparator132_1amay include a transistor MP4connected between the node n2and the power supply voltage VDD and a transistor MP5connected between the node n3and the power supply voltage VDD. A gate terminal and a drain terminal of the transistor MP4may be connected to each other (diode connection). A gate terminal of the transistor MP5may be connected to the node n2. The transistors MP4and MP5provide a high impedance to a load terminal of the comparator132_1aso as to increase an amplification rate of the comparator132_1awhich amplifies the difference between the voltage Vth of the threshold signal and the voltage Vm of the membrane signal. Depending on a ratio of a current flowing through the transistor MP5and a current flowing through the transistor MN5, a voltage of the node n3may be determined. The transistors MN2, MN3, MN4, MN5, MP4and MP5may configure (constitute) a first stage of the comparator132_1a. The comparator132_1amay include a transistor MN6which receives a bias signal through a gate terminal and a transistor MP6which receives the voltage of the node n3through a gate terminal. A drain terminal of the transistor MN6may be connected to a node n4. A source terminal of the transistor MN6may be connected to the power supply voltage GND. A drain terminal of the transistor MP6may be connected to the node n4. A source terminal of the transistor MP6may be connected to the power supply voltage VDD. The transistors MN6and MP6may configure a second stage of the comparator132_1a. In the node n4, an output spike signal may be generated. A voltage Vspike_out of the output spike signal may be determined according to the result of comparing the voltage Vm of the membrane signal with the voltage Vth of the threshold signal.
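Behaviorally, the gated two-stage comparator reduces to a comparison that is only performed while the bias switch conducts. In this sketch the logic levels follow the text (first value low, second value high); the gating threshold is an illustrative assumption.

```python
# Illustrative behavioral model of the gated comparator of FIG.3:
# with the bias off, no comparison happens and the output stays at
# the first (low) value.

VTH_MN2 = 0.4   # assumed threshold voltage of the bias switch MN2

def comparator(vm, vth):
    """Return 0 for the first value, 1 for the second value."""
    if vm <= VTH_MN2:          # bias current off: comparator inactive
        return 0
    return 1 if vm >= vth else 0

outs = [comparator(v, 0.8) for v in (0.1, 0.5, 0.9)]
```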
When the voltage Vm of the membrane signal reaches the voltage Vth of the threshold signal, a logic value of the voltage Vspike_out of the output spike signal is changed from a first value (for example, low) to a second value (for example, high) (or vice versa), so that the output spike signal may be activated and fired. In an embodiment, types of the transistors ofFIG.3are not limited to those shown inFIG.3. Also, the logic value of the output spike signal is not limited to the example described above. FIG.4exemplarily illustrates a block diagram of the comparator ofFIG.2.FIG.4will be described with reference toFIG.2andFIG.3. A comparator132_1bmay be the comparator132_1ofFIG.2, and a bias circuit133_1bmay be included in the comparator132_1b, or may be the bias circuit133_1ofFIG.2. Differences between the comparator132_1band the comparator132_1awill be mainly described, and the description of components having the same reference numerals will be omitted. The bias circuit133_1bmay further include a transistor MN7connected between the transistor MN2and the power supply voltage GND. A gate terminal and a drain terminal of the transistor MN7may be connected to each other (diode connection). The source terminal of the transistor MN2may be connected to the drain terminal of the transistor MN7instead of the power supply voltage GND. The transistor MN2may be supplied with a voltage increased by a threshold voltage of the transistor MN7from the power supply voltage GND instead of the power supply voltage GND. Unlike the comparator132_1a, when the voltage Vm of the membrane signal becomes greater than the sum of the threshold voltage of the transistor MN7and the threshold voltage of the transistor MN2, the comparator132_1bmay be supplied with a bias current through the transistors MN2and MN7. Accordingly, an interval in which a bias current is supplied in the comparator132_1bmay be shorter than that in the comparator132_1a. 
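The shorter bias interval of FIG.4 can be seen by stacking a second threshold drop onto the enable condition. The threshold voltages and the membrane samples are assumptions for illustration only.

```python
# Illustrative comparison of the bias-enable conditions: the
# diode-connected MN7 of FIG.4 raises the turn-on point by one more
# threshold voltage relative to FIG.3, shortening the bias interval.

VTH_MN2, VTH_MN7 = 0.4, 0.3   # assumed threshold voltages

def bias_on_fig3(vm):
    return vm > VTH_MN2

def bias_on_fig4(vm):
    return vm > VTH_MN2 + VTH_MN7

vm_trace = [0.1, 0.3, 0.5, 0.6, 0.8, 0.5, 0.2]
on3 = sum(bias_on_fig3(v) for v in vm_trace)
on4 = sum(bias_on_fig4(v) for v in vm_trace)
```

For this assumed trace the FIG.4 condition holds in one sample where the FIG.3 condition holds in four, matching the text's claim of a shorter supply interval.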
Also, the transistor MN7may further reduce the magnitude of the bias current of the comparator132_1b. The comparator132_1bmay further include transistors MN8and MP8which configure an inverter. A transistor MN8may receive a voltage of the node n4through a gate terminal, and a drain terminal of the transistor MN8may be connected to a node n5. Also, a source terminal of the transistor MN8may be connected to the power supply voltage GND. A transistor MP8may receive the voltage of the node n4through a gate terminal, and a drain terminal of the transistor MP8may be connected to the node n5. Also, a source terminal of the transistor MP8may be connected to the power supply voltage VDD. In the node n5, an output spike signal may be generated. When the voltage Vm of the membrane signal reaches the voltage Vth of the threshold signal, the logic value of the voltage Vspike_out of the output spike signal is changed from the second value to the first value, so that the output spike signal may be activated and fired. The comparator132_1bmay further include transistors MP9, MN9and MN10. A transistor MP9may receive a voltage of the node n5through a gate terminal, and a drain terminal of the transistor MP9may be connected to a node n6. Also, a source terminal of the transistor MP9may be connected to the power supply voltage VDD. A transistor MN9may receive the voltage of the node n5through a gate terminal, and a drain terminal of the transistor MN9may be connected to the node n6. Also, a source terminal of the transistor MN9may be connected to a transistor MN10. The transistor MN10may receive a voltage Vbias2of a second bias signal through a gate terminal, and a drain terminal of the transistor MN10may be connected to the source terminal of the transistor MN9. Also, a source terminal of the transistor MN10may be connected to the power supply voltage GND. Transistors MP9, MN9, and MN10may generate a quiescence adjustment signal in the node n6. 
The comparator132_1bmay further include a capacitor Cq. One end of the capacitor Cq may be connected to the node n6, and the other end of the capacitor Cq may be connected to the power supply voltage GND. When an output spike signal is activated, the transistor MP9is turned on, and by a current flowing through the transistor MP9, charges may be accumulated in the capacitor Cq. When the output spike signal is deactivated, the charges accumulated in the capacitor Cq may be discharged through the transistors MN9and MN10. When the output spike signal is deactivated, the transistor MN9may be turned on. Depending on the second bias signal, the transistor MN10may control the rate or duration at which the charges (that is, the quiescence adjustment signal) charged in the capacitor Cq are discharged. The comparator132_1bmay further include a transistor MN11which receives the quiescence adjustment signal (the voltage of the node n6) through a gate terminal. A drain terminal of the transistor MN11may be connected to a node n7, and a source terminal of the transistor MN11may be connected to the power supply voltage GND. The transistor MN11may be a pull-down transistor which drives the voltage Vm of the membrane signal with (to) the power supply voltage GND depending on the voltage of the node n6. Depending on the quiescence adjustment signal, the transistor MN11may electrically connect the node n7in which the membrane signal is generated to the power supply voltage GND. The capacitor Cq and the transistors MN9to MN11and MP9may configure a quiescence adjustment circuit134_1bwhich lowers the voltage Vm of the membrane signal to the power supply voltage GND. The quiescence adjustment circuit134_1bmay adjust an interval in which the membrane signal is deactivated or an interval in which the output spike signal is deactivated. 
A quiescence of the neuron131_1may represent a duration (time) during which the voltage Vm of the membrane signal is driven or maintained with the power supply voltage GND corresponding to a reset, or a duration during which the output spike signal is activated and then deactivated. The quiescence may be adjusted based on the second bias signal, the transistor MN10, and a capacity of the capacitor Cq. Even when an input spike signal is activated and operation results are output from the synapses121in the quiescence, since the voltage Vm of the membrane signal is maintained as the power supply voltage GND, the operation results may be ignored. The comparator132_1bmay further include a transistor MP11which receives the voltage Vspike_out of the output spike signal through a gate terminal. A drain terminal of the transistor MP11may be connected to the node n7, and a source terminal of the transistor MP11may be connected to the power supply voltage VDD. The transistor MP11may be a pull-up transistor which drives the voltage Vm of the membrane signal with the power supply voltage VDD depending on the voltage Vspike_out of the output spike signal. For example, immediately after the output spike signal is activated, the transistor MP11may be turned on to drive the voltage Vm of a membrane with the power supply voltage VDD, and accordingly, the voltage Vm of the membrane may represent an instantaneous up-swing. The transistor MP11may electrically connect the node n7in which the membrane signal is generated to the power supply voltage VDD immediately after the output spike signal is activated. When the output spike signal is activated, the transistor MP11is turned on to instantaneously increase the voltage Vm of the membrane, and then the transistor MN11is turned on to drive the voltage Vm of the membrane with the power supply voltage GND corresponding to a reset state. 
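The quiescence behaves like a refractory period: after a fire the membrane is pinned to ground for a while and incoming operation signals are ignored. The step counts and increments in this sketch are assumptions; in the circuit the duration is set by the second bias signal, the transistor MN10, and the capacity of the capacitor Cq.

```python
# Illustrative sketch of the quiescence: after a fire, the membrane
# is held at ground for a fixed number of steps, and operation
# signals arriving during that window are ignored.

THRESHOLD = 2.0
QUIESCENCE_STEPS = 2   # assumed hold time

def run(increments):
    vm, hold, spikes = 0.0, 0, []
    for inc in increments:
        if hold > 0:            # membrane pinned to GND: input ignored
            hold -= 1
            spikes.append(False)
            continue
        vm += inc
        if vm >= THRESHOLD:     # fire, then enter the quiescence
            vm, hold = 0.0, QUIESCENCE_STEPS
            spikes.append(True)
        else:
            spikes.append(False)
    return spikes

out = run([1.0] * 8)
```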
When the voltage Vm of the membrane is lowered to the power supply voltage GND, new operation signals may be received from the synapses121. In an embodiment, the spike neural network circuit100_1may further include a voltage generator which generates the first and second bias signals, the leakage signal, and the threshold signal. Each of voltage levels of the first and second bias signals, the leakage signal, and the threshold signal may be pre-defined, or programmed in the voltage generator. In an embodiment, types of the transistors ofFIG.4are not limited to those shown inFIG.4. Also, the logic value of the output spike signal is not limited to the example described above. FIG.5exemplarily illustrates a timing diagram showing the operation of a comparator ofFIG.4.FIG.5will be described with reference toFIG.4. InFIG.5, the horizontal axis represents time, and the vertical axis may represent either voltage or current. Exemplarily, the membrane signal may be a sine wave. In an interval between T1and T2time points, the voltage Vm of the membrane signal may be lower than the voltage Vth of the threshold signal, the logic value of the voltage of the node n3may be the second value, the logic value of the voltage of the node n4may be the first value, and the bias current of the comparator132_1bmay not be supplied. When the voltage Vm of the membrane signal is lower than the sum of threshold voltages of the transistors MN7and MN2, the bias current of the comparator132_1bmay not be supplied. In an interval between T2and T3time points, the voltage Vm of the membrane signal may be higher than the voltage Vth of the threshold signal, the logic value of the voltage of the node n3may be the first value, the logic value of the voltage of the node n4may be the second value, and the bias current of the comparator132_1bmay be supplied.
When the voltage Vm of the membrane signal is greater than the sum of threshold voltages of the transistors MN7and MN2, the bias current of the comparator132_1bmay be supplied. The power consumption of the comparator132_1bin the interval between the T1and T2time points may be less than the power consumption of the comparator132_1bin the interval between the T2and T3time points. FIG.6exemplarily illustrates a timing diagram showing the operation of the comparator ofFIG.4.FIG.6will be described with reference toFIG.4. InFIG.6, the horizontal axis represents time, and the vertical axis may represent voltage. Referring toFIG.6, as the input spike signal is repeatedly activated and deactivated, the voltage Vm of the membrane signal may be gradually increased. Near a T4time point, when the voltage Vm of the membrane signal reaches the voltage Vth of the threshold signal, the voltage of the node n4of the comparator132_1bis activated and the voltage Vspike_out of the output spike signal may be activated. Near the T4time point, the voltage Vm of the membrane signal may represent an instantaneous up-swing by the transistor MP11. After the voltage Vspike_out of the output spike signal is activated, the voltage Vm of the membrane signal may be lowered to the power supply voltage GND and deactivated by the transistor MN11. After the voltage Vspike_out of the output spike signal is activated, the voltage of the node n6(the quiescence adjustment signal) may be discharged by the transistor MN10operated based on the second bias signal. Again, as the input spike signal is repeatedly activated and deactivated, the voltage Vm of the membrane signal may be gradually increased. Near a T5time point, when the voltage Vm of the membrane signal reaches the voltage Vth of the threshold signal, the voltage of the node n4of the comparator132_1bis activated and the voltage Vspike_out of the output spike signal may be activated.
Referring toFIG.6, an interval in which the output spike signal is activated may be much shorter than an interval in which the output spike signal is deactivated. FIG.7is a block diagram more specifically illustrating the synapses of the synaptic circuit and the neurons of the neuron circuit ofFIG.1.FIG.7will be described with reference toFIG.1andFIG.2. A spike neural network circuit100_2may include the first to third synapses121_1to121_3, the capacitor Cm, and the transistor MN1. The spike neural network circuit100_2is the spike neural network circuit100ofFIG.1. For convenience of description, the axon circuit110is not shown, and only some synapses121_1,121_2, and121_3of the synaptic circuit120are shown. The first to third synapses121_1to121_3, the capacitor Cm, and the transistor MN1of the spike neural network circuit100_2are substantially the same as the first to third synapses121_1to121_3, the capacitor Cm, and the transistor MN1of the spike neural network circuit100_1. Differences between the spike neural network circuit100_2and the spike neural network circuit100_1will be mainly described. The spike neural network circuit100_2may include a neuron131_2. For convenience of description, only one neuron131_2of the neuron circuit130is illustrated. The neuron131_2may include a comparator132_2and a bias circuit133_2. The neuron131_1compares the membrane signal with the threshold signal, but the neuron131_2compares the membrane signal with the first bias signal. The first bias signal may be used to generate a bias current of the comparator132_2and at the same time, may be provided as the threshold signal ofFIG.2. That is, the first bias signal may be referred to as the threshold signal. The bias circuit133_2may conditionally supply the bias current to the comparator132_2depending on the membrane signal. Except that the neuron131_2uses the first bias signal as the threshold signal, the neuron131_2may be operated similarly to the neuron131_1. 
FIG.8exemplarily illustrates a block diagram of the comparator ofFIG.7.FIG.8will be described with reference toFIG.7. A comparator132_2amay be the comparator132_2ofFIG.7, and a bias circuit133_2amay be included in the comparator132_2a, or may be the bias circuit133_2ofFIG.7. The comparator132_2amay include a transistor MP12which receives the voltage Vbias1of the first bias signal through a gate terminal. A drain terminal of the transistor MP12may be connected to a node n8. A source terminal of the transistor MP12may be connected to the power supply voltage VDD. The transistor MP12may generate a bias current based on the first bias signal. The transistor MP12may output the bias current corresponding to the first bias signal to a transistor MN13. The comparator132_2amay include transistors MN13and MN14. The transistor MN13may receive the bias current corresponding to the first bias signal. A gate terminal and a drain terminal of the transistor MN13may be connected to each other (diode connection). A source terminal of the transistor MN13may be connected to a node n9. The transistor MN13may be connected between the node n8and the node n9. A transistor MN14may receive the bias current corresponding to the first bias signal through the transistor MN13. A gate terminal and a drain terminal of the transistor MN14may be connected to each other (diode connection). A source terminal of the transistor MN14may be connected to the power supply voltage GND. The transistor MN14may be connected between the node n9and the power supply voltage GND. The transistors MN13and MN14may copy the bias current corresponding to the first bias signal to the bias circuit133_2a(current mirroring). The bias circuit133_2amay include a transistor MN15which receives a voltage of the node n8through a gate terminal and a transistor MN16which receives a voltage of the node n9through the gate terminal. A drain terminal of the transistor MN15may be connected to a node n10. 
A source terminal of the transistor MN15 may be connected to a drain terminal of the transistor MN16. The drain terminal of the transistor MN16 may be connected to the source terminal of the transistor MN15. A source terminal of the transistor MN16 may be connected to a drain terminal of a transistor MN17. Through the transistors MN15 and MN16, the bias current corresponding to the first bias signal may flow. Unlike what is shown, the comparator 132_2a may not include the transistors MN13 and MN15. In this case, the drain terminal of the transistor MP12 and the drain terminal of the transistor MN14 may be connected to each other, and the drain terminal of the transistor MN16 and a drain terminal of a transistor MP16 may be connected to each other.

The bias circuit 133_2a may include the transistor MN17 which receives the voltage Vm of the membrane signal through a gate terminal. A drain terminal of the transistor MN17 may be connected to the source terminal of the transistor MN16. A source terminal of the transistor MN17 may be connected to the power supply voltage GND. The transistor MN17 may be connected between the transistor MN16 and the power supply voltage GND. The transistor MN17 may be turned on or turned off depending on the voltage Vm of the membrane signal. When the voltage Vm of the membrane signal is greater than a threshold voltage of the transistor MN17, the transistor MN17 may be turned on. When the transistor MN17 is turned on, a bias current corresponding to the first bias signal may be supplied to the comparator 132_2a through the transistor MN17, and when the transistor MN17 is turned off, the bias current corresponding to the first bias signal may not be supplied to the comparator 132_2a through the transistor MN17. Only when the transistor MN17 is turned on may a bias current flow through the transistor MN17, and power may be consumed by the bias current and the power supply voltage VDD. The bias circuit 133_2a may include the transistor MP16.
A gate terminal and a drain terminal of the transistor MP16 may be connected to each other (diode connection), and may be connected to the node n10. A source terminal of the transistor MP16 may be connected to a transistor MP17. The bias circuit 133_2a may include the transistor MP17 which receives the voltage Vm of the membrane signal through a gate terminal. A source terminal of the transistor MP17 may be connected to the power supply voltage VDD. A drain terminal of the transistor MP17 may be connected to a source terminal of the transistor MP16.

In FIG. 3, the comparator 132_1a compares the voltage Vm of the membrane signal with the voltage Vth of the threshold signal. The comparator 132_2a may compare a current of the membrane signal with a bias current of the first bias signal. Through the transistors MP16 and MP17, a pull-up current depending on the voltage Vm of the membrane signal may be generated. Since the transistors MP16 and MP17 generate the pull-up current, the logic value of the output spike signal Vspike_out may be driven with the second value. Through the transistors MN15 and MN16, a pull-down current (a bias current) depending on the first bias signal may be generated. Since the transistors MN15 and MN16 generate the pull-down current, the logic value of the output spike signal Vspike_out may be driven with the first value. The voltage Vspike_out of the output spike signal may be determined according to the result of comparing the current of the membrane signal with the current of the first bias signal. For example, when the current of the membrane signal becomes smaller than the bias current of the first bias signal or when the current of the membrane signal reaches the bias current of the first bias signal, the logic value of the voltage Vspike_out of the output spike signal is changed from the second value to the first value, so that the output spike signal may be activated and fired.
In another example, when the current of the membrane signal becomes greater than the bias current of the first bias signal or the current of the membrane signal reaches the bias current of the first bias signal, the output spike signal may be activated and fired. The output spike signal may be generated at the node n10. In an embodiment, the types of the transistors of FIG. 8 are not limited to those shown in FIG. 8. Also, the logic value of the output spike signal is not limited to the example described above.

FIG. 9 exemplarily illustrates a block diagram of the comparator of FIG. 7. FIG. 9 will be described with reference to FIG. 2, FIG. 3, FIG. 7, and FIG. 8. A comparator 132_2b may be the comparator 132_2 of FIG. 7, and a bias circuit 133_2b may be included in the comparator 132_2b, or may be the bias circuit 133_2 of FIG. 7. Differences between the comparator 132_2b and the comparator 132_2a and differences between the comparator 132_2b and the comparator 132_1b will be mainly described, and the description of components having the same reference numerals will be omitted. The transistors MP12, MP16, MP17, and MN13 to MN17 of the comparator 132_2b have been explained with reference to FIG. 8. The transistors MN8 to MN11, MP8, MP9, and MP11 of the comparator 132_2b have been explained with reference to FIG. 4. The transistors MN9 to MN11 and MP9 and the capacitor Cq may constitute a quiescence adjustment circuit 134_2b. The quiescence adjustment circuit 134_2b may be implemented substantially the same as the quiescence adjustment circuit 134_1b. The comparator 132_2b may include the transistors MN6 and MP6, which constitute an inverter. The transistor MN6 may receive the voltage of the node n10 through a gate terminal. The drain terminal of the transistor MN6 may be connected to the node n4. The source terminal of the transistor MN6 may be connected to the power supply voltage GND. The transistor MP6 may receive the voltage of the node n10 through a gate terminal. The drain terminal of the transistor MP6 may be connected to the node n4.
The source terminal of the transistor MP6 may be connected to the power supply voltage VDD. In an embodiment, the types of the transistors of FIG. 9 are not limited to those shown in FIG. 9. Also, the logic value of the output spike signal is not limited to the example described above.

A spike neural network circuit according to an embodiment of the inventive concept may include a comparator operated by a conditional bias current. Accordingly, the power consumption of the spike neural network circuit may be reduced.

The above description relates to specific examples for implementing the inventive concept. The inventive concept will include embodiments that can be simplified or easily changed, as well as the embodiments described above. In addition, the inventive concept will also include techniques that can be easily modified and implemented in the future using the embodiments described above.
11861484

DETAILED DESCRIPTION

The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

Based on the teachings, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth. In addition, the scope of the disclosure is intended to cover such an apparatus or method practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth. It should be understood that any aspect of the disclosure disclosed may be embodied by one or more elements of a claim.

The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives.
Rather, aspects of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks and protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof. Artificial neural networks, having either convolutional or fully connected layers, enable processing for image recognition, object detection, and natural language processing. These features also enable support for autonomous driving applications as well as content-aware camera processing. Deep convolutional neural networks (DCNNs) have promising applications in emerging embedded, wearable, and Internet of Things (IoT) markets. In operation, a deep convolutional neural network (or DCNN) may be composed of a large number of weight tensors multiplied by activation tensors. These weight tensors and activation tensors enable multiplying of input data by weights in various filters of the DCNN. In a previous layer of the DCNN, the activation tensors may be fed through nonlinear functions. In operation, processing in DCNNs generally involves convolution of weight tensors and activation tensors to perform tasks. DCNNs, therefore, consume significant computing power performing convolution of the large number of weight tensors and activation tensors. Deep convolutional neural networks, however, tend to shrink input features during computations through the various network layers. Shrinking of the input feature size during computations fails to preserve an original size of the input features. Input feature padding may be used to preserve the input feature size during computations though the neural network layers. 
Although input feature padding preserves the input feature size, processing of the padded values unduly increases memory bandwidth utilization in deep convolutional neural networks. Additional pre-processing and post-processing operations performed on activation tensors may include data cropping as well as data conversion, which unduly increase memory bandwidth utilization in deep convolutional neural networks.

Aspects of the present disclosure are directed to neural processing unit (NPU) direct memory access (NDMA) hardware pre-processing and post-processing of NDMA data for convolutional neural networks (CNNs). Adding hardware pre-processing and post-processing capability reduces memory bandwidth pressure and wasted cycles in compute units of an NPU. As described, the term NDMA data may refer to data (e.g., image data, activation tensors, or other like convolutional data) moved from main memory to storage closer to the compute units of an NPU (e.g., read clients and/or write clients). NDMA hardware pre-processing and post-processing is software programmable, which ultimately results in better resource utilization and energy efficiency. In aspects of the present disclosure, programmability of the hardware pre-processing and post-processing capability is provided at the grain level of a layer in the neural network.

FIG. 1 illustrates an example implementation of a system-on-a-chip (SOC) 100, which may include a neural processing unit (NPU) 108 or a multi-core NPU configured to perform hardware pre-processing and post-processing of NDMA data in accordance with certain aspects of the present disclosure.
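The size-preservation property that padding provides can be shown with a minimal NumPy sketch. This is illustrative only (a plain "same" convolution; the function name and shapes are not from the disclosure): zero-padding the input by half the kernel size keeps the output spatial size equal to the input's.

```python
import numpy as np

def conv2d_same(x, k):
    """'Same' convolution: zero-pad the input so the output keeps the
    input's spatial size. Illustrative sketch; a real NPU would fold
    this padding into the hardware pre-processing path."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))          # zero padding
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

x = np.ones((8, 8))
k = np.ones((3, 3))
y = conv2d_same(x, k)
```

Without the padding, a 3×3 valid convolution would shrink the 8×8 input to 6×6; with it, the output stays 8×8, which is exactly the size preservation the text describes.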
Variables (e.g., neural signals and synaptic weights), system parameters associated with a computational device (e.g., a neural network with weights), delays, frequency bin information, and task information may be stored in a memory block associated with an NPU 108, in a memory block associated with a central processing unit (CPU) 102, in a memory block associated with a graphics processing unit (GPU) 104, in a memory block associated with a digital signal processor (DSP) 106, in a memory block 118, or may be distributed across multiple blocks. Instructions executed at the CPU 102 may be loaded from a program memory associated with the CPU 102 or may be loaded from a memory block 118.

The SOC 100 may also include additional processing blocks tailored to specific functions, such as a connectivity block 110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU, DSP, and/or GPU. The SOC 100 may also include a sensor processor 114, image signal processors (ISPs) 116, and/or a navigation module 120, which may include a global positioning system. The NPU 108 may be based on an ARM instruction set.

In an aspect of the present disclosure, the instructions loaded into the NPU 108 may comprise program code to program configuration registers of a neural processing unit (NPU) direct memory access (NDMA) core for a read client and/or a write client. The instructions loaded into the NPU 108 may also comprise program code to stream data blocks of a data stripe to/from an external memory of the NDMA core. In addition, the instructions loaded into the NPU 108 may comprise program code to pre-process and post-process the data blocks in a buffer of the NDMA core during streaming of the data blocks.
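The three programming steps just listed (configure the NDMA core, stream a stripe block by block, pre-process the blocks in flight) can be sketched in software. Everything here is a hypothetical illustration: `NdmaConfig`, `stream_stripe`, and the field names are stand-ins, not the actual NPU register map or driver API.

```python
# Hedged software model of the NDMA programming flow described in the
# text: a configuration record, block-by-block streaming of a stripe,
# and a pre-processing hook applied to each block as it moves.
from dataclasses import dataclass

@dataclass
class NdmaConfig:
    client_id: int        # read or write client being served
    start_address: int    # stripe start in external memory
    block_size: int       # programmable block size (words)
    num_blocks: int       # blocks per stripe

def stream_stripe(cfg, memory, preprocess=lambda b: b):
    """Move one stripe block by block, applying pre-processing to each
    block while it sits in the (modeled) NDMA core buffer."""
    out = []
    for n in range(cfg.num_blocks):
        base = cfg.start_address + n * cfg.block_size
        block = memory[base:base + cfg.block_size]   # one block move
        out.extend(preprocess(block))                # in-flight processing
    return out

memory = list(range(64))
cfg = NdmaConfig(client_id=0, start_address=8, block_size=4, num_blocks=3)
data = stream_stripe(cfg, memory, preprocess=lambda b: [v * 2 for v in b])
```

The point of the sketch is the structure, not the arithmetic: the per-block loop is where the hardware can hide pre-processing and post-processing without extra passes over memory.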
Deep learning architectures may perform an object recognition task by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data. In this way, deep learning addresses a major bottleneck of traditional machine learning. Prior to the advent of deep learning, a machine learning approach to an object recognition problem may have relied heavily on human engineered features, perhaps in combination with a shallow classifier. A shallow classifier may be a two-class linear classifier, for example, in which a weighted sum of the feature vector components may be compared with a threshold to predict to which class the input belongs. Human engineered features may be templates or kernels tailored to a specific problem domain by engineers with domain expertise. Deep learning architectures, in contrast, may learn to represent features that are similar to what a human engineer might design, but through training. Furthermore, a deep network may learn to represent and recognize new types of features that a human might not have considered. A deep learning architecture may learn a hierarchy of features. If presented with visual data, for example, the first layer may learn to recognize relatively simple features, such as edges, in the input stream. In another example, if presented with auditory data, the first layer may learn to recognize spectral power in specific frequencies. The second layer, taking the output of the first layer as input, may learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data. For instance, higher layers may learn to represent complex shapes in visual data or words in auditory data. Still higher layers may learn to recognize common visual objects or spoken phrases. Deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure. 
For example, the classification of motorized vehicles may benefit from first learning to recognize wheels, windshields, and other features. These features may be combined at higher layers in different ways to recognize cars, trucks, and airplanes.

Neural networks may be designed with a variety of connectivity patterns. In feed-forward networks, information is passed from lower to higher layers, with each neuron in a given layer communicating to neurons in higher layers. A hierarchical representation may be built up in successive layers of a feed-forward network, as described above. Neural networks may also have recurrent or feedback (also called top-down) connections. In a recurrent connection, the output from a neuron in a given layer may be communicated to another neuron in the same layer. A recurrent architecture may be helpful in recognizing patterns that span more than one of the input data chunks that are delivered to the neural network in a sequence. A connection from a neuron in a given layer to a neuron in a lower layer is called a feedback (or top-down) connection. A network with many feedback connections may be helpful when the recognition of a high-level concept may aid in discriminating the particular low-level features of an input.

The connections between layers of a neural network may be fully connected or locally connected. FIG. 2A illustrates an example of a fully connected neural network 202. In a fully connected neural network 202, a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer will receive input from every neuron in the first layer. FIG. 2B illustrates an example of a locally connected neural network 204. In a locally connected neural network 204, a neuron in a first layer may be connected to a limited number of neurons in the second layer.
More generally, a locally connected layer of the locally connected neural network 204 may be configured so that each neuron in a layer will have the same or a similar connectivity pattern, but with connection strengths that may have different values (e.g., 210, 212, 214, and 216). The locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer, because the higher layer neurons in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network.

One example of a locally connected neural network is a convolutional neural network. FIG. 2C illustrates an example of a convolutional neural network 206. The convolutional neural network 206 may be configured such that the connection strengths associated with the inputs for each neuron in the second layer are shared (e.g., 208). Convolutional neural networks may be well suited to problems in which the spatial location of inputs is meaningful.

One type of convolutional neural network is a deep convolutional neural network (DCNN). FIG. 2D illustrates a detailed example of a DCNN 200 designed to recognize visual features from an image 226 input from an image capturing device 230, such as a car-mounted camera. The DCNN 200 of the current example may be trained to identify traffic signs and a number provided on the traffic sign. Of course, the DCNN 200 may be trained for other tasks, such as identifying lane markings or identifying traffic lights. The DCNN 200 may be trained with supervised learning. During training, the DCNN 200 may be presented with an image, such as the image 226 of a speed limit sign, and a forward pass may then be computed to produce an output 222. The DCNN 200 may include a feature extraction section and a classification section. Upon receiving the image 226, a convolutional layer 232 may apply convolutional kernels (not shown) to the image 226 to generate a first set of feature maps 218.
As an example, the convolutional kernel for the convolutional layer 232 may be a 5×5 kernel that generates 28×28 feature maps. In the present example, four different convolutional kernels were applied to the image 226 at the convolutional layer 232 because four different feature maps are generated in the first set of feature maps 218. The convolutional kernels may also be referred to as filters or convolutional filters. The first set of feature maps 218 may be subsampled by a max pooling layer (not shown) to generate a second set of feature maps 220. The max pooling layer reduces the size of the first set of feature maps 218. That is, a size of the second set of feature maps 220, such as 14×14, is less than the size of the first set of feature maps 218, such as 28×28. The reduced size provides similar information to a subsequent layer while reducing memory consumption. The second set of feature maps 220 may be further convolved via one or more subsequent convolutional layers (not shown) to generate one or more subsequent sets of feature maps (not shown).

In the example of FIG. 2D, the second set of feature maps 220 is convolved to generate a first feature vector 224. Furthermore, the first feature vector 224 is further convolved to generate a second feature vector 228. Each feature of the second feature vector 228 may include a number that corresponds to a possible feature of the image 226, such as “sign,” “60,” and “100.” A softmax function (not shown) may convert the numbers in the second feature vector 228 to a probability. As such, an output 222 of the DCNN 200 is a probability of the image 226 including one or more features. In the present example, the probabilities in the output 222 for “sign” and “60” are higher than the probabilities of the others of the output 222, such as “30,” “40,” “50,” “70,” “80,” “90,” and “100”. Before training, the output 222 produced by the DCNN 200 is likely to be incorrect. Thus, an error may be calculated between the output 222 and a target output.
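The shape arithmetic in this example can be checked in a few lines of NumPy. Note one assumption: a 28×28 map from a 5×5 valid convolution implies a 32×32 input (32 − 5 + 1 = 28), which the text does not state explicitly.

```python
import numpy as np

# Shape arithmetic for the FIG. 2D example: a 5x5 kernel applied "valid"
# to an (assumed) 32x32 input gives a 28x28 feature map; 2x2 max pooling
# halves that to 14x14; softmax turns raw scores into probabilities.

def valid_conv_size(n, k):
    """Output size of a valid (unpadded) convolution along one axis."""
    return n - k + 1

def max_pool_2x2(fmap):
    """Non-overlapping 2x2 max pooling of a 2-D feature map."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(scores):
    """Convert a score vector to probabilities (max-subtracted for
    numerical stability)."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

fmap = np.arange(28 * 28, dtype=float).reshape(28, 28)
pooled = max_pool_2x2(fmap)
p = softmax(np.array([2.0, 0.5, 0.1]))   # e.g., scores for "sign", "60", "100"
```

The softmax output sums to one, matching the text's description of the output 222 as a set of probabilities with the largest score receiving the highest probability.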
The target output is the ground truth of the image 226 (e.g., “sign” and “60”). The weights of the DCNN 200 may then be adjusted so the output 222 of the DCNN 200 is more closely aligned with the target output. To adjust the weights, a learning algorithm may compute a gradient vector for the weights. The gradient may indicate an amount that an error would increase or decrease if the weight were adjusted. At the top layer, the gradient may correspond directly to the value of a weight connecting an activated neuron in the penultimate layer and a neuron in the output layer. In lower layers, the gradient may depend on the value of the weights and on the computed error gradients of the higher layers. The weights may then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as “back propagation” as it involves a “backward pass” through the neural network.

In practice, the error gradient of weights may be calculated over a small number of examples, so that the calculated gradient approximates the true error gradient. This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level. After learning, the DCNN may be presented with new images (e.g., the speed limit sign of the image 226) and a forward pass through the network may yield an output 222 that may be considered an inference or a prediction of the DCNN.

Deep belief networks (DBNs) are probabilistic models comprising multiple layers of hidden nodes. DBNs may be used to extract a hierarchical representation of training data sets. A DBN may be obtained by stacking up layers of Restricted Boltzmann Machines (RBMs). An RBM is a type of artificial neural network that can learn a probability distribution over a set of inputs.
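The gradient-based weight adjustment described above can be sketched with a deliberately tiny stand-in: a one-parameter least-squares fit updated one example at a time, which is the stochastic gradient descent pattern in miniature (the function name and learning rate are illustrative, not from the disclosure).

```python
# Minimal stochastic gradient descent sketch: for each example, compute
# the gradient of the squared error with respect to the weight and step
# against it, exactly the "adjust weights to reduce the error" loop the
# text describes, shrunk to one weight.

def sgd_fit(xs, ys, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):          # one example at a time
            grad = 2 * (w * x - y) * x    # d/dw of (w*x - y)^2
            w -= lr * grad                # step down the gradient
    return w

w = sgd_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])   # data follows y = 2x
```

Each per-example gradient only approximates the full-batch gradient, but repeated passes still drive the error toward its minimum, which is why the approximation is acceptable in practice.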
Because RBMs can learn a probability distribution in the absence of information about the class to which each input should be categorized, RBMs are often used in unsupervised learning. Using a hybrid unsupervised and supervised paradigm, the bottom RBMs of a DBN may be trained in an unsupervised manner and may serve as feature extractors, and the top RBM may be trained in a supervised manner (on a joint distribution of inputs from the previous layer and target classes) and may serve as a classifier. Deep convolutional neural networks (DCNNs) are networks of convolutional networks, configured with additional pooling and normalization layers. DCNNs have achieved state-of-the-art performance on many tasks. DCNNs can be trained using supervised learning in which both the input and output targets are known for many exemplars and are used to modify the weights of the network by use of gradient descent methods. DCNNs may be feed-forward networks. In addition, as described above, the connections from a neuron in a first layer of a DCNN to a group of neurons in the next higher layer are shared across the neurons in the first layer. The feed-forward and shared connections of DCNNs may be exploited for fast processing. The computational burden of a DCNN may be much less, for example, than that of a similarly sized neural network that comprises recurrent or feedback connections. The processing of each layer of a convolutional network may be considered a spatially invariant template or basis projection. If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information. 
The outputs of the convolutional connections may be considered to form a feature map in the subsequent layer, with each element of the feature map (e.g., 220) receiving input from a range of neurons in the previous layer (e.g., feature maps 218) and from each of the multiple channels. The values in the feature map may be further processed with a non-linearity, such as a rectification, max(0, x). Values from adjacent neurons may be further pooled, which corresponds to down sampling, and may provide additional local invariance and dimensionality reduction. Normalization, which corresponds to whitening, may also be applied through lateral inhibition between neurons in the feature map.

The performance of deep learning architectures may increase as more labeled data points become available or as computational power increases. Modern deep neural networks are routinely trained with computing resources that are thousands of times greater than what was available to a typical researcher just fifteen years ago. New architectures and training paradigms may further boost the performance of deep learning. Rectified linear units may reduce a training issue known as vanishing gradients. New training techniques may reduce over-fitting and thus enable larger models to achieve better generalization. Encapsulation techniques may abstract data in a given receptive field and further boost overall performance.

FIG. 3 is a block diagram illustrating a deep convolutional neural network 350. The deep convolutional neural network 350 may include multiple different types of layers based on connectivity and weight sharing. As shown in FIG. 3, the deep convolutional neural network 350 includes the convolution blocks 354A, 354B. Each of the convolution blocks 354A, 354B may be configured with a convolution layer (CONV) 356, a normalization layer (LNorm) 358, and a max pooling layer (MAX POOL) 360.
The convolution layers 356 may include one or more convolutional filters, which may be applied to the input data to generate a feature map. Although only two of the convolution blocks 354A, 354B are shown, the present disclosure is not so limiting, and instead, any number of the convolution blocks 354A, 354B may be included in the deep convolutional neural network 350 according to design preference. The normalization layer 358 may normalize the output of the convolution filters. For example, the normalization layer 358 may provide whitening or lateral inhibition. The max pooling layer 360 may provide down sampling aggregation over space for local invariance and dimensionality reduction.

The parallel filter banks, for example, of a deep convolutional neural network may be loaded on a CPU 102 or GPU 104 of an SOC 100 to achieve high performance and low power consumption. In alternative embodiments, the parallel filter banks may be loaded on the DSP 106 or an ISP 116 of an SOC 100. In addition, the deep convolutional neural network 350 may access other processing blocks that may be present on the SOC 100, such as the sensor processor 114 and navigation module 120, dedicated, respectively, to sensors and navigation.

The deep convolutional neural network 350 may also include one or more fully connected layers 362 (FC1 and FC2). The deep convolutional neural network 350 may further include a logistic regression (LR) layer 364. Between each layer 356, 358, 360, 362, 364 of the deep convolutional neural network 350 are weights (not shown) that are to be updated. The output of each of the layers (e.g., 356, 358, 360, 362, 364) may serve as an input of a succeeding one of the layers (e.g., 356, 358, 360, 362, 364) in the deep convolutional neural network 350 to learn hierarchical feature representations from input data 352 (e.g., images, audio, video, sensor data, and/or other input data) supplied at the first of the convolution blocks 354A.
The output of the deep convolutional neural network 350 is a classification score 366 for the input data 352. The classification score 366 may be a set of probabilities, where each probability is the probability of the input data including a feature from a set of features.

FIG. 4 is a block diagram illustrating an exemplary software architecture 400 that may modularize artificial intelligence (AI) functions. Using the architecture, applications may be designed that may cause various processing blocks of an SOC 420 (for example, a CPU 422, a DSP 424, a GPU 426, and/or an NPU 428) to support hardware pre-processing and post-processing of NPU direct memory access (NDMA) data during run-time operation of an AI application 402, according to aspects of the present disclosure. The AI application 402 may be configured to call functions defined in a user space 404 that may, for example, provide for the detection and recognition of a scene indicative of the location in which the device currently operates. The AI application 402 may, for example, configure a microphone and a camera differently depending on whether the recognized scene is an office, a lecture hall, a restaurant, or an outdoor setting such as a lake. The AI application 402 may make a request to compiled program code associated with a library defined in an AI function application programming interface (API) 406. This request may ultimately rely on the output of a deep neural network configured to provide an inference response based on video and positioning data, for example.

A run-time engine 408, which may be compiled code of a runtime framework, may be further accessible to the AI application 402. The AI application 402 may cause the run-time engine, for example, to request an inference at a particular time interval or triggered by an event detected by the user interface of the application.
When caused to provide an inference response, the run-time engine may in turn send a signal to an operating system in an operating system (OS) space 410, such as a Linux Kernel 412, running on the SOC 420. The operating system, in turn, supports hardware pre-processing and post-processing of NDMA data performed on the CPU 422, the DSP 424, the GPU 426, the NPU 428, or some combination thereof. The CPU 422 may be accessed directly by the operating system, and other processing blocks may be accessed through a driver, such as a driver 414, 416, or 418 for, respectively, the DSP 424, the GPU 426, or the NPU 428. In this exemplary example, the deep neural network may be configured to run on a combination of processing blocks, such as the CPU 422, the DSP 424, and the GPU 426, or may be run on the NPU 428.

Referring again to FIG. 1, the SOC 100 includes a neural processing unit (NPU) 108 or a multi-core NPU configured to perform hardware pre-processing and post-processing of NPU direct memory access (NDMA) data, in accordance with certain aspects of the present disclosure. In aspects of the present disclosure, an NDMA core of the NPU 108 is configured to move substantial chunks of data (e.g., an image frame of one-dimensional (1D), two-dimensional (2D), or three-dimensional (3D) data and/or activation tensors). In aspects of the present disclosure, the NDMA core moves the data chunks in and out of an array of compute elements of the NPU 108 (e.g., read clients and/or write clients) by streaming the data. During streaming of the data, the NDMA core may perform hardware pre-processing and post-processing during reading/writing of the data streaming to/from client buffers. In aspects of the present disclosure, streaming of data refers to movement of data in a stripe, block by block, in response to a single NDMA command. That is, streaming of data moves a small block (e.g., 1D, 2D, or 3D) at a time, and continues by moving another block after a period of time (e.g., to receive a bus grant signal).
This process is repeated until a stripe of data is moved to/from a client buffer. In this example, the block size is programmable, which will generally be larger than a bus transaction size. In aspects of the present disclosure, the NDMA core of the NPU108can be configured to move a stripe of data (e.g., multiple blocks), for example as shown inFIGS.5A,5B, and5C. FIG.5Ais a block diagram of an image500partitioned into M-stripes, according to aspects of the present disclosure. Traditional streaming retrieves a chunk of memory aligned with the boundaries of main memory and stores the chunk of memory locally. Aspects of the present disclosure recognize that tensor computations in deep learning neural networks generally do not involve the entire chunk of memory, such as the image500. Generally, a subset of the chunk of data is used for tensor computation in deep learning neural networks. According to aspects of the present disclosure, this subset of data may be a stripe of the image500. As described, striping is a data processing technique in which an image500is partitioned into any desirable number of vertical slices (e.g., stripe 0, stripe 1, . . . , stripe m-1). In this example, the image500, including N-lines (e.g., line 0, line 1, . . . , line n-1), is carved into M-vertical slices. Each vertical slice is referred to as a stripe (e.g., a stripe image or data stripe). In one example, the image500is an m-sliced image, in which the line width of the image500is partitioned into m line segments, which may or may not equal the N-lines of the image500. That is, the height of each stripe (e.g., stripe 0, stripe 1, . . . , stripe m-1), in most cases, matches the height of the image500. There is, however, no restriction mandating that every stripe have an equal width or a height equal to the height of the image500. FIG.5Bis a block diagram illustrating parameters of a stripe image560of an original image550, according to aspects of the present disclosure. 
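The vertical-slice partitioning described above can be sketched in a few lines of code. The function name and the equal-width-with-remainder policy are illustrative assumptions; the disclosure only requires that stripes need not all share a common width or the full image height.

```python
def partition_into_stripes(image_width, num_stripes):
    """Split an image's line width into M vertical stripes (stripe 0..m-1).

    Returns a list of (x_offset, x_size) pairs, one per stripe. Here the
    last stripe absorbs any remainder, mirroring the note that stripes
    need not all have equal width. Names are illustrative, not from the
    disclosure.
    """
    base = image_width // num_stripes
    stripes = []
    for m in range(num_stripes):
        x_offset = m * base
        # last stripe takes the remaining columns
        x_size = image_width - x_offset if m == num_stripes - 1 else base
        stripes.append((x_offset, x_size))
    return stripes

print(partition_into_stripes(10, 3))  # [(0, 3), (3, 3), (6, 4)]
```

Every line of the image then contributes one segment to each stripe, so the stripe widths sum back to the original line width.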
Striping operates on an established coordinate system, allowing software users to specify the dimension and location of a sliced image (e.g., image500ofFIG.5A). The parameters of the stripe image560can be described in the context of high-level system design or a low-level hardware implementation. For example, from a high-level system perspective, the start location of the stripe image560may be specified in terms of an x_offset and a y_offset. The x_offset is the horizontal displacement between the left-most side of the stripe image560and the left-most side of the original image550, measured in terms of pixels. The y_offset is the vertical displacement between the top-most side of the stripe image560and the top-most side of the original image550, measured in terms of line numbers. Additional parameters include an image_width (e.g., the width of the original image550), image_height (e.g., the height of the original image550), a start_address (e.g., the starting location (e.g., address) of the stripe in external memory), an x_size (e.g., the width of the stripe), and a y_size (e.g., the height of the stripe). While pixel and line representation is one option for specifying the location of the stripe image560, this representation can be difficult and expensive to implement in hardware. For this reason, software users are expected to convert the parameters specified in a system domain into a hardware domain (e.g., the memory address of the pixel words) for reducing hardware complexity and cost. Regardless of the specified parameters, NDMA enables stripe read and stripe write for accessing NDMA data. FIG.5Cis a block diagram580illustrating further parameters of the stripe image560of the original image550ofFIG.5B, according to aspects of the present disclosure. Conceptually, stripe-based processing is a subset of block-based processing. 
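Converting the system-domain stripe parameters (x_offset, y_offset, image_width) into a hardware-domain memory address can be illustrated under the assumption of a contiguous, row-major image layout. The disclosure leaves the exact conversion to software, so the formula and names below are a sketch, not the disclosed implementation.

```python
def stripe_start_address(base_addr, image_width, x_offset, y_offset,
                         bytes_per_pixel=1):
    """Convert system-domain stripe coordinates into a memory address.

    Assumes the original image is stored row-major and contiguously at
    base_addr (an assumption for illustration): skip y_offset full lines,
    then x_offset pixels into the line.
    """
    return base_addr + (y_offset * image_width + x_offset) * bytes_per_pixel

# Stripe starting at pixel (x=64, y=2) of a 1920-pixel-wide image at 0x1000:
addr = stripe_start_address(0x1000, 1920, 64, 2)
print(hex(addr))  # 0x1f40
```

In practice this conversion would also need to round to the pixel-word (dword) granularity that the hardware actually addresses, which is why the system-domain pixel/line representation is delegated to software.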
Consequently, the block parameters of the stripe image560may be specified in terms of a block590, which is the smallest group of data moved by a single direct memory access (DMA) channel arbitration. The block parameters include a blk_start_addr, a blk_size, a last_blk_size, an x_size_dword, a num_blks parameter, and a row_incr parameter. The blk_start_addr parameter is the external memory address of each block at the start point. The blk_size and the last_blk_size parameters are used to define the size of the stripe image560. The blocks of the stripe image560generally have the same size, except for the last block, which has the last_blk_size. The num_blks parameter indicates the number of blocks in the stripe image560. The x_size_dword parameter is the word size of the block590. The row_incr parameter is a block address increment used to determine a next block's address by adding to the previous start address (e.g., blk_start_addr). As described, address hopping is a data access technique for accessing blocks within a stripe image (e.g.,560). During block streaming for stripe read and/or stripe write, data is saved to an external memory (e.g., double data rate (DDR) memory) in a 2D fashion. In particular, image data, such as the image500shown inFIG.5A, is understood to represent a 2D format. During NDMA operation, a stripe of data can be accessed from the 2D data block (e.g., block590shown inFIG.5C). In practice, data is stored in the external (e.g., DDR) memory using a contiguous address space. 2D and 3D data may be accessed using address hopping, for example, as shown inFIGS.6A and6B. FIG.6Ais a block diagram illustrating storage of a 2D data block600in an external memory, according to aspects of the present disclosure. The 2D data block600includes N-lines (e.g., line 0, . . . , line n-1) and is defined by a data_width parameter, a data_height parameter, and block address parameters (e.g., block_addr0_0, block_addr0_m, block_addrn_0, and block_addrn_m). 
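The address-hopping scheme, in which each next block's address is derived by adding row_incr to the previous start address, can be sketched as follows. The helper name is hypothetical; only the blk_start_addr, row_incr, and num_blks parameters come from the description above.

```python
def block_addresses(blk_start_addr, row_incr, num_blks):
    """Generate the external-memory address of each block in a stripe.

    Each next block's address is the previous start address plus
    row_incr, as in the address-hopping scheme described above.
    """
    addrs = []
    addr = blk_start_addr
    for _ in range(num_blks):
        addrs.append(addr)
        addr += row_incr
    return addrs

print([hex(a) for a in block_addresses(0x4000, 0x100, 4)])
# ['0x4000', '0x4100', '0x4200', '0x4300']
```

Because the blocks of a stripe are not contiguous in external memory, the DMA engine "hops" by row_incr between bus transactions instead of streaming one linear range.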
The 2D data block600includes a stripe610defined by stripe_start_addr, x_offset, y_offset, x_size, and y_size parameters. FIG.6Bis a block diagram illustrating a three-dimensional representation of image data, according to aspects of the present disclosure. Representatively, a 3D data structure650is shown. In this example, data is stored in an external memory in a raster order of lines in a Dim0 direction (in pixels), and continuously in a Dim1 direction. The 3D data storage is repeated over Dim0-Dim1 raster order in a Dim2 direction. The 3D data storage format can be described as a 3D array (e.g., DDR_data[dim2][dim1][dim0]). Data access to a stripe of 3D rectangular blocks is performed in a predetermined order, for example, by repeating access over the Dim0 and Dim1 directions, and proceeding in raster order over the Dim2 direction. As described, Dim0 refers to a dimension that moves sequentially through contiguous NDMA words (e.g., a dword or a 256-bit word) in external memory; the term Dim1 refers to a dimension used when data is transferred in a 3D block (e.g., as shown inFIG.6B), and the term Dim2 refers to a dimension used when data is transferred as a 2D or 3D block. As further described, the terms “lines” and “rows” are used interchangeably to describe aspects of the present disclosure because both terms refer to the lines of an image. Strictly speaking, however, “line” refers to the main image, while “row” refers to the lines contained in a given read buffer (e.g., one stripe).FIG.6Balso shows left padding (e.g., padding(Dim0, left)), right padding (e.g., padding(Dim0, right)), top padding (e.g., padding(Dim2, top)), and bottom padding (e.g., padding(Dim1, bottom)). FIG.7is a block diagram illustrating an NPU700, including an NPU DMA (NDMA) core710and interfaces configured to provide hardware pre-processing and post-processing of NDMA data, according to aspects of the present disclosure. 
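The predetermined access order for a stripe of 3D blocks (contiguous over Dim0, repeated over Dim1, raster order over Dim2) can be sketched as a nested iteration matching the DDR_data[dim2][dim1][dim0] array layout. Function and parameter names are illustrative.

```python
def stripe_access_order(dim0_words, dim1_size, dim2_size):
    """Yield (dim2, dim1, dim0) indices in NDMA stripe access order.

    Dim0 (contiguous NDMA words) is innermost, access repeats over
    Dim1, and proceeds in raster order over Dim2, matching the
    DDR_data[dim2][dim1][dim0] storage format described above.
    """
    for d2 in range(dim2_size):
        for d1 in range(dim1_size):
            for d0 in range(dim0_words):
                yield (d2, d1, d0)

order = list(stripe_access_order(2, 2, 2))
print(order[:4])  # [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]
```

Only the Dim0 run maps to sequential addresses in external memory; the Dim1 and Dim2 steps each imply an address hop.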
The NDMA core710includes a read engine720configured to provide a first memory interface to a read client (RCLT)702, including a client buffer704, and a write engine730configured to provide a second memory interface to a write client (WCLT)706, including a client buffer708. The memory interfaces to the client side (e.g., RCLT, WCLT) are memory read/write interfaces using a request/valid handshake. In aspects of the present disclosure, the read client RCLT702and the write client WCLT706may refer to an array of compute elements of the NPU700, which may support, for example, 16-NDMA read channels and 16-NDMA write channels for the various compute units of the NPU700. The NDMA core710also includes a bus interface (e.g., a synchronous media and switch fabric (MSF) interface) to a bus bridge740. In this configuration, the NDMA core710is connected to the bus bridge740as well as a network on chip (NoC)750, such as a multimedia subsystem (MMSS) NoC. The bus bridge740may be connected to the NoC750using, for example, an Advanced eXtensible Interface (AXI). The NoC750may be connected to an external memory760(e.g., a DDR memory) through an external memory interface (e.g., an AXI bus). In this configuration, the NDMA core710is partitioned into two major logic components, namely the write engine730and the read engine720. The write engine730is configured to move processed client data to the external memory760in a stripe format (seeFIGS.5A-5C). On the other hand, the read engine720is configured to transfer fragmented data from the external memory760into client memories (e.g., read buffer722and/or write buffer732) for image processing or for configuration. The write client WCLT and the read client RCLT are independent of each other. As described, a write path implies an NDMA read from the write client WCLT and a write to the external memory760, and a read path implies an NDMA read from the external memory760and a write to the read client RCLT. 
In addition, the terms "read path," "read client," and "read channel" are used interchangeably. The terms "write path," "write client," and "write channel" are also used interchangeably in this document. In this aspect of the present disclosure, the NDMA core710avoids using large NDMA buffers. Instead, the NDMA core710may rely on client buffers of the read client RCLT and the write client WCLT for buffering NDMA data. This configuration provides flexibility by reusing the client's buffers for NDMA data transfer. In this configuration, the read engine720includes a read buffer722for storing (e.g., a bus width of) configuration data. The read engine720is configured to read 256-bits of configuration data from the read buffer722that is used for configuration of NDMA operation for the read client RCLT and/or the write client WCLT. In operation, the read engine720retrieves (e.g., one bus width number of) bits of image data (e.g., NDMA data) from the external memory760and stores those bits in the read buffer722. According to aspects of the present disclosure, the stored bits of image data may be subjected to hardware pre-processing and post-processing within the read buffer. As described, processing of NDMA data while stored in the read buffer722may refer to hardware pre-processing of the NDMA data, whereas processing of the NDMA data in the write buffer732may refer to hardware post-processing of the NDMA data. Prior to performing the hardware pre-processing of the NDMA data, the read engine720reads out the bits of image data, and each pixel is unpacked to a byte boundary using, for example, 256-bit data words (e.g., dword format). The expected data format is limited by other applications that packed the image data. The read engine720adds corresponding padding (left, right, top, bottom, or all around a cube) or crops out unused pixels for pre-processing of the NDMA data. Cropping is generally available for 2D or 3D data movement. 
In operation, the NDMA core710retrieves a full dword (e.g., 256-bit) from the external memory760and crops off unneeded pixels and re-aligns the pixels when writing to the read client RCLT. Cropping is also used to shift a block boundary when a stripe is in the Dim0 direction and left padding is specified. For example, a left crop is limited to the first dword of the Dim0 line and a right crop is limited to the last dword of the Dim0 line. Hardware pre-processing may include zero padding and non-zero padding, 2D padding or 3D padding, mirror padding, and/or group padding. The NDMA core710also supports data conversion from image format to NPU data types for a read operation with conversion back to image format for 2D and 3D storage. The NDMA core710also supports sign extension, such as sign or non-sign extending 8-bit per pixel (8 bpp) format to 16-bit per pixel (16 bpp) format. The read engine720sends the resultant data in a series of 256-bit words to the corresponding client memory destination locations. As further shown inFIG.7, the write engine730is configured to perform a 3D rectangle stripe write, a 2D rectangle stripe write, or a normal write to the external memory760in a streaming fashion (e.g., block by block). In this example, the write engine730is configured to retrieve 128-bits of data from the client buffer of the write client WCLT, pack to 64-bits word aligned (e.g., image pixel packing), form a dual word (128-bits), and write to the write buffer732. When data in the write buffer732has reached a completed transaction size (e.g., the number of beats per transaction is programmable), this NDMA data is read out of the write buffer732and sent out to the bus bridge740through a write arbiter714to write to the external memory760as, for example, a 256-bit data word. The write arbiter714and a read arbiter712may operate according to a round robin arbitration between different NDMA read channels or NDMA write channels. 
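The 8 bpp to 16 bpp conversion mentioned above behaves roughly as follows. This is a software sketch of the hardware operation with illustrative names, covering both the sign-extending and non-sign-extending (zero-extending) cases.

```python
def extend_8bpp_to_16bpp(pixels, signed=True):
    """Sign- or zero-extend 8-bit pixel values to 16-bit words,
    as in the NDMA 8 bpp -> 16 bpp conversion.

    pixels are raw byte values (0..255); results are 16-bit values.
    """
    out = []
    for p in pixels:
        if signed and p & 0x80:      # high bit set: negative in two's complement
            out.append(p | 0xFF00)   # replicate the sign into the high byte
        else:
            out.append(p)            # zero-extend
    return out

print([hex(v) for v in extend_8bpp_to_16bpp([0x7F, 0x80, 0xFF])])
# ['0x7f', '0xff80', '0xffff']
```

The non-sign-extending variant simply leaves the high byte zero, which is the appropriate choice for unsigned pixel formats.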
The NDMA read channels and the NDMA write channels are independent. A controller770is provided as a configuration interface of the NDMA core710. In aspects of the present disclosure, the controller770configures parameters for block data movement. In addition, the controller770configures parameters for hardware pre-processing of NDMA data, including packing, unpacking, padding, and cropping. The controller770may configure registers (e.g., register ports) of the NPU700to direct the NDMA core710during hardware pre-processing and post-processing of the NDMA. For example, pre-processing of NDMA data for pixel padding is performed for image processing modules that specify color information from previous lines or pixels to initiate their image processing task on the edges of the original image or stripes. Padding is also performed to maintain input feature size during convolution, for example, as shown inFIGS.8A and8B. FIGS.8A and8Bare block diagrams800and850illustrating padding of an input feature to maintain an input feature size during an operation (e.g., a multiply-accumulate (MAC) operation) using a filter, according to aspects of the present disclosure. In the block diagram800ofFIG.8A, a padded input feature820is shown. In this example, a 7×7 input feature810is padded with a single layer of padding822to form the padded input feature820. The padding822added to the 7×7 input feature810is used to maintain the original size of the 7×7 input feature810during processing through a convolutional layer using a 3×3 filter kernel840to produce a 7×7 output feature830. As shown inFIG.8A, the padded input feature820is processed by applying the 3×3 filter kernel840to 3×3 areas of the padded input feature820. In this example, a first 3×3 area of the padded input feature820is multiplied and accumulated with the 3×3 filter kernel840to compute a first output pixel832of a 7×7 output feature830(e.g., matrix multiplication). 
This process is repeated as the 3×3 filter kernel840slides left to right until a last 3×3 area of the padded input feature820is processed to compute a final output pixel of the 7×7 output feature830. That is, the weights in the 3×3 filter kernel840are multiplied by the 3×3 areas in the padded input feature820. The results of multiplying the 3×3 filter kernel840by the 3×3 areas of the padded input feature820are output to a new pixel (e.g.,832,834) of the 7×7 output feature830. FIG.8Bis a block diagram850illustrating multilayer padding of a padded input feature860to maintain an input feature size during multiply-accumulate (MAC) operations using a 5×5 filter kernel890, according to aspects of the present disclosure. In this example, the padded input feature860is composed of input feature values862(i1_1, i1_2, . . . , i3_3) and padding values864, which illustrate a multilayer (e.g., two-layer) constant padding type. The padding values864may be added during hardware pre-processing and/or post-processing by the NDMA core710shown inFIG.7. Although shown using the constant padding type, it should be recognized that other padding types are contemplated, including a zero padding type, a reflective mirror padding type, a symmetric mirror padding type, and an edge mirror padding type. For example, the mirror padding type may be beneficial for image processing modules due to the absence of true pixels beyond the boundary of an original image. In neural networks, padding is a layer pre-processing technique that is generally inefficient to perform using software. According to aspects of the present disclosure, software is used to program hardware configuration registers to direct the NDMA core710to perform hardware pre-processing and post-processing of NDMA data, for example, as described inFIG.9. FIG.9illustrates a method for hardware pre-processing and post-processing of neural processing unit (NPU) direct memory access (NDMA) data, in accordance with aspects of the present disclosure. 
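The size-preserving effect of padding illustrated inFIGS.8A and8Bcan be checked with a minimal pure-Python sketch: a 7×7 input padded by one layer and convolved with a 3×3 kernel yields a 7×7 output (two layers would be needed for a 5×5 kernel). The function names and constant padding value are illustrative.

```python
def pad2d(feature, pad, value=0):
    """Pad a 2D feature map on all sides with a constant value.

    pad=1 preserves size under a 3x3 kernel; pad=2 under a 5x5 kernel.
    """
    h, w = len(feature), len(feature[0])
    padded = [[value] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for r in range(h):
        for c in range(w):
            padded[r + pad][c + pad] = feature[r][c]
    return padded

def conv2d(feature, kernel):
    """Valid convolution (cross-correlation, as in typical NN MAC units)."""
    k = len(kernel)
    h, w = len(feature) - k + 1, len(feature[0]) - k + 1
    return [[sum(kernel[i][j] * feature[r + i][c + j]
                 for i in range(k) for j in range(k))
             for c in range(w)] for r in range(h)]

feature = [[1] * 7 for _ in range(7)]   # 7x7 input feature
kernel = [[1] * 3 for _ in range(3)]    # 3x3 filter kernel
out = conv2d(pad2d(feature, pad=1), kernel)
print(len(out), len(out[0]))  # 7 7 -- output size matches input size
```

Performing pad2d in software on every layer is exactly the inefficiency the NDMA hardware pre-processing is meant to eliminate; the arithmetic shown here stays in the compute units, while the padding moves into the DMA path.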
A method900begins at block902, in which configuration registers of a neural processing unit (NPU) direct memory access (NDMA) core are programmed for a read client and/or a write client. The read client and/or the write client may be compute units of the NPU700shown inFIG.7. At block904, data blocks of a data stripe are streamed to/from an external memory of the NDMA core. For example,FIG.7shows streaming of a data stripe to/from the external memory. In particular, data blocks are streamed between the read client RCLT and/or write client WCLT to/from the external memory760. At block906, the data blocks in a buffer of the NDMA core are pre-processed and/or post-processed during streaming of the data blocks. For example, pre-processing and/or post-processing of NDMA data may be performed as shown inFIGS.8A and8B. FIG.10further illustrates a method for hardware pre-processing and post-processing of neural processing unit (NPU) direct memory access (NDMA) data, in accordance with aspects of the present disclosure. A method1000begins at block1002, in which a neural processing unit (NPU) direct memory access (NDMA) core is idle after power up. At block1004, an NDMA core determines whether a new direct memory access (DMA) command from a controller is received. Once received, at block1006, all configuration registers are programmed to define, for example, image information, bus information, and address information. Once programmed, at block1008, a load command pulse is generated. In response, at block1010, client arbitration is initiated. Once initiated, at block1012, it is determined whether a client buffer is ready. Once the client buffer is ready, at block1014, it is determined whether an arbitration grant (arb_gnt) is received. For example, as shown inFIG.7, detection of the load command for either the read client RCLT or the write client WCLT triggers initial arbitration using the read arbiter712or the write arbiter714. 
While the arbitration is requested, the NDMA core determines whether the client buffer of the read client RCLT or the write client WCLT is ready, depending on whether the read client RCLT or the write client WCLT is the target of the load command. Referring again toFIG.10, once the arbitration is granted, at block1016, hardware pre-processing and/or post-processing of NDMA data is performed. For example, as shown inFIG.7, a read engine720of the NDMA core710retrieves a predetermined number of bits (e.g., a bus width) of image data (e.g., NDMA data) from the external memory760and stores those bits in the read buffer722of the NDMA core. According to aspects of the present disclosure, the stored NDMA data in the read buffer722may be subjected to hardware pre-processing and post-processing. For example, the hardware pre-processing may include padding of an input tensor, as shown inFIGS.8A and8B. The method1000may further include unpacking NDMA data during streaming of the data blocks from the external memory, and repacking the NDMA data prior to streaming data blocks to the external memory. Hardware pre-processing of the NDMA data using the NDMA core710is substantially more efficient compared with conventional software pre-processing and post-processing. In aspects of the present disclosure, one block of NDMA data is processed for each bus transaction. In addition, a single NDMA command involves stripe data that is provided by streaming the data blocks of the stripe. As shown inFIG.10, at block1018, it is determined whether a complete stripe is processed (e.g., stripe end). Once the complete stripe is processed, the method1000returns to the idle state at block1002until another NDMA command is received. Otherwise, at block1020it is determined whether an end of a current block is detected. Once detected, control flow returns to block1010, in which the method1000waits for client arbitration. 
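The per-block control flow of method1000can be summarized as a simplified loop. Buffer-readiness and arbitration checks are stubbed out as always-ready, so this is a behavioral sketch rather than the hardware state machine; the function and callback names are illustrative.

```python
def ndma_stripe_transfer(num_blks, process_block):
    """Simplified control flow of the NDMA streaming loop (method 1000).

    For each block of the stripe: wait for client-buffer readiness and
    an arbitration grant (blocks 1012/1014), then perform hardware
    pre-/post-processing and move the block (block 1016). When the
    complete stripe is processed (block 1018), return to idle.
    """
    for blk in range(num_blks):
        # Stubbed checks: in hardware, these gate each block transfer.
        buffer_ready, arb_gnt = True, True
        if buffer_ready and arb_gnt:
            process_block(blk)   # pre-/post-process and move one block
    return "idle"                # stripe end: await the next NDMA command

moved = []
ndma_stripe_transfer(3, moved.append)
print(moved)  # [0, 1, 2]
```

One block moves per arbitration grant, which is why a single NDMA command covering a whole stripe still interleaves cleanly with other read and write channels under round-robin arbitration.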
In some aspects, the methods900,1000may be performed by the NPU108(FIG.1) and/or the NPU700(FIG.7). That is, each of the elements of methods900,1000may, for example, but without limitation, be performed by the NPU108or the NPU700, including the NDMA core710and/or other included components. Aspects of the present disclosure are directed to neural processing unit (NPU) direct memory access (NDMA) hardware pre-processing and post-processing of NDMA data for convolutional neural networks. NDMA moves NDMA data from main memory to storage closer to the compute units of an NPU for local storage to perform pre-processing and post-processing of the NDMA data. NDMA hardware pre-processing and post-processing is software programmable by programming configuration registers to control NDMA operation, resulting in better resource utilization and energy efficiency. Adding hardware pre-processing and post-processing capability to an NPU reduces memory bandwidth pressure and wasted cycles in compute units of the NPU. An artificial neural network model includes means for programming configuration registers of an NPU, means for streaming data blocks of a data stripe, and/or means for pre-processing and post-processing data blocks in an NDMA core. In one aspect, the programming means, streaming means, and/or pre-processing and post-processing means may be the NPU108, program memory associated with the NPU108, memory block118, NPU700and the NDMA core710configured to perform the functions recited. The means for pre-processing and post-processing of data blocks in a buffer of the NDMA core includes means for padding NDMA data, means for cropping NDMA data, means for sign extending NDMA data, means for unpacking NDMA data and/or means for repacking NDMA data prior to streaming. 
In one aspect, the padding means, the cropping means, the sign extending means, the unpacking means, and/or the repacking means may be the NPU108, program memory associated with the NPU108, the memory block118, the NPU700, and the NDMA core710configured to perform the functions recited. In another configuration, the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means. The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in the figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Additionally, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Furthermore, “determining” may include resolving, selecting, choosing, establishing, and the like. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c. 
The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array signal (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. 
The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement signal processing functions. For certain aspects, a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be responsible for managing the bus and general processing, including the execution of software stored on the machine-readable media. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. 
Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Machine-readable media may include, by way of example, random access memory (RAM), flash memory, read only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. The computer-program product may comprise packaging materials. In a hardware implementation, the machine-readable media may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable media, or any portion thereof, may be external to the processing system. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Although the various components discussed may be described as having a specific location, such as a local component, they may also be configured in various ways, such as certain components being configured as part of a distributed computing system. The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture. 
Alternatively, the processing system may comprise one or more neuromorphic processors for implementing the neuron models and models of neural systems described herein. As another alternative, the processing system may be implemented with an application specific integrated circuit (ASIC) with the processor, the bus interface, the user interface, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. The machine-readable media may comprise a number of software modules. The software modules include instructions that, when executed by the processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. 
Furthermore, it should be appreciated that aspects of the present disclosure result in improvements to the functioning of the processor, computer, machine, or other system implementing such aspects. If implemented in software, the functions may be stored on or transmitted over a non-transitory computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Additionally, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal).
Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims.
11861485 | DETAILED DESCRIPTION Various embodiments and aspects of the disclosure will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present disclosure. Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment. When performing image processing in a systolic array, generally of M by N multiply accumulate (MAC) processing elements (PEs), the first convolution layer normally represents 10 to 15% of the total computation performed, due to the large spatial size of the input image. Usually, the input channel is RGB (red, green, blue) format, or three channels of image data, which has low MAC utilization in the first couple of convolution layers of inference of a neural net. The computation in the first couple of convolution layers does not map well into PE array based AI engine architectures, because fewer channels are present in the first couple of convolution layers, meaning that the majority of the array's input bandwidth may not be utilized. In particular, the MAC utilization is low for the first couple of layers if the PE size (i.e., number of PEs in the systolic array) is large.
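As a rough, illustrative calculation (a sketch, not taken from the measured results discussed later with FIG. 5): if a layer offers C input channels to a systolic array whose input side has N MAC PEs, at most min(C, N) of those PEs receive data, so a simple upper bound on input-side utilization is min(C, N)/N. The function name and figures below are assumptions for illustration only.

```python
def input_side_utilization(channels, n_pes):
    """Rough upper bound on the fraction of the N input-side MAC PEs
    that receive a channel; measured utilization also depends on the
    data flow, layer shapes, and memory bandwidth."""
    return min(channels, n_pes) / n_pes

# Three RGB channels into a 64-wide array leave most input PEs idle;
# forty-eight rearranged channels use three quarters of the input side.
low = input_side_utilization(3, 64)    # 0.046875
high = input_side_utilization(48, 64)  # 0.75
```

This first-order bound explains why the first couple of convolution layers, with only three channels, cannot keep a wide PE array busy.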
One solution to this problem uses two systolic arrays, one systolic array specifically dedicated to the first layer and the other systolic array used for the remaining convolution layers in the CNN. The solution described herein uses one systolic array, and a format converter to more closely match the number of channels of image data to the number of PEs in each of the first couple of layers of the systolic array, and achieve higher utilization of the MAC PEs. In one embodiment, an AI engine includes a systolic array and a data format converter. The systolic array of M by N multiply and accumulate (MAC) processing elements (PEs) has N MAC PEs on one side for input of up to N channels of image data. The data format converter rearranges data of the input image. The data of the input image has a pixel height and a pixel width in a first number of channels. The first number of channels is equal to the number of colors per pixel. The data format converter rearranges the data to a second, greater number of channels. Each of the second number of channels has data of a lesser pixel height, a lesser pixel width and one of the colors. The data format converter inputs the second number of channels to the one side of the systolic array. The second number of channels is less than or equal to N and closer to N than the first number of channels, and results in greater MAC PE utilization in the first and second convolution layer inferences in the systolic array than would be so for inputting the first number of channels to the one side of the systolic array. One embodiment is a method of operating an AI engine. Data of an input image has a pixel height and a pixel width in a first number of channels. The first number of channels is equal to a number of colors per pixel. The data of the input image is arranged to a second, greater number of channels. Each of the second number of channels has data of a lesser pixel height, a lesser pixel width, and one of the colors.
The second number of channels is input to one side of a systolic array. The systolic array has M by N MAC PEs with N MAC PEs on one side for input of up to N channels of image data. The second number of channels is less than or equal to N and closer to N than the first number of channels. Use of the second number of channels results in greater MAC PE utilization in the first and second convolution layer inference in the systolic array than would be so for inputting the first number of channels to the one side of the systolic array. One embodiment is a tangible, non-transitory, computer-readable medium that has instructions stored on it. The instructions cause a processor to perform a method, described below. Data of an input image has a pixel height and a pixel width in a first number of channels. The first number of channels is equal to a number of colors per pixel. The data of the input image is arranged to a second, greater number of channels. Each of the second number of channels has data of a lesser pixel height, a lesser pixel width, and one of the colors. The second number of channels is input to one side of a systolic array. The systolic array has M by N MAC PEs with N MAC PEs on one side for input of up to N channels of image data. The second number of channels is less than or equal to N and closer to N than the first number of channels. Use of the second number of channels results in greater MAC PE utilization in the first and second convolution layer inference in the systolic array than would be so for inputting the first number of channels to the one side of the systolic array. Other aspects and advantages of the embodiments will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the described embodiments.
FIG. 1 is a block diagram of an AI (artificial intelligence) engine 100 with a systolic array 104 of N by N MAC PEs, showing two example locations for a data format converter 114 in accordance with embodiments of the present disclosure. The AI engine 100 has a neural network core 102 that includes the systolic array 104, an accumulator 106 at the output of the systolic array 104, a scaling module 108 that receives output from the accumulator 106, an activation module 110 that receives output from the scaling module 108, and a pooling module 112 that receives output from the activation module 110. Further components in the AI engine 100 include a DMA (direct memory access) module 120, a DSP (digital signal processor) or RISC (reduced instruction set computer) 118, SRAM (static random access memory) 124, another DMA module 126, an ISP (image signal processor) 128 coupled to a camera 130, a DDR (double data rate) controller 132 connected to DDR (double data rate) memory 134, and a PCIe (peripheral component interconnect express) interface 136 coupled to a host 138, all connected to and communicating through a bus 140, in this example an AXI (advanced extensible interface) bus 140. Embodiments of the format converter 114 can be implemented in a DSP, in a RISC (i.e., DSP or RISC 118), or in a data transform module 116 coupled to the systolic array 104 in the neural net core 102. This AI engine 100 is an example that is suitable for a format converter 114, and further embodiments of AI engines that could use a format converter 114 are readily devised in keeping with the teachings herein. FIG. 2 illustrates an example of rearranging data of an input image 202 from three channels 210 (red, green, blue or RGB) to twelve channels 208, for improved MAC PE utilization in a systolic array, as performed by the data format converter 114. To start with, the image 202 in this example has a pixel height of 448 and a pixel width of 448, in three channels 210 (i.e., RGB channels).
Each pixel in the image 202 has a red value, a green value and a blue value, and each of these color values is output in a respective channel 210. The data format converter 114 arranges the image into groups 204 of pixels. In this example each pixel group 204 is a 4×4 group of pixels, i.e., a group 204 of pixel height four and pixel width four, with each pixel having three color values. Then, the data format converter 114 arranges the data of each pixel group 204 into four subgroups 206 that each have a 2×2 group of pixels. That is, each of the four subgroups 206 has a pixel height of two and a pixel width of two, again with each pixel having three color values. In one embodiment, the subgroup 206 includes pixels that are adjacent in the subgroup but that are nonadjacent in the image 202. For example, the uppermost subgroup 206 has adjacent pixels 1 and 3, but pixels 1 and 3 are not adjacent in the pixel group 204 and the image 202. Next, the data format converter 114 arranges the data of each subgroup 206 into separate channels for each of the colors (red, green, blue) and outputs these as respective channels. In this example, this results in four subgroups times three colors each, for a total of twelve channels 208. In various embodiments, this example of rearranging data of an input image is generalized to various image sizes and various numbers of channels. The objective is to increase the number of channels to the input of the systolic array, for greater utilization of the MAC PEs in the first and second layers of the systolic array. The action of rearranging image data into a greater number of channels can be implemented in hardware, software executing on one or more processors, firmware, or combinations thereof, in various combinations of serial operations and parallel operations in various embodiments. FIG. 3 depicts further details in an example of rearranging data of an input image 202 from three channels 210 to twelve channels 208.
In FIG. 3, the 4×4 pixel group 204 (see FIG. 2) is arranged as three 4×4 pixel color groups 302, 304, 306, one for each color red, green, blue. The respective color value for each pixel in the 4×4 pixel group 204 is represented in the respective 4×4 pixel color group. The red values for the pixels of the 4×4 pixel group 204 are in the red 4×4 pixel color group 302, the green values for the pixels of the 4×4 pixel group 204 are in the green 4×4 pixel color group 304, and the blue values for the pixels of the 4×4 pixel group 204 are in the blue 4×4 pixel color group 306. Each 4×4 pixel color group 302, 304, 306 is arranged as four 2×2 pixel color subgroups. The red 4×4 pixel color group 302 is arranged as four 2×2 pixel red subgroups 308, 310, 312, 314. The green 4×4 pixel color group 304 is arranged as four 2×2 pixel green subgroups 316, 318, 320, 322. The blue 4×4 pixel color group 306 is arranged as four 2×2 pixel blue subgroups 324, 326, 328, 330. Each of the four 2×2 pixel color subgroups, for each of the three colors, is output as a respective channel to the systolic array 104 (see FIG. 1), for a total of twelve channels. This is further depicted in the right half of FIG. 3 as RGB three channel data 332 expanded to twelve channels. The RGB three channel data 332 is shown in three channels, R, G, B. The twelve output channels of the data format converter 114 are shown in groups. A 0th group 334 of three color channels R0, G0, B0 is from the 0th row of 2×2 pixel color subgroups 314, 322, 330. A first group 336 of three color channels R1, G1, B1 is from the first row of 2×2 pixel color subgroups 312, 320, 328. A second group 338 of three color channels R2, G2, B2 is from the second row of 2×2 pixel color subgroups 310, 318, 326. A third group 340 of three color channels R3, G3, B3 is from the third or bottom row of 2×2 pixel color subgroups 308, 316, 324. FIG. 4 depicts a data converter 402 reshaping data into an AI engine 404, in an embodiment. The AI engine 404 has multiple convolution layers 406, 408, 410, 412.
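The rearrangement of FIGS. 2 and 3 is equivalent to a stride-2 "pixel unshuffle" (space-to-depth) of each color plane: each output channel collects one of the four 2×2 sampling phases of a color. A minimal sketch in plain Python, assuming channel-height-width nested lists and a color-major channel ordering (the figures interleave the colors as R0, G0, B0, R1, and so on, so the ordering here is illustrative):

```python
def pixel_unshuffle(img, block):
    """Rearrange C channels of H x W data into C * block * block
    channels of (H/block) x (W/block) data, as in FIGS. 2-3 (block=2)
    or FIG. 6 (block=4). img is a list of channels, each a list of rows."""
    channels, height, width = len(img), len(img[0]), len(img[0][0])
    out = []
    for c in range(channels):
        for dy in range(block):
            for dx in range(block):
                # One sampling phase: pixels adjacent in the output
                # channel are non-adjacent (stride `block`) in the image.
                out.append([[img[c][y * block + dy][x * block + dx]
                             for x in range(width // block)]
                            for y in range(height // block)])
    return out

# A 4x4 RGB image: 3 channels become 3 * 2 * 2 = 12 channels of 2x2.
rgb = [[[c * 100 + y * 10 + x for x in range(4)] for y in range(4)]
       for c in range(3)]
twelve = pixel_unshuffle(rgb, 2)
```

With a 448×448 image and block 2, each of the twelve channels is 224×224; with block 4, as in FIG. 6, three channels become forty-eight channels.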
Each convolution layer 406, 408, 410, 412 has a convolution module, a batch normalization module, and a ReLU (rectified linear unit). The first convolution layer 406 receives output of the data converter 402. The second convolution layer 408 receives output of the first convolution layer 406 through a data converter 414. The third convolution layer 410 receives output of the second convolution layer 408. The fourth convolution layer 412 receives output of the third convolution layer 410. Input images are reshaped by the data converter 402, for example using a CPU (central processing unit) or GPU (graphics processing unit), before feeding into the AI engine 404. Output of the first convolution layer 406 is reshaped by the data converter 414 at the input of the second convolution layer 408. With these two data converters 402, 414, the data entering the first couple of layers is reshaped to increase the number of channels, because otherwise the channels would be much fewer than the number of MAC PEs receiving the channels. In one embodiment, the data converter 414 at the input of the second convolution layer 408 is different from the data converter 402 that feeds into the AI engine 404, because the data converter 414 may also include NCHW→NHCW reshaping from one data format to another. For these data formats, N is the batch number, C is the number of channels (also known as feature maps), H is the height and W is the width. FIG. 5 depicts experimental results, comparing MAC PE utilization in a systolic array for various numbers of channels. The configuration for the experiment has an array size of 64×64, i.e., the systolic array 104 (see FIG. 1) has M equals N equals sixty-four. There is an SRAM IFMAP (static random access memory input feature map) of size 2048, an SRAM filter of size 2048, an SRAM OFMAP (static random access memory output feature map) of size 2048, a YOLO tiny model (an object detection deep learning model often used in mobile and ADAS applications), and a weight stationary data flow.
Weight stationary means the model weights are loaded into the systolic array first and stay there until all the feature maps have passed through and been multiplied by the stationary weights to calculate the results. An input size of 416×416 using three channels 502 was found to have a MAC utilization of 10.55%. An input size of 208×208 using forty-eight channels 504 was found to have a MAC utilization of 24.11%, which is more than double the MAC utilization of the three channel input. An input size of 104×104 using one hundred and ninety-two channels 506 was found to have an only slightly higher MAC utilization of 25.00%. Analysis of the experimental results shows that the number of channels should be increased through rearranging of the image data, so that the number of channels is close to the number of PEs on the channel receiving side of the PE array. In this experiment, forty-eight channels 504 is closer in number to 64 PEs than three channels 502, and achieves higher MAC utilization. Analysis of the experimental results further shows that a number of channels larger than the number of PEs on the channel receiving side of the PE array does not significantly improve MAC utilization. In this experiment, one hundred and ninety-two channels 506 does not significantly improve MAC utilization over forty-eight channels 504. FIG. 6 illustrates an example of rearranging data of an input image 602 from three channels 618 (red, green, blue) to forty-eight channels 612, in an embodiment of the data format converter 600. Data of the input image 602 is input to the data format converter 600 as three channels 618, one each for red, green, blue (RGB). The data format converter 600 arranges the data in eight pixel by eight pixel groups (i.e., 8×8 pixel groups) 604. The data format converter 600 arranges each 8×8 pixel group 604 as sixteen two pixel by two pixel groups (i.e., 2×2 pixel groups) for each of the red, green, blue colors.
These forty-eight 2×2 pixel color groups are output as forty-eight channels 612 to a 64×64 MAC PE systolic array 614. Specifically in this example, the sixteen red 2×2 pixel color groups 606, sixteen green 2×2 pixel color groups 608, and sixteen blue 2×2 pixel color groups 610 formed from each of the 8×8 pixel groups 604 are output as forty-eight channels 612. It should be appreciated that further image sizes, arrangements of groups of pixels, subgroups or groups within groups of pixels, color groups or subgroups, and channels from data of an input image and from a data format converter are readily devised for further embodiments in keeping with the teachings herein. Image data can be rearranged in serial operations, parallel operations, or combinations thereof in various embodiments. FIG. 7 is a flow diagram of a method of operating an AI engine, in an embodiment. The method and variations thereof can be performed by one or more processors, and more specifically can be performed by an AI engine with a data format converter as described herein in various embodiments. The method and variations thereof can be embodied in instructions on a tangible, non-transitory, computer-readable medium, for execution by a processor. In an action 702, the data format converter arranges the data of an input image in groups of pixels. Examples of groups of pixels are shown in FIGS. 2, 3 and 6, and further sizes of groups and arrangements of groups are readily devised in keeping with teachings herein. In an action 704, the data format converter arranges each group of pixels as smaller subgroups of pixels for each color. Examples of subgroups of pixels in red, green and blue colors are shown in FIGS. 3 and 6, and further sizes of groups and subgroups, numbers of colors, and arrangements of subgroups are readily devised in keeping with teachings herein. In an action 706, the data format converter outputs data channels to a systolic array.
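The channel count produced by this kind of rearrangement follows directly from the block factor: a b×b block rearrangement multiplies the channel count by b squared. A small sketch follows; the helper names and the block-selection rule (largest b with C·b² less than or equal to N) are assumptions consistent with the analysis of FIG. 5, not an algorithm stated in the text:

```python
def rearranged_channels(c, block):
    # A block x block rearrangement multiplies channels by block**2.
    return c * block * block

def largest_block(c, n_pes):
    """Largest block factor whose rearranged channel count still fits
    the N input-side MAC PEs (channels <= N, and as close as possible)."""
    block = 1
    while rearranged_channels(c, block + 1) <= n_pes:
        block += 1
    return block

# RGB (3 channels) into a 64-wide array: block 4 gives 48 channels,
# matching the FIG. 6 example; block 8 would give 192, exceeding 64.
best = largest_block(3, 64)
```

This matches the experimental observation that forty-eight channels approach the sixty-four input-side PEs, while one hundred and ninety-two channels exceed them without further benefit.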
Each channel is for a subgroup of pixels of one color, for each group of pixels. Examples of output of data channels are shown in FIGS. 2, 3 and 6, and further numbers of channels and arrangements of groups of pixels, subgroups of pixels, color subgroups of pixels and corresponding channels are readily devised in keeping with teachings herein. With reference to FIGS. 1-7, various embodiments of a data format converter reshape input images according to a MAC array configuration in hardware design. One or more data format converters reshape data entering the first couple of layers of a MAC array to a number of channels that is less than or equal to the number of PEs receiving the channels at that layer, with a data transformation designed to maximize the hardware utilization of the MAC array. Embodiments of a method described herein can apply to current AI engines or newly developing engines. The reshape can be done in other computation resources like a CPU or GPU, or in the data transform module in an AI engine. Reshaping so that the number of channels produced by the data format converter for input to one side of a systolic array is closer to, but less than or equal to, the number of MAC PEs on the input side of the systolic array results in greater MAC PE utilization in the first and second convolution layer inference(s) in the systolic array than would be so for inputting a lesser number of channels to the one side of the systolic array. The foregoing description, for the purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings.
The embodiments were chosen and described in order to best explain the principles of the embodiments and their practical applications, to thereby enable others skilled in the art to best utilize the embodiments and various modifications as may be suited to the particular use contemplated. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
11861486 | DETAILED DESCRIPTION OF THE EMBODIMENT Specific structural or step-by-step descriptions for the embodiments according to the concept of the present disclosure disclosed in the present specification or application are merely illustrative for the purpose of describing the embodiments according to the concept of the present disclosure. The examples according to the concept of the present disclosure may be carried out in various forms and are not to be interpreted as limited to the examples described in the present specification or application. Since the embodiments according to the concept of the present disclosure may be changed in various ways and take various forms, specific embodiments will be illustrated in the drawings and described in detail in the present disclosure or application. However, it should be understood that the examples according to the concept of the present disclosure are not limited to the specific examples, but include all changes, equivalents, or alternatives which are included in the spirit and technical scope of the present disclosure. Terminologies such as first and/or second may be used to describe various components, but the components are not limited by these terminologies. These terminologies are used only to distinguish one element from another element. For example, a first element may be referred to as a second element without departing from the scope in accordance with the concept of the present disclosure and, similarly, a second element may be referred to as a first element. It should be understood that, when it is described that an element is “coupled” or “connected” to another element, the element may be directly coupled or directly connected to the other element, or coupled or connected to the other element through a third element. In contrast, when it is described that an element is “directly coupled” or “directly connected” to another element, it should be understood that no element is present therebetween.
Other expressions which describe the relationship between components, for example, “between” or “directly between,” or “adjacent to” and “directly adjacent to,” should be interpreted in the same manner. Terminologies used in the present disclosure are used only to describe specific examples and are not intended to limit the present disclosure. A singular form may include a plural form if there is no clearly opposite meaning in the context. In the present specification, it should be understood that terms “include” or “have” indicate that a feature, a number, a step, an operation, a component, a part, or a combination thereof described in the specification is present, but do not exclude in advance the possibility of the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof. Unless defined otherwise, all terms used herein, including technological or scientific terms, have the same meaning as those generally understood by a person with ordinary skill in the art. Terminologies which are defined in a generally used dictionary should be interpreted to have the same meaning as in the context of the related art, and are not to be interpreted as having an ideally or excessively formal meaning unless clearly defined in this specification. When the examples are described, a technology which is well known in the technical field of the present disclosure and is not directly related to the present disclosure may be omitted. This is because unnecessary description is omitted so as to convey the gist of the present disclosure clearly without obscuring it. Definition of Terminologies Here, in order to help the understanding of the disclosure proposed in the present specification, terminologies used in the present specification will be defined in brief.
NPU is an abbreviation for a neural processing unit (or an electronic apparatus) and refers to a computer processor specialized for the operation of an artificial neural network model, separately from the central processing unit (CPU). ANN is an abbreviation for a computer-implemented artificial neural network and refers to a network which connects nodes in a layered structure, imitating the connection of the neurons in the human brain through synapses, to imitate human intelligence. DNN is an abbreviation for a deep neural network and may mean that the number of hidden layers of the artificial neural network is increased to implement higher artificial intelligence. CNN is an abbreviation for a convolutional neural network and is a neural network which functions similarly to the image processing performed in the visual cortex of the human brain. The convolutional neural network is known to be appropriate for image processing, and is known to make it easy to extract features of input data and identify patterns in the features. Hereinafter, the present disclosure will be described in detail by describing preferred examples of the present disclosure with reference to the accompanying drawings. Hereinafter, examples of the present disclosure will be described in detail with reference to the accompanying drawings. FIG. 1 is a schematic conceptual diagram illustrating a schematic artificial neural network model. Hereinafter, the operation of the shown artificial neural network model 110a that can be operated in the neural network processing unit (not shown, but conventionally understood as a computing processing chip or a plurality of computing processing chips) will be described. The shown artificial neural network model 110a of FIG. 1 may be an artificial neural network trained to perform various inference functions, such as object recognition and voice recognition. The artificial neural network model 110a may be a deep neural network (DNN).
However, the artificial neural network model 110a according to examples of the present disclosure is not limited to a deep neural network. For example, the artificial neural network model 110a may be implemented as a model such as Transformer, YOLO, BiseNet, RCNN, VGG, VGG16, DenseNet, SegNet, DeconvNet, DeepLAB V3+, U-net, SqueezeNet, Alexnet, ResNet18, MobileNet-v2, GoogLeNet, Resnet-v2, Resnet50, Resnet101, Inception-v3 and the like. However, the present disclosure is not limited to the above-described models. Also, the artificial neural network model 110a may be an ensemble model based on at least two different models. Hereinafter, an inference process performed by the exemplary artificial neural network model 110a will be described. The artificial neural network model 110a may be an example of a deep neural network model including an input layer 110a-1, a first connection network 110a-2, a first hidden layer 110a-3, a second connection network 110a-4, a second hidden layer 110a-5, a third connection network 110a-6, and an output layer 110a-7. The first hidden layer 110a-3 and the second hidden layer 110a-5 may also be referred to as a plurality of hidden layers. It is noted that the present disclosure is not limited only to the artificial neural network model 110a illustrated in FIG. 1. The input layer 110a-1 may exemplarily include input nodes x1 and x2. That is, the input layer 110a-1 may include information about two input values. It is noted that the input layer 110a-1 may include information about more than two input values. For example, the first connection network 110a-2 may include but is not limited to information about six weight values for connecting the nodes of the input layer 110a-1 to the nodes (i.e., the three nodes shown) of the first hidden layer 110a-3, respectively. Each weight value is multiplied with the input node value, and the accumulated value of the multiplied values is stored in the first hidden layer 110a-3.
It is noted that the first hidden layer 110a-3 and the second hidden layer 110a-5 may include more than three nodes. As shown in FIG. 1, the first hidden layer 110a-3 may include nodes a1, a2, and a3. That is, the first hidden layer 110a-3 may include information about three node values. For example, the second connection network 110a-4 may include information about nine weight values for connecting the three nodes of the first hidden layer 110a-3 to the three nodes of the second hidden layer 110a-5, respectively. It is noted that the second connection network 110a-4, like any other connection network, may include information not limited to a certain fixed number of weight values. The weight value of the second connection network 110a-4 is multiplied with the node value input from the corresponding first hidden layer 110a-3, and the accumulated value of the multiplied values is stored in the second hidden layer 110a-5. For example, the second hidden layer 110a-5 may include nodes b1, b2, and b3. That is, the second hidden layer 110a-5 may include information about three node values. It is noted that the number of nodes included in any hidden layer is not limited to three. For example, the third connection network 110a-6 may include information about six weight values which connect the nodes of the second hidden layer 110a-5 and the nodes of the output layer 110a-7, respectively. The weight value of the third connection network 110a-6 is multiplied with the node value input from the second hidden layer 110a-5, and the accumulated value of the multiplied values is stored in the output layer 110a-7. For example, the output layer 110a-7 may include nodes y1 and y2. That is, the output layer 110a-7 may include information about two node values. It is again worth noting that the number of nodes included in each layer is not limited to the number shown in the sample model in FIG. 1. FIG. 2A is a schematic diagram showing the basic structure of a convolutional neural network (CNN).
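The multiply-and-accumulate propagation through the connection networks of FIG. 1 can be sketched in a few lines. The weight values below are hypothetical (FIG. 1 specifies only the counts: six, nine, and six weights), and activation functions are omitted, as in the description above:

```python
def dense(inputs, weights):
    """One connection network: weights[j][i] connects input node i to
    output node j; each output is the accumulated sum of products."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

x = [1.0, 2.0]                                   # input nodes x1, x2
w1 = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]        # six weights, 2 -> 3
w2 = [[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [0.3, 0.3, 0.3]]  # nine, 3 -> 3
w3 = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]          # six weights, 3 -> 2

a = dense(x, w1)   # first hidden layer: nodes a1, a2, a3
b = dense(a, w2)   # second hidden layer: nodes b1, b2, b3
y = dense(b, w3)   # output layer: nodes y1, y2
```

Each call multiplies every weight with its input node value and stores the accumulated result in the next layer, exactly the multiply-accumulate step described for the connection networks 110a-2, 110a-4, and 110a-6.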
Referring to FIG. 2A, an input image may be displayed as a two-dimensional matrix including rows of a specific size and columns of a specific size. The input image may have a plurality of channels, wherein the channels may represent the number of color components of the input data image. The convolution process means performing a convolution operation with the kernel (i.e., a two-dimensional matrix) while traversing the input image at a specified interval. When the convolutional neural network goes from the current layer to the next layer, the weights between layers are reflected through convolution and transmitted to the next layer. For example, convolution can be defined by two main parameters: the size of the windows extracted from the input (typically a 1×1, 3×3, or 5×5 matrix) and the depth of the output feature map (the number of kernels), and these key parameters can be computed by convolution. These convolutions may start at depth 32, continue to depth 64, and end at depth 128 or 256. Convolution can be executed by sliding these windows of size 3×3 or 5×5 over the 3D input feature map, stopping at every position, and extracting 3D patches of surrounding features. Each of these 3D patches can be transformed into a 1D vector through a tensor product with a learned weight matrix, called the weights, which is the same for every patch. These vectors can be spatially reassembled into a 3D output map. All spatial locations of the output feature map may correspond to the same locations of the input feature map. A convolutional neural network may include a convolutional layer that performs a convolution operation between input data and a kernel (i.e., a weight matrix) that is learned over many iterations of gradient update during a learning process. If (m, n) is the kernel size and W is set as the weight value, the convolution layer can perform convolution of the input data and the weight matrix by calculating the dot product.
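As a sketch of the sliding-window dot product just described, the following Python function convolves a single-channel input with an (m×n) kernel, with no padding. The function name and test values are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide an (m x n) kernel across the image at the given stride and
    take the dot product of the kernel with each extracted patch."""
    m, n = kernel.shape
    H, W = image.shape
    out_h = (H - m) // stride + 1
    out_w = (W - n) // stride + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            patch = image[r * stride:r * stride + m,
                          c * stride:c * stride + n]
            out[r, c] = np.sum(patch * kernel)   # dot product with weights
    return out

img = np.arange(25, dtype=float).reshape(5, 5)   # 5x5 input as in FIG. 2B
k = np.ones((3, 3))                              # 3x3 kernel of ones
print(conv2d(img, k).shape)   # (3, 3)
```

With a 3×3 kernel, stride 1, and no padding, a 5×5 input yields a 3×3 output feature map; a multi-channel layer would repeat this per kernel and sum over input channels.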
The step size by which the kernel slides across the input data is called the stride length, and the kernel area (m×n) can be called the receptive field. The same convolutional kernel is applied across different locations of the input, which reduces the number of weights to be learned. This also enables position-invariant learning, wherein if a significant pattern is present in the input, the convolution filter can learn that pattern regardless of its position in the sequence. A convolutional neural network can be tuned or trained so that input data lead to a specific inference output. A convolutional neural network may be tuned using backpropagation based on comparisons between the inference output and the ground truth until the inference output progressively matches or approximates the ground truth. A convolutional neural network can be trained by adjusting the weights between neurons based on the difference between the ground truth data and the actual output. FIG. 2B is a schematic diagram illustrating the operation of the convolutional neural network. Referring to FIG. 2B, for example, an input image is shown as a two-dimensional matrix having a size of 5×5. In addition, the diagram illustrates three nodes, i.e., channel 1, channel 2, and channel 3, by way of illustration. At convolution layer 1, the convolution operations are independently conducted in multiple channels, each of which processes one kernel. The input image is convolved with kernels 1, 2, and 3 for channels 1, 2, and 3 at the first, second, and third nodes of layer 1, respectively, and as a result, feature maps 1, 2, and 3 are output, respectively. Similarly, at the pooling layer 2, the pooling operations are independently conducted in multiple channels, each of which processes one kernel. The feature maps 1, 2, and 3 output from layer 1 are input to the three nodes of layer 2. Layer 2 may receive the feature maps output from layer 1 as input and perform pooling.
The pooling may reduce the size of a matrix or emphasize a specific value in it. Pooling methods include max-pooling, average pooling, and min-pooling. Max-pooling is used to collect the maximum values in a specific region of a matrix, average pooling can be used to find the average within a specific region, and min-pooling can be used to select the minimum pixel value within a specific region of a matrix. In the example of FIG. 2B, each feature map of a 5×5 matrix is reduced to a 4×4 matrix by pooling. Specifically, the first node of layer 2 receives the feature map 1 for channel 1 as an input, performs pooling, and outputs it as, for example, a 4×4 matrix. The second node of layer 2 receives the feature map 2 for channel 2 as an input, performs pooling, and outputs, for example, a 4×4 matrix. The third node of layer 2 receives the feature map 3 for channel 3 as an input, performs pooling, and outputs it as a 4×4 matrix, for example. Similarly, at the convolution layer 3, the convolution operations are independently conducted in multiple channels, each of which processes one kernel. The first node of layer 3 receives the output from the first node of layer 2 as input, performs convolution with kernel 4, and outputs the result. The second node of layer 3 receives the output from the second node of layer 2 as an input, performs convolution with kernel 5 for channel 2, and outputs the result. Similarly, the third node of layer 3 receives the output from the third node of layer 2 as input, performs convolution with kernel 6 for channel 3, and outputs the result. In this way, convolution and pooling are alternately repeated, and finally, the output may be passed to a fully connected layer. The corresponding output may be input to an artificial neural network for image recognition again. The CNN described so far is the most widely used method in the computer vision field among the various deep neural network (DNN) methods.
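The three pooling variants described above can be sketched as one helper; with a 2×2 window and stride 1, a 5×5 feature map shrinks to 4×4, matching the example of FIG. 2B. The helper below is an illustrative sketch, not the disclosed hardware.

```python
import numpy as np

def pool2d(x, size=2, stride=1, mode="max"):
    """Apply max-, average-, or min-pooling with a (size x size) window."""
    H, W = x.shape
    out_h = (H - size) // stride + 1
    out_w = (W - size) // stride + 1
    op = {"max": np.max, "avg": np.mean, "min": np.min}[mode]
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            region = x[r * stride:r * stride + size,
                       c * stride:c * stride + size]
            out[r, c] = op(region)   # reduce the window to one value
    return out

fmap = np.arange(25, dtype=float).reshape(5, 5)   # a 5x5 feature map
print(pool2d(fmap, mode="max").shape)   # (4, 4)
```

Swapping `mode` selects which value each window emphasizes: its maximum, its average, or its minimum.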
However, one of the disadvantages of performing CNN inference is that it has to compute a very large amount of floating-point numbers and requires additional parameters for floating-point numbers. Therefore, the CNN operation is usually accelerated/facilitated by using specialized hardware, such as a graphics processing unit (GPU). In particular, various deep learning development frameworks such as TensorFlow, ONNX, and PyTorch have appeared, and these frameworks allow users to easily accelerate computation using GPUs. However, GPUs have the disadvantage of consuming a lot of electrical power, making them unsuitable for performing CNNs in small computing systems. Therefore, research has been conducted to accelerate CNNs using field programmable gate array (FPGA) or application specific integrated circuit (ASIC)-based hardware, which consumes much less power but has lower processing speed than a GPU. ASIC-based accelerators typically outperform FPGA-based accelerators in terms of both performance and energy efficiency. The main reason is that ASICs can run at lower power consumption and faster clocks than FPGAs. On the other hand, the memory requirement of running a CNN remains high because a CNN requires a very large number of parameters due to its reliance on floating-point calculations. In the case of AlexNet, the CNN structure that won the 2012 ImageNet recognition challenge, about 240 MB of parameters were required for floating point. Parameters of this size are unsuitable for storage in the memory of a small computing system. In order to solve this problem, research on low precision networks that reduce the size of the input values or the parameter size of the layers of the CNN is desired and has been conducted. <Binarized Neural Network (BNN)> Among the studies of low precision networks, the binarized neural network (BNN) emerged.
A binarized neural network (BNN) is an extreme form of a low precision neural network, in which the weights and the layer input values are binarized to +1/−1. That is, a BNN is a neural network composed of 1-bit parameters. In a BNN, the multiplication and accumulation (MAC) operation of a CNN is simplified, and for low-complexity images (CIFAR-10, MNIST, SVHN) there is little difference in the accuracy of the outcome from a CNN using floating point. The BNN therefore has an efficient structure for accelerated processing by less-power-consuming hardware. The biggest reason is that the size of the memory required to load the parameters is reduced by about 32 times, and as a result of the reduction, it is easy to load most of the parameters into on-chip RAM. As such, since a BNN does not require multiplication operations and its memory usage is extremely reduced, hardware resources and electricity consumption are trimmed down, making machine learning via a BNN more economical. More specifically, a BNN uses the XNOR operation (in lieu of multiplications and cumulative additions), which is a logical operation, to perform 1-bit operations. Multiplication can be implemented through the XNOR operation, and cumulative addition can be implemented through the pop-count instruction, which determines the number of bits set to 1 in a register. Therefore, real-number (i.e., floating-point) or integer multiplication and addition are not required, thereby increasing the operation speed. That is, since the operation unit is reduced from 32 bits to 1 bit, the memory bandwidth, in theory, is increased by 32 times. Table 1 below is an example of the XNOR operation.

TABLE 1
Input     Output
a  b      a XNOR b
0  0      1
0  1      0
1  0      0
1  1      1

To implement cumulative addition after multiplication, the pop-count instruction is utilized. The pop-count instruction returns the number of bits set to 1, as shown in Table 2 below.
Cumulative addition is possible by multiplying the result of the pop-count instruction by 2 and subtracting the total number of bits.

TABLE 2
8-bit register a:             1011 0100
8-bit register b:             0110 1101
a XNOR b:                     0010 0110
pop-count(a XNOR b):          3
2 * pop-count(a XNOR b) − 8:  −2

After binarizing the parameters of the BNN as shown in the following equation, N multiplications can be accelerated through one XNOR logic operation by packing N parameters into N-bit registers.

γ_b = +1 if γ ≥ 0, −1 otherwise    [Equation 1]

Above is the general concept of BNN described at the theoretical level. However, the theoretical concept of BNN leaves a lot of practical nuances to be ironed out to make BNN useful. Hereinafter, the system, device, and apparatus disclosed as embodiments of a materialized BNN will be described. Examples of the Present Disclosure Introduction Recently, machine learning (ML) has become one of the most popular technologies because it can be easily applied in various fields. In particular, DNN, as one approach of ML, has been proven to have high accuracy and remarkable performance in performing classification tasks in the computer vision and speech recognition fields. For many applications that require higher accuracy while having large data sets, there is a tendency to study deep neural networks with more parameters and layers, with larger model size. As DNNs become more complex, the memory demands for parameter storage also increase, and more computations are required, which greatly affects power and resource efficiency. In particular, in order to handle the increased number of operations, a larger number of logic gates are required in designs for implementation in FPGAs or ASICs, resulting in increased energy consumption while lowering processing performance. On the other hand, the data required for most large DNNs may not be completely stored in an internal (on-chip) memory, and thus an external memory (e.g., off-chip DRAM) must be frequently accessed.
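The XNOR/pop-count MAC of Tables 1 and 2 can be reproduced in a few lines of Python. The register values are the sample values from Table 2; the encoding of +1 as bit 1 and −1 as bit 0 is the usual BNN convention assumed here.

```python
# Binary MAC via XNOR and pop-count (Tables 1 and 2).
# For N packed bits, the signed dot product is 2*popcount(a XNOR b) - N.
N = 8
a = 0b10110100            # 8-bit register a from Table 2
b = 0b01101101            # 8-bit register b from Table 2
mask = (1 << N) - 1

xnor = ~(a ^ b) & mask    # XNOR implemented as inverted XOR
popcount = bin(xnor).count("1")
mac = 2 * popcount - N    # cumulative addition per Table 2
print(xnor == 0b00100110, popcount, mac)   # True 3 -2
```

The result −2 equals the signed dot product of the two registers when each 1 bit is read as +1 and each 0 bit as −1, which is why no real multiplier or adder is needed.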
Such access consumes considerable energy and time, and causes computation performance degradation. Among recent studies, optimization methods for improving DNN performance have been suggested, and one of these various methods is a method of lowering the calculation precision for potentially redundant information. In this method, since all parameters are binarized, the size of the memory demand can be drastically reduced, and since the multiplication operation is replaced with the XNOR function, the size of the operation can be reduced, and thus the energy consumption can be dramatically reduced. However, binarizing all parameter and feature map values has its own disadvantage: lower accuracy. Accordingly, an example of the present disclosure aims to provide a BNN accelerator (i.e., an NPU, a hardware architecture/device) that can efficiently use resources while achieving the best performance and maintaining high accuracy. There are two approaches to implementing a DNN in hardware. The first approach is a single layer architecture, in which one hardware block processes one layer at a time, that is, a layer-by-layer architecture. The second is a streaming architecture that implements the entire DNN. Compared to the single-layer architecture, the streaming architecture can dramatically increase performance regardless of the number of layers. However, the streaming architecture has disadvantages such as high cost and low flexibility in design. Therefore, the example of the present disclosure presents an optimally balanced BNN streaming hardware with the optimal performance and efficiency. The optimal performance and efficiency may be expressed as a ratio of frames per second (FPS) to power. The main features of the hardware architecture presented by the example of the present disclosure may be summarized as follows. An efficient pipeline unrolling mechanism that maximizes the utilization of the max-pooling layer: a line buffer can provide more than one input window at the same time.
Therefore, since the OR operation can always be utilized in the pooling layer, power and resource efficiency can be improved. Also, due to the nature of the streaming architecture, the memory for storing weight values may be eliminated. Accordingly, the XNOR logic gate may be removed from the binary convolution layer or replaced with a NOT gate. A combination of a weight reuse scheme and K-means, which is referred to as a MAC operation scheme, is applied to both the conventional convolutional layer and the fully-connected layer: through this, additional hardware costs, timing consumption, and the number of flip-flops used for synchronization can be reduced. Moreover, since the proposed architecture is a streaming architecture, the MAC operation method can be implemented directly without using additional hardware. A MAC operator that compresses the pop-count tree with two options for the adder (i.e., a 6-bit adder and a 3-bit add compressor): it is a design that helps reduce resources and energy, and can provide more than one output using the same pop-count instruction. The proposed accelerator (i.e., NPU) facilitates hardware implementation for various types and a workflow for automating the hardware implementation. In particular, the workflow takes the number of convolutional layers, the number of fully connected layers (FCNs), the number of channels, the bit-width setting for the input of each layer, and the channels using the same pop-count instruction, and a script automatically generates RTL (Register Transfer Level) code based on the details provided by the user. In order to verify the performance of the proposed architecture, tests were conducted using the MNIST and Cifar-10 benchmark data sets, and as a result, it was confirmed that the architecture consumes 3 times fewer lookup tables (LUTs) compared to the conventional architecture with the same accuracy and exhibits almost the same FPS/W. Also, the proposed architecture could eliminate the FPGA block RAM and DSP.
Through this performance verification, it was confirmed that the architecture proposed in the present disclosure is a BNN hardware architecture with the best power and area efficiency paired with high accuracy. In the following Section II, the theories used to optimize the BNN hardware implementation will be described in detail. In Section III, the proposed hardware architecture is described in detail. In Section IV, the process of generating a register transfer level (RTL) design and the performance of the architecture proposed in the present disclosure are described in comparison with other studies. II. BNN THEORY BACKGROUND FOR UNDERSTANDING THE PROPOSED HARDWARE ARCHITECTURE II-1. Theoretical Background of BNN A BNN is a kind of artificial neural network in which weights and activation outputs are limited to negative and positive values, i.e., −1 and +1. To convert real variables into these values, two different binarizing functions can be used. First, the deterministic function is as follows:

x^b = Sign(x) = +1 if x ≥ 0, −1 otherwise    [Equation 2]

Second, the stochastic function is as follows:

x^b = Sign(x) = +1 with probability ρ = σ(x), −1 with probability 1 − ρ    [Equation 3]

Here, σ(x) = max(0, min(1, (x+1)/2)), and x^b is the output of the function after binarization. While deterministic functions are implemented in actual hardware, probabilistic functions may be implemented in actual hardware as well. FIG. 3 is a schematic diagram illustrating a structure for performing an XNOR operation and an accumulation operation in a convolutional layer. By using the binarizing function in a BNN, all weights and outputs of the convolutional layer and the fully connected layer are reduced to one bit before being used for the next operation. Accordingly, all multiplication operations that consume a lot of hardware resources can be replaced with a much simpler XNOR logic gate 123, as shown in FIG. 3.
The adder tree 125 shown in FIG. 3, used for the next process, accumulation, includes a pop-count performing unit, so that its structure can be made much simpler. In addition, since the MAC (i.e., multiply-accumulate) operation is a major factor in overloading the neural network, BNN performance can be improved by using batch-normalization and max-pooling. The techniques applied to each type of operation are as follows. 1) Batch-Normalization and Binarization in BNN Unlike MAC operations, batch-normalization functions use floating-point parameters and operations such as division, square root, and multiplication. In general, the batch-normalized value of X can be calculated as follows:

Y = ((X − μ) / √(var + ε)) · γ + β    [Equation 4]

where ε is a small number to avoid round-off problems, μ and var represent the mean and variance of the training data, and γ and β are constants obtained during the learning process. This normalized value Y can be binarized as follows:

Z = 1 if Y ≥ 0; Z = 0 otherwise    [Equation 5]

The two steps of normalization and binarization can be combined into one through a simpler threshold comparison process, as shown in the following equation:

Z = 1 ⇔ ((X − μ) / √(var + ε)) · γ + β ≥ 0    [Equation 6]

If sign(γ) = 1 when γ > 0 and sign(γ) = 0 when γ < 0, then the following equation can be used:

Z = (X ≥ (−β · √(var + ε) / γ + μ)) XNOR sign(γ)    [Equation 7]

Furthermore, the combination of batch-normalization and binarization in a hardware implementation results in the output of a comparator and successive XNOR gates. 2) Maximum Pooling Operation in BNN In a BNN, after the batch-normalization operation, some layers may use max-pooling to reduce the activation input for successive layers. Theoretically, the output of the max-pooling operation can be binarized before being passed to the next layer. By exchanging the binarization module and the max-pooling module with each other, the batch-normalization and the binary function can be combined and output the result Z.
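The folding of Equations 4 through 7 into a single threshold comparison can be checked numerically. The μ, var, γ, and β values below are arbitrary stand-ins for trained constants, not values from the disclosure.

```python
import math
import random

# Check that batch-normalization + binarization (Equations 4-5)
# collapses to the threshold comparison of Equation 7.
mu, var, gamma, beta, eps = 0.3, 1.5, -0.8, 0.2, 1e-5

def bn_then_binarize(x):
    y = (x - mu) / math.sqrt(var + eps) * gamma + beta   # Equation 4
    return 1 if y >= 0 else 0                            # Equation 5

threshold = -beta * math.sqrt(var + eps) / gamma + mu    # Equation 7 threshold
sign_gamma = 1 if gamma > 0 else 0

def folded(x):
    # A comparator followed by an XNOR with sign(gamma).
    return 1 - (int(x >= threshold) ^ sign_gamma)

random.seed(42)
samples = [random.uniform(-10, 10) for _ in range(1000)]
print(all(bn_then_binarize(x) == folded(x) for x in samples))   # True

# After binarization, max-pooling over a binary window is just an OR:
window = [0, 1, 0, 0]
print(max(window) == int(any(window)))                          # True
```

Since γ is negative here, the XNOR with sign(γ) = 0 inverts the comparator output, exactly as Equation 7 requires.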
In addition, calculating the maximum value of a binary window is equivalent to taking the binary values as input and finding the output of an OR operation. FIGS. 4A, 4B, and 4C show examples of combining batch-normalization, binarization, and max-pooling in a pipeline architecture. As shown in FIGS. 4A to 4C, a series of processes starting with batch-normalization and ending with an OR operation may be represented as one pipeline. From a hardware implementation point of view, the OR operation is much simpler compared to computing the maximum value function in a non-binary neural network. 3) Weight Reuse Scheme As described above, the BNN has been proposed as a solution for minimizing hardware resources and power consumption. In particular, the BNN model with one-bit width has become known as an efficient solution that can reduce the computational load while maximizing data processing by using the operation of the XNOR logic gate and the pop-count instruction. However, when examining the pattern of weight values, additional optimization is needed because general BNN models still contain redundancy. Through this additional optimization, it is possible to eliminate some computational operations and reduce memory usage. A binary number is used to represent the two states "0" and "1". When choosing a random binary bit, the probability of it being 0 or 1 is 50%. Given two random sets of N binary bits, the number of bits of the first set that are individually repeated in the second set may be considered. When calculating K output channels of a binary convolution layer, each bit in the set of input values containing (M×M×C) binary bits is XNORed with the corresponding bit in each of the K sets of (M×M×C) binary kernel values. Here, M is the window size and C is the number of input channels. Consequently, a number of bits shared between two arbitrary sets of kernel bits can be expected.
Optimization can be achieved by reusing weights to take advantage of binary convolution. An XNOR operation and an operation by a pop-count instruction may be performed on a corresponding set of (M×M×C) binary weight values to generate an output, and the output may be reused to generate another output. For the purpose of a straightforward visualization, it can be assumed that there are N different bits between the two sets of binary kernel bits. For all i from 1 to N, let A ({A1, A2, A3, . . . AN}) be the differing bits of the first kernel set and B ({B1, B2, B3, . . . BN}) those of the second kernel set, and let the set of unknown input feature maps be X ({X1, X2, X3, . . . XM×M×C}). Here, {X1, X2, . . . XN} represents the input bits at the N positions where the two sets of binarized bits differ. When performing XNOR of one random bit with both 1 and 0 and summing the two outputs, the final output is always 1. Accordingly, over the N differing bits, the following equation holds:

N = Σ(i=1 to N) Xnor(Ai, Xi) + Σ(i=1 to N) Xnor(Bi, Xi)    [Equation 8]

In the two kernel sets, the remaining C ({C1, C2, C3, . . . CM×M×C−N}) kernel bits are all identical. Accordingly, using Equation 8, the results of performing the pop-count instruction on ({A, C} XNOR X) and ({B, C} XNOR X) may be calculated as in the following equations:

P1 = Σ(i=1 to N) Xnor(Ai, Xi) + Σ(i=N+1 to M×M×C) Xnor(Ci−N, Xi)    [Equation 9]

P2 = Σ(i=1 to N) Xnor(Bi, Xi) + Σ(i=N+1 to M×M×C) Xnor(Ci−N, Xi)
   = Σ(i=1 to N) Xnor(Bi, Xi) + P1 − Σ(i=1 to N) Xnor(Ai, Xi)
   = Σ(i=1 to N) Xnor(Bi, Xi) + P1 − (N − Σ(i=1 to N) Xnor(Bi, Xi))    [Equation 10]

Finally, the following equation can be used to calculate P2 based on P1:

P2 = P1 − N + 2 Σ(i=1 to N) Xnor(Bi, Xi)    [Equation 11]

According to Equation 7, the output of the second channel can be calculated as follows:

O2 = 2(P1 − N + 2 Σ(i=1 to N) Xnor(Bi, Xi)) − M×M×C    [Equation 12]

For the first convolutional layer, the full-precision input pixels can be used very similarly. Specifically, the sum of the two multiplications A×(−1) and A×(+1) is 0, where A is any full-precision pixel.
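The weight-reuse identity of Equation 11 can be verified with random bit vectors. The sizes below (a 3×3 kernel over 16 channels, with N = 10 differing bits) are arbitrary choices for the check, not parameters from the disclosure.

```python
import random

# Verify Equation 11: P2 = P1 - N + 2 * sum of XNOR(B_i, X_i)
# over the N positions where the two kernels differ.
random.seed(1)
L = 3 * 3 * 16                        # M x M x C kernel bits
X = [random.randint(0, 1) for _ in range(L)]    # input bits
K1 = [random.randint(0, 1) for _ in range(L)]   # first kernel
K2 = list(K1)
diff = random.sample(range(L), 10)    # N = 10 differing positions
for i in diff:
    K2[i] ^= 1                        # flip bits to build the second kernel

xnor = lambda p, q: 1 - (p ^ q)
P1 = sum(xnor(k, x) for k, x in zip(K1, X))   # full pop-count, kernel 1
P2 = sum(xnor(k, x) for k, x in zip(K2, X))   # full pop-count, kernel 2

N = len(diff)
reused = P1 - N + 2 * sum(xnor(K2[i], X[i]) for i in diff)
print(P2 == reused)   # True
```

Only the N differing positions are re-evaluated; the pop-count over the shared bits is reused from the first channel, which is the saving the scheme exploits.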
Thus, consider D, the number of bits that differ between the two sets of kernel bits, and S2, the sum of the D multiplications between the D corresponding full-precision input pixels and the D differing bits of the second channel. The output of the second channel may be calculated using the following equation:

O2 = O1 − (0 − S2) + S2 = O1 + 2·S2    [Equation 13]

In this way, a full convolution operation can be implemented, which can save considerable hardware resources while preserving accuracy. III. ARCHITECTURE PROPOSED IN THE PRESENT DISCLOSURE III-1. High-Level Streaming Architecture FIG. 5 is a schematic diagram illustrating the concept of the BNN streaming architecture proposed in the present disclosure. As can be seen with reference to FIG. 5, the BNN streaming architecture 1000 includes a dedicated BNN accelerator (i.e., a dedicated BNN NPU) 100, a memory (e.g., a DDR memory) 200, and one or more direct memory access (DMA) units 300a and/or 300b. For example, the dedicated BNN accelerator (i.e., the dedicated BNN NPU) 100 and the DMAs 300a and/or 300b may be implemented as programmable logic (PL), and the memory 200 may be implemented as a processing system (PS). The dedicated BNN accelerator (i.e., dedicated BNN NPU) 100 uses one or more direct memory access (DMA) units 300a and/or 300b, and may be connected to a main memory (e.g., DDR memory) 200 through an AXI-4 stream bus. The dedicated BNN accelerator (i.e., dedicated BNN NPU) 100 may include a first block 110 for the first layer, a second block 120 for the second layer, a third block 150 for the i-th layer, and a fourth block 170 for the n-th layer. In FIG. 5, it is illustrated that the second layer is a convolutional layer, the i-th layer is the max-pooling layer, and the n-th layer is a fully connected layer. The memory 200 may be divided into two areas. A first area may be used to store an input image, and a second area may be used to store an output. The one or more DMAs 300a and/or 300b provide addresses and data lengths for the two memory areas.
Each input pixel from the first memory area is sequentially transferred to the dedicated BNN accelerator (i.e., the dedicated BNN NPU) 100. After a predetermined processing time, the classification result for the input image is output and transferred to the second memory area. Hereinafter, the bandwidth of the streaming data bus will be described. Unlike conventional artificial neural network accelerators (e.g., general NPUs) that can process only one layer of an artificial neural network at a time, the dedicated BNN accelerator (i.e., dedicated BNN NPU) 100 presented in the present disclosure may implement the entire artificial neural network in hardware. The dedicated BNN accelerator (i.e., dedicated BNN NPU) 100 presented in the present disclosure is based on a pipeline-type streaming architecture. That is, the dedicated BNN accelerator (i.e., the dedicated BNN NPU) 100 according to the present disclosure distributes the load generated while performing inference across the layers. The number of pipeline stages is equal to the number of layers. Therefore, if the pixels of the input image are continuously received, all layers can operate simultaneously, and very high performance can be achieved. Additionally, since the output of the previous layer is directly transferred to the next layer without intermediate storage, the propagation delay can be reduced and the size of the required memory can be remarkably reduced as well. Meanwhile, since all layers are implemented as different hardware modules, input data can be continuously processed without interruption. If the number of layers increases, only the pipeline needs to be lengthened, so there may be no performance degradation. For example, if it is assumed that an image of size E*F is input, data of the input image may be transmitted every clock cycle. In this case, the dedicated BNN accelerator (i.e., the dedicated BNN NPU) 100 according to the present disclosure may finish the inference of classifying the image in merely E*F clock cycles.
As a result, the performance of the dedicated BNN accelerator (i.e., the dedicated BNN NPU) 100 according to the present disclosure can be flexibly increased according to the number of image pixels input every clock cycle. Specifically, the blocks for each layer may be implemented under a pipeline scheme. As shown in FIG. 5, the second block 120 for the second layer, that is, the convolutional layer, may be divided into four parts (i.e., four sub-blocks). Schematically, as shown in FIG. 3, the first part (i.e., first sub-block) of the four parts is an XNOR logic gate 123 and performs a multiplication operation. The second part (i.e., second sub-block) is an adder tree 125 and may include a pop-count performing unit, as shown in FIG. 3. In addition, the third part (i.e., third sub-block) may be the batch-normalization performing unit 127, and the fourth part (i.e., fourth sub-block) may be the binarization unit 129 for performing binarization. If the third block 150 located after the second block 120 is for a max-pooling layer, the output of the binarization unit 129 can be transferred directly to the third block 150 for the max-pooling layer. As such, it is feasible to fully implement the convolutional layer and the pooling layer in the architecture presented in the present disclosure. FIG. 6 is a schematic diagram illustrating a connection relationship between the second block 120 and the third block 150 shown in FIG. 5. As shown in FIG. 6, the second block 120 for the second layer shown in FIG. 5 may include a first line buffer 121, an XNOR logic gate 123, an adder tree 125 including a pop-count performing unit, a batch-normalization performing unit 127, and a binarization unit 129. In addition, the third block 150 shown in FIG. 5 may include the second line buffer 151 and the max-pooling performing unit 153 and/or 155, as shown in FIG. 6.
In order to complete the process from input to output of the layer, the values of the pixels input from the previous layer are transferred to the first line buffer 121 of the second block. The first line buffer 121 transfers the values of each pixel to the XNOR logic gate 123. Specifically, when a predetermined number of pixel values are loaded into the first line buffer 121, the first line buffer 121 generates window values and transmits them to the XNOR logic gate 123. The output of the XNOR logic gate 123 is compared with a predetermined threshold value in the batch-normalization performing unit 127, and an appropriate operation is performed according to the weight γ for batch-normalization. Meanwhile, according to an example of the present disclosure, not all parameters are stored; only the predetermined threshold value is stored for the batch-normalization performing unit 127. Therefore, the need to use memory for storing the weights and the sign(γ) function for performing batch-normalization can be completely eliminated. According to an example of the present disclosure as described above, the predetermined threshold values may be transmitted without delay, thereby improving the processing speed. The outputs of the batch-normalization performing unit 127 and the binarization unit 129 are transferred to the second line buffer 151 of the third block 150. In FIG. 6, although it is shown that the third block 150 for the max-pooling layer is connected after the second block 120 for the convolutional layer, unlike what is shown in the figure, the third block 150 may instead be for a convolutional layer rather than the max-pooling layer. That is to say, the third block 150 can be a block for a layer of any nature, to fit the need. When a plurality of values loaded into the second line buffer 151 of the third block 150 reach a predetermined condition, an output window is generated and transmitted to the max-pooling performing unit 153 and/or 155.
The max-pooling performing unit 153 and/or 155 may convert the data by performing an OR operation, and output the data to the third line buffer 171 of the fourth block of a subsequent layer, if any. III-2. Microarchitecture for Window Generation in the Convolution and Pooling Layers Hereinafter, the architectures for the convolution layer and the pooling layer will be described in detail. FIG. 7 is a schematic diagram illustrating an example of a general convolution between C windows (M×M) and (K×C×M×M) filters. Referring to FIG. 7, an input feature map 710 of size (E×F) is shown for a clearer demonstration. The dimension/size of the filter 720 is represented as (M×M), and the number of input channels is represented as C. The number of output channels 711, 712, and 713 is represented as K. For the convolution operation, a window containing data of the input feature map 710 of size (M×M) is multiplied with a filter 720 of the same size. For example, M may be 3, and the size of the pooling window in the pooling layer may be (2×2). In order to transfer data to a subsequent layer, a shift-register based line buffer may be disposed at the rear end of the previous layer. If such a shift-register-based line buffer is used, the operation can be performed immediately when the required amount of data has been transferred, without waiting for the feature map to be completely generated in the previous layer. That is, since each layer does not need to wait until the previous layer is finished, the processing time can be significantly reduced. According to an example of the present disclosure, two types of line buffers are provided. The first type of line buffer may be a convolution line buffer (CLB), and the second type of line buffer may be a pooling line buffer (PLB). For example, as shown in FIG. 6, the first type of line buffer (i.e., CLB) may be the first line buffer 121 in the second block 120.
In this case, the first type of line buffer (i.e., CLB, that is, the first line buffer 121) may store values of input pixels from a previous layer and then provide a convolution window for a multiplication operation. In general, when the number of generated windows is only one, the first type of line buffer (i.e., CLB) may include ((M−1)×E+M) pipeline registers. FIG. 8 is a schematic diagram illustrating an example of an output having two windows on a first type of line buffer (i.e., CLB). As shown in FIG. 8, when the number of generated windows 820 and 830 increases every N clock cycles, the size of the first type of line buffer (i.e., CLB) may reach ((M−1)×E+M+N−1). In the process of multiplication and pop-count operation, the number of operations may be N times greater than that of generating one window. When the number of registers on the first type of line buffer (i.e., CLB) is (E×(M−⌊M/2⌋−1)+⌊M/2⌋+N), the first type of line buffer (i.e., CLB) begins to select appropriate values for the output window through the coordinates on the first type of line buffer (i.e., CLB). Next, after performing a pop-count operation, a corresponding valid output signal (e.g., out_valid) is asserted. For N>1, in order to continuously generate N windows every clock cycle, the first type of line buffer (i.e., CLB) receives N new input values during the same period. For the first convolutional layer, if the memory provides N input pixels every clock cycle, no problem would occur in any layer. The detailed process is shown in Table 3.

TABLE 3
Algorithm 1. Convolution line buffer pseudocode.
Input: Activation output of previous layer.
Output: Window (W) with size: M×M
1:  for ix = 0 to M−1 do
2:    for iy = 0 to M−1 do
3:      for i = 0 to N−1 do
4:        y = ry + iy − ⌊M/2⌋
5:        x = rx + ix + i − ⌊M/2⌋
6:        if x < 0 or x ≥ E then
7:          W[ix,iy] = padding_value
8:        else
9:          W[ix,iy] = L[(M−1−iy)E + (M−1−ix) + i]
10:       end if
11:     end for
12:   end for
13: end for

In Table 3 above, rx and ry are the window center coordinates on the frame.
ix and iy represent the coordinates of each pixel on the window, and L represents a CLB with size ((M−1)×E+M+N−1). In order to make Algorithm 1 easier to understand, it will be described with reference to FIGS. 9 and 10. FIG. 9 is a schematic diagram illustrating the position of a window and a first type of line buffer (i.e., CLB) on an image frame when ready to generate output. FIG. 10 is a schematic diagram illustrating the position of a window and a first type of line buffer (i.e., CLB) on an image frame when execution is completed. For example, as shown in FIG. 6, the first type of line buffer (i.e., CLB) may be the first line buffer 121 in the second block 120. The state of the first type of line buffer (i.e., CLB, that is, the first line buffer 121) when it starts to generate an output using one window 920 is visually shown in FIG. 9, and the state of the first type of line buffer (i.e., CLB, that is, the first line buffer 121) during image frame transmission is shown in FIG. 10. In the examples of FIGS. 9 and 10, M is 3 and E and F are 28 for the MNIST data set. In addition, when the Cifar-10 data set is used in the examples of FIGS. 9 and 10, E and F may be equal to 32. On the other hand, the second type of line buffer may be a pooling line buffer (PLB). For example, the second type of line buffer (i.e., PLB) may be the second line buffer 151 in the third block 150 as shown in FIG. 6. Specifically, as shown in FIG. 6, the second type of line buffer (i.e., PLB, that is, the second line buffer 151) is located at the front end of the third block 150 for the pooling layer and may be connected to the second block 120 including the batch-normalization performing unit 127. Alternatively, the second type of line buffer (i.e., PLB, that is, the second line buffer 151) may be located outside the third block 150, between the second block and the third block.
The output from the second block 120 is transferred to the second type of line buffer (i.e., PLB), that is, the second line buffer 151, and the windows generated from the PLB 151 are processed by the max-pooling performing unit 153 and/or 155. FIG. 11A is a schematic diagram illustrating a second type of line buffer (i.e., PLB), and FIG. 11B is a schematic diagram illustrating an output of a second type of line buffer (i.e., PLB). The second type of line buffer (i.e., PLB) may be the second line buffer 151 in the third block 150 as shown in FIG. 6. As shown in FIG. 11A, the second type of line buffer (i.e., PLB) need not consider padding on a boundary, unlike the first type of line buffer (i.e., CLB). Valid signals can be asserted only at corresponding locations in the input feature map. Assuming that the size of the window 1101 is 2×2 and the number of generated windows 1101 is 1, a valid signal can be enabled after every 2 clock cycles for the first pooling layer and after every 2*i clock cycles for the i-th pooling layer. This may satisfy the condition y % 2=0. In order to use the spare interval between the two intervals in which the second type of line buffer (i.e., PLB) creates a window, an embodiment of the present disclosure provides for generating windows from the second type of line buffer (i.e., PLB) every clock cycle by increasing the level of input data. In particular, when the max-pooling performing unit 153 shown in FIG. 11B is simultaneously provided with N (where N is >1 and is a multiple of 2) input values, the number of generated windows 1101 may be N/2. It can be confirmed that these windows in the pooling layer are generated every clock cycle when the condition y % 2=0 is satisfied. Accordingly, the second type of line buffer (i.e., PLB) may increase the processing speed N times for N values of parallel inputs. This also means that the hardware resources required for subsequent multiplication and pop-count operations can be cut in half.
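Because the activations entering the pooling layer are binary, the max-pooling performed by units 153 and/or 155 reduces to the OR operation mentioned earlier; a minimal software sketch (illustrative only):

```python
def binary_maxpool_2x2(fmap):
    # On {0,1} data, the maximum of a 2x2 window equals the logical OR
    # of its four bits, so no comparator hardware is needed.
    return [[fmap[2 * y][2 * x] | fmap[2 * y][2 * x + 1] |
             fmap[2 * y + 1][2 * x] | fmap[2 * y + 1][2 * x + 1]
             for x in range(len(fmap[0]) // 2)]
            for y in range(len(fmap) // 2)]
```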
The size of the second type of line buffer (i.e., PLB) will be described as follows. If the second type of line buffer (i.e., PLB) outputs N/2 windows corresponding to N parallel inputs, the size of the second type of line buffer may be determined by (E+N) registers. Further, when the number of parallel inputs provided from the previous layer gives (N/2>1), the size of the second type of line buffer (i.e., PLB) needs to be larger, and if (N/2=1), then the size does not need to be larger. Furthermore, the number of parallel input pixels does not need to be a power of two, nor does it need to divide E. For example, if the size of the pooling window is (2×2) and the number N of parallel inputs from the previous layer is 2, the size of the second type of line buffer (i.e., PLB) may be determined by (E+2) registers, and one pooling window may be generated every clock cycle, twice as often as when (N=1). On the other hand, if the previous layer provides (N=4) simultaneous inputs, the size of the second type of line buffer (i.e., PLB) may be determined by (E+4) registers, and two windows may be generated every clock cycle, four times more than when (N=1) is used. From the perspective of the operating mechanism, when the second type of line buffer (i.e., PLB) is completely filled with valid data, the second type of line buffer (i.e., PLB) starts to create pooling windows. In particular, the time delay from when the input signal is asserted valid can be determined based on (E+N)/N clock cycles. The details are described as Algorithm 2 shown in the table below. Algorithm 2 is described with reference to FIGS. 11A and 11B, which show the operation of the line buffer when (N=1).

TABLE 4
Algorithm 2. Pooling line buffer pseudocode.
Input: Sequential pixel chain from output of the Batchnorm.
Output: window (W) with size: 2×2
1: if ((x+1) % N == 0) and (y % 2) then
2:   for i = 0 to N/2−1 do
3:     W(i)(0,0) = L[E+1+2i]
4:     W(i)(0,1) = L[E+2i]
5:     W(i)(1,0) = L[1+2i]
6:     W(i)(1,1) = L[0+2i]
7:   end for
8: end if

III-3.
Micro-Architecture for MAC Operation and Batchnorm

After the above-described windows are created through the shift-register based line buffer, the data go through a series of operations, i.e., a multiplication operation, addition through a pop-count instruction, and batch-normalization, before being transferred to the next layer. These processes are always the most critical timing paths and the sections that consume the most energy. For this reason, the operating clock frequency is lowered, and system performance deteriorates. However, in the architecture presented in the present disclosure, the data paths of these processes are optimal, so the processing time delay can be minimized and the power consumption can be reduced. A detailed description is as follows. First, since all weight values are constants in the multiplication operation, XNOR gates can be replaced with NOT gates when the weight value is 0, and when the weight value is 1, the input may be directly connected to the pop-count instruction processing unit. Similarly, in a batch-normalization operation, the sign value (i.e., sign(γ)) determines whether to use a NOT gate or a direct connection. As such, either a NOT gate is selected based on the binarized weight, or the input value is bypassed based on the binarized weight. By replacing the XNOR gates in this way, the time delay can be significantly reduced and the use of memory resources can be minimized. Second, the architecture presented in the present disclosure can efficiently process the BNN using parallelism, a pipeline technique, and weight reuse optimization. This will be described in detail as follows.

1) Pipeline and Parallelism Mechanisms

A major factor in determining the performance of a processor is its maximum clock frequency. However, long critical timing paths act as a factor lowering the clock frequency. Accordingly, the present disclosure proposes adding intermediate registers into the BNN architecture to shorten these timing paths.
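The binary multiply-accumulate discussed above (XNOR followed by a pop-count) can be sketched as follows; with constant weights, each XNOR reduces per bit to either a NOT gate (weight 0) or a direct wire (weight 1). The function name and the {0,1}↔{−1,+1} encoding convention are illustrative assumptions:

```python
def xnor_popcount_mac(x_bits, w_bits):
    # XNOR = NOT(XOR); with {0,1} encoding of {-1,+1} values, the signed
    # dot product of the corresponding vectors equals 2*popcount - n.
    assert len(x_bits) == len(w_bits)
    pop = sum(1 ^ (x ^ w) for x, w in zip(x_bits, w_bits))
    return 2 * pop - len(x_bits)
```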
System performance can be significantly improved after the initial delay cycles of the intermediate register. By arranging the intermediate registers in appropriate places based on the requirements (frequency, area, power) and the parameter inputs (i.e., the number of channels and the input bit width), the delay path of the entire logic circuit can be made shorter than the target clock period. An example of a pipeline design added at the output of the multiplication operation is shown in FIG. 12. FIG. 12 is a schematic diagram illustrating an XNOR gate, an adder tree, and a pipeline of batch-normalization. As shown in FIG. 12, an adder tree 125 for the pop-count, including an XNOR gate 123 for the multiplication operation, and a batch-normalization performing unit 127 are connected by a pipeline. An intermediate register may be disposed between the adder tree 125, including the XNOR gate 123 and the pop-count, and the batch-normalization performing unit 127. The intermediate register may be placed at a plurality of positions, alternatively or simultaneously, as illustrated in the drawing. For example, a first region of the intermediate register may transmit a necessary first parameter (e.g., X) through a pipeline between the XNOR gate 123 and the adder tree 125. In addition, a second region of the intermediate register may transmit a necessary second parameter (e.g., Z) through a pipeline between the adder tree 125 and the batch-normalization performing unit 127. Alternatively, a plurality of intermediate registers may be provided. For example, a first intermediate register may transmit a necessary first parameter through a pipeline between the XNOR gate 123 and the adder tree 125. In addition, a second intermediate register may transmit a necessary second parameter through a pipeline between the adder tree 125 and the batch-normalization performing unit 127. Regarding the parallelism technique, concurrent computing helps to improve overall system performance in hardware implementations.
However, there are trade-offs in the parallelism technique: i) it requires a significant amount of hardware, which increases power consumption, and ii) it increases congestion, resulting in design difficulties. Since both the weight data and the feature map data are reduced to 1 bit in a BNN, many loops in the convolution operation cannot function properly without sufficient hardware resources. This will be described with reference to FIG. 13. FIG. 13 is a schematic diagram illustrating all six loops of a convolution operation as code. Among the six loops shown in FIG. 13, the inner loops from the third loop to the sixth loop may be unrolled. First, by unrolling (or expanding) loops 3 through 6, a balance can be achieved between data processing and data generation. This suppresses idle time from occurring in the subsequent layer regardless of the filter size and the number of kernels in the subsequent layer. By unrolling (or expanding) all of these loops in this way, all input windows created in the line buffer can be processed simultaneously, resulting in a significantly reduced time delay. Additionally, loop 2 can be unrolled to improve hardware utilization. Unrolling loop 2 can be achieved simply by increasing the number of windows generated every clock cycle from the line buffer and duplicating the operation of the MAC block.

2) MAC Operation

As described in Section II above, it is very effective to utilize the weight reuse technique to optimize the pop-count instruction execution. The weight reuse technique can utilize graph partitioning and the Hamiltonian shortest path algorithm. The Hamiltonian shortest path is an easy way to increase the number of weight reuse operations. However, for a convolutional layer containing many channels, this technique requires a large number of flip-flops and significantly increases the delay. In general, using a Hamiltonian path makes the output of each channel depend on the output of the previous channel, except for the first output.
As a result, many registers have to be added to synchronize with subsequent layers, which increases the initial delay and requires more hardware resources. For example, when a Hamiltonian graph is applied to K output channels, the number of flip-flops used for synchronization is determined by the following equation.

K×⌈K/m−1⌉×bitwidth   [Equation 14]

Here, m is the number of output channels calculated within the same clock period, and bitwidth is the width of the data used to store the output of the pop-count instruction operation. The above-mentioned disadvantage will be described with reference to FIG. 14A. FIGS. 14A and 14B are schematic diagrams illustrating the different mechanisms of two techniques, that is, the Hamiltonian shortest path technique and the K-mean cluster technique. Specifically, FIG. 14A is a schematic diagram illustrating the sequential process of performing an XNOR operation and a pop-count instruction in the second block 120 for a convolution layer when the Hamiltonian shortest path technique is used, and FIG. 14B is a schematic diagram illustrating the process of simultaneously performing the XNOR operation and the pop-count instruction in the second block 120 for the convolution layer when the K-mean cluster technique is used. Since a practical algorithm for finding the Hamiltonian shortest path does not exist once the number of vertices is considered, it is very interesting to investigate this matter. For example, the study of finding the Hamiltonian shortest path for a fully connected graph with N vertices is worth the challenge. Two approaches are under discussion for finding the shortest Hamiltonian cycle and thereby the Hamiltonian shortest path. The first is the "exact solution" used to accurately find the shortest Hamiltonian cycle by reducing the number of searches for the Hamiltonian cycle. However, the "exact solution" consumes a lot of time and effort in calculating the final result for a large graph.
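The synchronization cost of Equation 14 can be checked numerically with a small helper (a sketch; the ⌈K/m − 1⌉ reading of the ceiling term is an assumption):

```python
import math

def hamiltonian_sync_flipflops(K, m, bitwidth):
    # Equation 14: flip-flops required to keep K chained output channels
    # synchronized when m channels are computed per clock period.
    return K * math.ceil(K / m - 1) * bitwidth
```

For example, 64 output channels with m = 4 and a 16-bit pop-count output would already require 64×15×16 = 15,360 flip-flops, illustrating why the chained Hamiltonian scheme scales poorly.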
In general, as the number of vertices increases, the processing time increases exponentially, e.g., as N^2·2^N. The second approach is an approximation algorithm that is more common for large graphs. In order to solve the above problem, a partitioning technique that divides the entire graph into a small number of sub-graphs has been discussed. The number of vertices in a sub-graph is limited to 64. However, this has the disadvantage of increasing the number of output channels implemented in the hardware design. Furthermore, the number of sub-graphs depends on the limited number of vertices (i.e., 64) in each sub-graph. Therefore, as the number of output channels increases, more hardware resources are required to implement the output channels, and the power consumption increases. Therefore, an example of the present disclosure similarly uses a graph, but suggests an alternative way of improvement. According to the proposed scheme, it is assumed that each set of (M×M×C) binary weight values represents a vertex, and the number of bits that differ between two sets is the distance of the edge connecting the two vertices. In order to partition the graph, a K-mean cluster algorithm can be used for every R (i.e., the number of output channels) from 1 to K. The optimal R value gives the smallest number of binarized bits used to produce the results of all output channels. This allows all repeated binarized bits to be removed. The proposed method uses a K-mean cluster. This is shown in FIG. 14B. The equation for finding the optimal R value is as follows.

R = argmin_R ( Σ_{i=1}^{R} Σ_{j=1}^{m_i} Dist_ij + R×C×M×M )   [Equation 15]

where R is the number of sub-graphs, m_i represents the number of vertices with the same center, Dist_ij is the distance connecting the center point i and the vertex j, and (R×C×M×M) represents the total number of bits in the R output channels. In the proposed method, a K-mean cluster is used to find the R groups of vertices and the corresponding centroids.
Theoretically, the output of the K-mean cluster algorithm contains R sub-graphs, where R denotes the number of initial centroids, with the corresponding centroids determined from the coordinates of all vertices. In the first process, each vertex is grouped so that the distance from each vertex to the center of its group is the shortest. In the second process, the center of a new group in each group of vertices is selected through the following equation.

M_i = (Σ_{j=1}^{m_i} x_j) / m_i   [Equation 16]

where M_i is the center of the i-th group and x_j is the coordinate of the j-th vertex. The above two steps can be repeated so that the sum of all distances from all vertices to their centers is minimized. However, only the distance between two actual vertices is valid information. Therefore, in the second process above, only the vertex having the shortest sum of distances to all vertices in the group can be selected. On the other hand, the K-mean cluster has the following limitation: different initial R centroids make the partitioning result different. Moreover, calculating all of the R initial center points and R values from 1 to K (i.e., the number of output channels) wastes a very long time for a layer with a large number of output channels. For example, when a layer includes K output channels, the total number of cases to be analyzed is as follows.

Number_of_cases = Σ_{i=1}^{K−1} C(K,i) = 2^K − 2   [Equation 17]

When the number of search cases is 100,000 or more, in order to reduce the number of cases, K-mean++ may be used to initialize the first R centroids. In addition, in order to make the output result more accurate, the second centroid for the number of all cases is computed, and an optimal value can be selected. In addition, when one layer has K output channels and the number of clusters varies from 1 to K, the total number of cases may be K^2, which is smaller than the number of cases (i.e., 2^K−2) of the basic K-mean algorithm (here K>5).
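The medoid-style K-mean grouping described above (centers restricted to actual vertices, Hamming distance as the edge metric) can be sketched as follows; the function names and the fixed iteration count are illustrative assumptions:

```python
import random

def hamming(a, b):
    # Edge distance: number of differing bits between two weight sets.
    return sum(x != y for x, y in zip(a, b))

def kmean_partition(vertices, R, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(vertices, R)
    groups = [[] for _ in range(R)]
    for _ in range(iters):
        # Step 1: assign each vertex to its nearest center.
        groups = [[] for _ in range(R)]
        for v in vertices:
            groups[min(range(R), key=lambda i: hamming(v, centers[i]))].append(v)
        # Step 2: pick as new center the vertex with the smallest sum of
        # distances to the rest of its group (a medoid, always a vertex).
        centers = [min(g, key=lambda v: sum(hamming(v, u) for u in g)) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

def partition_cost(groups, centers, C, M):
    # Equation 15: intra-group distances plus R*C*M*M stored bits.
    intra = sum(hamming(v, c) for g, c in zip(groups, centers) for v in g)
    return intra + len(centers) * C * M * M
```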
3) MAC Compression

FIG. 15 is a graph illustrating pop-count compression with a 6:3 adder. FIG. 16 is a graph showing pop-count compression with a 3:2 adder. FIG. 17 is a graph showing the power consumption ratio of the BNN architecture proposed in the present disclosure as a percentage when using the Cifar-10 data set. FIG. 18 is a graph showing the area of the BNN architecture proposed in the present disclosure as a usage ratio in the case of using the Cifar-10 data set. In order to further optimize the MAC operation, which consumes most of the hardware resources and power in the BNN architecture as shown in FIGS. 17 and 18, two compression techniques may be applied to the pop-count instruction. First, as shown in FIG. 15, 6:3 compression may be applied by adding a 6:3 adder to the adder tree 125 in order to reduce the number of LUTs. Each bit of the output result, from the least significant bit (LSB) to the most significant bit (MSB), can be sequentially calculated using a 6:3 adder. In this way, too many bits can be prevented from being input to the adder, and hardware resources can be saved. Similarly, in an automated hardware implementation, 3:2 adder compression may be provided within the adder tree 125 as shown in FIG. 16. Based on the input bit width of the pop-count instruction operation, 3:2 compression or 6:3 compression can be selected and applied to the adder tree. The table below shows that 3:2 compression uses the fewest resources (7.5% fewer LUTs for the MNIST model and 9.5% fewer for the Cifar-10 model) and that 6:3 compression consumes the least power (6.7% less for the MNIST model and 11.5% less for the Cifar-10 model). Table 5 below shows the hardware resources and power consumption when three options (no compression, 3:2 compression, and 6:3 compression) are applied to the adder tree.
TABLE 5
(Number of windows = 1)
                          Look-up tables   Flip-flops   Power
MNIST     Non-compress    10,527           5,923        0.428
(100 MHz) Compress 3:2    9,740            5,723        0.413
          Compress 6:3    10,310           5,720        0.399
CIFAR-10  Non-compress    311,546          38,571       6.598
(50 MHz)  Compress 3:2    281,861          38,566       6.256
          Compress 6:3    290,600          38,530       5.837

FIG. 19A shows the MAC operation process when the pop-count instruction is not reused, and FIG. 19B shows the MAC operation process when the pop-count instruction is reused. An example of the present disclosure proposes a reuse technique for the pop-count instruction in order to significantly save hardware resources. If the pop-count instruction is not reused as shown in FIG. 19A, K pop-count instruction executions are implemented for K output channels, whereas the pop-count instruction is reused as shown in FIG. 19B. In this case, the number of pop-count instruction executions can be reduced by a factor of X, where X is the number of output channels sharing the same pop-count instruction. To maintain the sustainability of the streaming architecture, the clock source used by the pop-count instruction can be X times faster than the clock source used elsewhere. The value of X may be determined based on the required performance. Increasing the value of X reduces the hardware overhead but degrades performance; on the other hand, if X is reduced, the hardware overhead and the performance both increase.

III-4. Architecture's Running Time

As described in Section III-1, the architectural design proposed in this specification is a structure having pipeline stages equal to the number of layers. By overlapping the stages, the performance and the initial pipeline filling time can be dramatically improved. In particular, the convolution line buffer (CLB) of a specific layer may generate window values after the number of clock cycles given by the following equation.

Nf = E×(M−⌊M/2⌋−1) + ⌊M/2⌋ + N   [Equation 18]

Moreover, by unrolling (or expanding) the loops and applying the pipeline, the multiplication and pop-count instruction execution module may only require a typical number of clock cycles (i.e., Np).
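Equation 18 can be evaluated with a trivial helper (illustrative only; ⌊M/2⌋ is written as integer division):

```python
def clb_first_window_latency(E, M, N):
    # Equation 18: N_f clock cycles before a CLB emits its first windows.
    return E * (M - M // 2 - 1) + M // 2 + N
```

For the MNIST case (E = 28, M = 3, N = 1) this gives 30 cycles, i.e., roughly one image row plus the window half-width.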
Based on the number of output channels and the input bit width, the number of clock cycles can be modified so as to achieve the highest frequency and to meet the timing requirements. Accordingly, as shown in FIG. 20, the operation of a subsequent layer may be performed Nf+Np clock cycles later than the start time of a specific layer. FIG. 20 is a schematic diagram illustrating the processing time of the architecture proposed in the present disclosure. For each pooling layer, E+N clock cycles may be needed to produce an output (the window size is 2×2, and the number of simultaneous inputs is N). Therefore, the subsequent convolutional layer should wait for E+N cycles after the current pooling layer. In terms of the fully connected layer, Nfc clock cycles are required to receive the first data and generate a second temporary maximum value of the 10 output channels. The number of clock cycles may be flexibly changed according to the required frequency. In particular, in an experiment using a frequency of 300 MHz, it was confirmed that the fully connected layer requires 3 clock cycles to find the maximum value from the 10 temporary output values. Since the input data are continuously fed into the proposed BNN hardware accelerator (i.e., NPU), when loop 2 is not unrolled, the sum of the initial time delays in all layers (convolutional layers, pooling layers, and the fully connected layer) may be determined by E*F clock cycles in order to process one inference operation. For the case of unrolling loop 2, the number of clock cycles required can be reduced by N times (where N is the number of windows in the CLB). Consequently, (E×F/N) clock cycles are required to classify one input image.

IV. EXPERIMENT RESULT

IV-1. BNN Model Analysis

In order to explore the potential model architecture space, to obtain the optimal BNN model, and to make the software model compatible with the proposed optimization techniques with improved accuracy, some training conditions are required for all models.
In particular, a batch-normalization operation is added after each convolutional layer, and the maximum pooling layer may be placed after the batch-normalization operation of the second convolutional layer. For models using the MNIST data set, the binary search can be applied layer by layer. The initial inputs for performing training on the MNIST data set may be as follows.
1) The range of the number of layers in the BNN model: L={3; 4; 5}
2) Maximum number of channels per layer: Ci≤50
3) Target accuracy threshold
Until a BNN model with the minimum number of channels is found for all layers, a binary search can be used for each L value to uniformly reduce the number of channels in each layer based on the above three inputs. Next, the binary search can be used again to minimize the number of channels of each particular layer based on the above model. As a result, an optimal BNN model corresponding to a specific L value can be determined. Each model may have a variable number of layers represented by the elements of the set L. Therefore, the number of output models is the size of the set L. Moreover, in each initial BNN model, if only the number of channels in each layer is optimized, all other components of the network architecture can be independently predefined to reduce the search space. In terms of the training environment, a two-stage optimizer can be utilized: the adaptive moment estimation (Adam) optimizer for the first 30 epochs and the stochastic gradient descent (SGD) optimizer for the remaining 70 epochs. Here, the learning rate may be set to 0.03 and the momentum to 0.5. For models using the Cifar-10 data set, some training conditions can be changed based on the model structure to be compatible with the proposed hardware architecture. In particular, padding with a value of −1 may be added for each convolutional layer to improve the accuracy with only a small number of channels.
In addition, the output feature map of the last convolutional layer is guaranteed to have 1×1 dimensions, which makes it possible to apply the MAC optimization method to the fully connected layer. For the training environment, the Adam optimizer can be used with 500 epochs. The learning rate was 0.005 for the first 40 epochs, 0.0001 for the 80th epoch, 5e-05 (i.e., 5×10^−5) for the 120th epoch, and 5e-06 (i.e., 5×10^−6) for the 160th epoch. The present disclosure finds one model for the Cifar-10 data set and two models for the MNIST data set using the aforementioned approach. The first model is for the Cifar-10 data set. Using these models, the effectiveness of the proposed architecture can be demonstrated. First, for the MNIST data set, the BNN model can be simplified in terms of hardware implementation when the target accuracy is set to 98.4% or higher in the first model optimization for MNIST. This model is defined as MD1. On the other hand, as a result of exploring many BNN models with various configurations to find a very efficient BNN model with reasonable accuracy in hardware implementation, an efficient second model with an accuracy of 97.7% was found. This second model is defined as MD2. As a result of performing the architecture search, two optimal models were found for the two respective accuracy thresholds of 98.4% and 97.7%. For the 98.4% model, according to Table 6, it can be seen that the model with three convolutional layers has the shortest inference latency compared to the other models with the same accuracy because it has the smallest number of layers. It can also be seen that this three-layer model shows the best results in terms of hardware resources. Therefore, this model can be selected as the MD1 model. Similarly, the MD2 model can be found among many candidates with similar accuracy by considering the required hardware resources and the corresponding accuracy.
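The channel-minimizing binary search used in the model exploration above can be sketched as follows; `accuracy_of` stands for a hypothetical train-and-evaluate callback and is assumed monotone in the channel count:

```python
def min_channels(accuracy_of, target, lo=1, hi=50):
    # Smallest channel count whose (assumed monotone) accuracy still
    # meets the target threshold -- a standard lower-bound binary search.
    while lo < hi:
        mid = (lo + hi) // 2
        if accuracy_of(mid) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

The same search can be run once per layer to shrink each layer individually after the uniform pass, consistent with the two-stage procedure described above.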
In summary, both models have 3 convolutional layers and 1 fully connected layer. The MD1 model contains 26 channels for the first convolutional layer, 24 channels for the second layer, and 31 channels for the last convolutional layer. The MD2 model has 17 channels for the first convolutional layer, 15 channels for the second layer, and 22 channels for the last convolutional layer. A batch-normalization function is applied after each convolutional layer of the two models, and max-pooling can be applied to the last two convolutional layers. Finally, as mentioned above, both models use 16-bit fixed-point input pixels and binary weights for the first convolution. The weights and input feature maps are binarized from the second layer. Second, in the case of Cifar-10, a model with an accuracy of 80.2% was found. Here, six convolutional layers (size 64, size 96, size 96, size 128, size 192) and two fully connected layers (size 256, size 10) may be disposed at the end. A max-pooling layer can be added, after connecting a batch-normalization operation after each layer, from the second convolutional layer to the last convolutional layer.

TABLE 6
Output channels
Layer       5 Layers   4 Layers   3 Layers
1           19         24         26
2           20         22         24
3           20         23         31
4           20         22         —
5           19         —          —
LUTs        19,954     21,737     19,211
Flip-flops  10,026     9,830      9,104

Table 6 above compares the hardware resource usage of the optimal models.

IV-2. Automated Hardware Implementation and Validation Process

Needless to say, designing a hardware accelerator (i.e., NPU) for each model is time-consuming, labor-intensive, and error-prone. FIG. 21 is a schematic flowchart illustrating an automated implementation process of hardware based on specialized modules and parameter extraction. The present disclosure proposes a hardware implementation framework that automates the hardware architecture creation at the register transfer level (RTL) based on user constraints on the BNN model.
Scripts can be used to automatically generate RTL designs according to user-specified constraints. All parameters are divided into two sets (a set of module parameters and a set of general parameters). The proposed hardware architecture may include hardware modules specialized for general functions such as batch-normalization, CLB, multiplication, pop-count instructions, PLB, pooling, and the like. To create an RTL design, a generic structure (a schematic structure) can first be defined based on a script using the set of general parameters. Here, the design may be determined according to the number and location of the specific modules. Next, the module parameter sets can be used to set all input module parameters for each generic module at a specific location in the architecture. Finally, all configured hardware modules can be connected via the script to automatically generate the entire RTL design. In FIG. 21, all parameters of each module are described and a general parameter set is shown. FIG. 22 is a schematic flowchart illustrating the verification process of a hardware implementation. As can be seen with reference to FIG. 22, in order to verify that the implemented hardware accelerator is equivalent to the BNN model in the software implementation, the proposed architecture was verified for various BNN models with various layers, channels, and accuracies. First, a C/C++ model is created based on the parameters and model structure of the PyTorch model (S2201). Each layer of the PyTorch model is created as a C/C++ function (S2203). The output of each layer can be compared between the C/C++ model and the PyTorch model. After creating the C/C++ model, a series of C/C++ models corresponding to different channels and numbers of layers were prepared. Second, each hardware accelerator is implemented with an automatic script (S2205).
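The script-driven generation step can be illustrated with a toy template expander; the module and port names below are hypothetical and do not come from the actual framework:

```python
def emit_conv_instance(idx, in_ch, out_ch, M=3):
    # Fill a Verilog-style instantiation template from one entry of a
    # per-layer module parameter set (all names are illustrative only).
    return (f"conv_layer #(.C({in_ch}), .K({out_ch}), .M({M})) "
            f"u_conv{idx} (.clk(clk), .in(l{idx}_in), .out(l{idx}_out));")
```

A full generator would emit one such instance per layer from the general parameter set and then wire the instances together, mirroring the connect-and-generate flow described above.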
Next, using Synopsys™' VCS simulation tool, the waveform of each data path is precisely verified by comparing the results with the corresponding C/C++ model. Finally, the implemented accelerator is ported to the FPGA (S2207), and the operation of the hardware accelerator is verified using the C/C++ model. The VCS simulation results were verified bit by bit in the data path through the integrated logic analyzer (ILA) provided by Xilinx™ FPGAs. After training using this automated process, hardware accelerators corresponding to the updated software model can be implemented immediately. As a result, manual labor can be eliminated from the hardware design stage to the verification stage for the target application.

IV-3. Hardware Implementation Experiment

To evaluate all model features, an RTL specification is generated using an automation script, based on the proposed hardware architecture, the input BNN model structure, and the user-specified design parameters. Regarding the hardware device, the proposed architecture was implemented on Xilinx™'s Ultra96 evaluation board with an UltraScale+ MPSoC. In particular, a quad-core Arm Cortex-A53 application processing unit (APU) and a dual-core Arm Cortex-R5 real-time processing unit are mounted on the processing subsystem (PS). The programmable logic (PL) component consists of 141,120 flip-flops, 70,560 look-up tables (LUTs), 360 DSP slices, and 7.6 Mbits of block RAM. As described above, simulations were performed on the RTL design generated using Synopsys VCS, and the image classification results were compared with the output of the bit-true C++ model for the input BNN model. In the next step, the proposed design was synthesized and implemented using Vivado™ 2018.3. All experimental results describing the number of LUTs, the number of flip-flops, and the expected power consumption were collected from Vivado's reports.
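The layer-by-layer equivalence check described above (PyTorch vs. C/C++ vs. RTL) amounts to finding the first layer whose outputs differ bit for bit. A minimal sketch of such a harness, with hypothetical function and layer names:

```python
import numpy as np

def compare_layerwise(ref_outputs, dut_outputs, names):
    """Return the name of the first layer whose outputs mismatch
    bit-for-bit between reference and device-under-test, else None."""
    for name, a, b in zip(names, ref_outputs, dut_outputs):
        if not np.array_equal(a, b):
            return name
    return None

ref = [np.array([1, 0, 1]), np.array([0, 1])]
dut = [np.array([1, 0, 1]), np.array([0, 0])]   # injected mismatch in layer 2
assert compare_layerwise(ref, dut, ["conv1", "conv2"]) == "conv2"
assert compare_layerwise(ref, ref, ["conv1", "conv2"]) is None
```

Localizing the first mismatching layer is what makes the bit-true comparison practical: a failure points at one module rather than the whole pipeline.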
In particular, in order to estimate the power efficiency of the BNN core, the power consumption was collected only from the PL part of the chip, which is composed of FPGA logic gates. The software-based model and the implemented hardware accelerator were compared bit by bit. In particular, the functionality of the FPGA bitstream was fully verified against the 10,000 images in the data set. On the PS side, the host C code running on the ARM processor performs two tasks. First, it sets up and runs direct memory access (DMA) to transfer the test images frame by frame from DRAM to the hardware accelerator, and to transfer the classification results from the hardware accelerator back to DRAM. Next, after the arrival of the last result, all received classification results are compared with the known output of the C/C++ model.

FIG. 23 is an exemplary diagram illustrating an example of a system architecture for a BNN.

FIG. 23 shows a programmable logic (PL) and a processing system (PS). The PL includes a BNN dedicated accelerator (i.e., a BNN dedicated NPU) 100 and a DMA 300. The PS includes an ARM processor and a DDR controller. The PS may be connected to the BNN dedicated accelerator (i.e., the BNN dedicated NPU) 100 through the DMA 300. The DDR controller may communicate with the DMA 300 through the AXI-4 bus, while the ARM processor may communicate with the DMA through the AXI-Lite bus.

IV-4. Experimental Evaluation

To estimate the efficiency of the proposed architecture, a series of experiments corresponding to various parameter sets and goals were performed. In particular, five factors were investigated: clock speed, unrolling level of loop 2, the MAC optimization method, the MAC compression method, and classification accuracy.

FIG. 24 is an exemplary graph illustrating power efficiency and frequency effects at different loop-2 unrolling levels.

First, MD1 models were synthesized with different frequency values of 100, 150, 200, 250 and 300 MHz.
Checking the results shows that the hardware resources are hardly affected by the operating frequency. In contrast, higher frame rates (frames per second) increase the power consumption of the hardware implementations. Specifically, according to FIG. 24, it is worth noting that the FPS/W ratio steadily increases with the clock frequency for all loop-2 unrolling levels. This indicates that, for the proposed architecture, the image classification speed increases faster than the power consumption, resulting in better power efficiency at higher frequencies.

TABLE 7

                            Number of windows = 1 (A)       Number of windows = 2 (A)       Number of windows = 4 (A)
                            (FPS = 3.83 × 10^5 (B))         (FPS = 7.65 × 10^5 (B))         (FPS = 1.53 × 10^6 (B))
Model  Accuracy  HW index   No reuse   Reuse weight         No reuse   Reuse weight         No reuse   Reuse weight
MD1    98.40     LUTs       19,211     10,503 (54.67%)      28,534     15,460 (43.63%)      54,595     29,156 (53.4%)
                 FFs        9,104      6,023 (66.16%)       12,910     8,795 (68.12%)       23,080     15,341 (66.4%)
                 Power (W)  1.126      0.676 (60%)          1.671      0.973 (58.22%)       3.332      1.735 (52.07%)
MD2    97.70     LUTs       10,891     6,144 (56.4%)        15,470     8,370 (54.1%)        29,115     15,529 (53.34%)
                 FFs        6,394      4,516 (70.6%)        8,795      5,961 (67.77%)       15,404     10,058 (65.3%)
                 Power (W)  0.705      0.47 (66.67%)        0.965      0.607 (62.9%)        1.725      0.938 (54.4%)

(A) The number of windows is the number of windows generated by the CLB of the first convolution layer.
(B) If the number of windows increases N times, the frames per second (FPS) also increases N times.

The table above shows the effect of loop unrolling, accuracy, and the weight reuse method on each hardware index. The frequency is 300 MHz. Second, the same MD1 models with different values of N (1, 2, and 4) were also synthesized to study the effect of the loop-2 unrolling factor. An N-fold increase in N also increases the overall system throughput N-fold. For N=1, the frame rate of the accelerator is 3.83×10^5, and for N=2 and N=4, the frame rate increases to 7.65×10^5 and 1.53×10^6, respectively.
On the other hand, according to the results of Table 7, when N>1, hardware resources and power consumption increase much less than N times compared to N=1. In general, the number of LUTs (look-up tables) used for MD1 with N=1 without weight reuse is 19,211, whereas the number used for N=2 is 28,534 (1.48×), and the number used for N=4 is 54,595 (2.8×). The use of flip-flops (FFs) gives equally impressive numbers when compared to N=1: for N=2 it is 12,910 (1.42×), and for N=4 it is 23,080 (2.53×). In MD2, similar results were obtained regardless of whether or not weights were reused. Regarding power, FIG. 24 shows the efficiency improvement when increasing the value of N for the MD1 model with weight reuse enabled. According to the graph, if N is increased from 1 to 4, the FPS/W ratio is doubled. Moreover, the graph indicates that the higher the degree of parallelism, the better the power efficiency, without changing the frequency. Therefore, the proposed architecture maximizes the frequency and the parallelism level to obtain the highest efficiency. Next, in order to confirm the effect of applying the MAC optimization method, both MD1 and MD2 models, with and without weight reuse, were synthesized and tested at 300 MHz. The results in Table 7 show that hardware resources and power consumption are significantly reduced when kernel weights are reused to eliminate redundant calculations. In general, when the MAC optimization method is enabled, the number of LUTs is in a range from 53% to 56% of that of designs without MAC optimization; the number of FFs is reduced by about 30% to 35%, and the power consumption is also reduced by about 35% to 48%, depending on the model size and the level of loop unrolling.
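The sub-linear resource growth claimed above can be checked directly from the Table 7 numbers (LUT counts for MD1 without weight reuse; the quoted 1.48× and 2.8× figures differ from the ratios below only by rounding):

```python
# LUT counts for MD1 without weight reuse at N = 1, 2, 4 (from Table 7)
luts = {1: 19_211, 2: 28_534, 4: 54_595}

# Throughput scales exactly N-fold (one extra window per unroll step),
# while the LUT count grows strictly slower than N-fold:
for n in (2, 4):
    assert luts[n] / luts[1] < n

print(round(luts[2] / luts[1], 2), round(luts[4] / luts[1], 2))  # ~1.49 and ~2.84
```

This is the arithmetic behind the FPS/W improvement: the numerator (FPS) scales with N while the denominator (resources, hence power) scales slower.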
On the other hand, if the results are analyzed in the horizontal direction (same model, different numbers of windows) and in the vertical direction (same N value, different model sizes), two observations follow: (i) the hardware resource and power consumption improvements tend to be higher for models with more channels, and (ii) the same trend holds when increasing the level of parallelism. In terms of the comparison between MD1 and MD2, the number of LUTs used in MD2 is between 1.7 and 1.9 times less than that required by MD1 across the values of N, and the FF usage and power consumption are also reduced by 1.3-1.5 times and 1.4-1.9 times, respectively. The classification accuracy is reduced by only a small margin of 0.7%. In order to find an optimal solution, a level of accuracy that produces an appropriately efficient hardware implementation can be defined. After all investigations, it can be concluded that, for a model with a given accuracy, the proposed architecture can be utilized most effectively by running at the highest frequency and parallelism level.

IV-5. Comparison with the Conventional Art

In this section, the most favorable results of the proposed architecture are compared with the conventional arts using both the MNIST and Cifar-10 data sets. In the case of the MNIST data set, the two models MD1 and MD2, implemented with the MAC optimization method either with or without pop-count compression, were selected and are shown in Table 8 in comparison with the conventional arts. The results confirm that the pop-count compression method at 300 MHz can make the design smaller, with fewer LUTs and FFs. However, power consumption was found to be higher compared to the model implemented without pop-count compression. The energy difference with and without pop-count compression is reduced for models with low or large frequencies.
TABLE 8

Model     Accuracy  Platform        Freq. (MHz)  LUTs    FFs     BRAM    DSP  Frame rate (×10^3 FPS)  Power (W)  GOP/s   Power eff. (FPS/W)
MD1 (a)   98.4%     Ultra96         300          15,460  8,795   0       0    765                     1.004      18,330  761,952
MD1 (b)   98.4%     Ultra96         300          14,096  8,923   0       0    765                     0.973      18,330  786,228
MD1 (c)   98.4%     Ultra96         300          29,156  15,404  0       0    1,530.6                 1.735      18,330  882,190
MD1 (d)   98.4%     Ultra96         300          26,780  15,238  0       0    1,530.6                 1.795      18,330  852,702
MD2 (c)   97.7%     Ultra96         300          15,529  10,058  0       0    1,530.6                 0.938      7,647   1,631,769
MD2 (d)   97.7%     Ultra96         300          14,361  10,513  0       0    1,530.6                 0.977      7,647   1,566,623
FINN      98.4%     ZC706           200          82,988  —       14,256  —    1,561                   —          9,086   —
FINN-R    97.69%    Ultra96         300          38,205  —       7,560   —    847.5                   —          5,110   —
BNN-PYNQ  98.4%     Ultra96         300          26,809  30,947  3,960   4    356.6                   1.334      2,150   267,342
FP-BNN    98.24%    Stratix V       150          —       —       44,200  20   —                       —          5,904   —
Re-BNet   98.29%    Spartan XC7S50  200          25,600  34,230  87      150  330                     —          —       —

(a) The number of windows is two, without pop-count compression.
(b) The number of windows is two, with pop-count compression.
(c) The number of windows is four, without pop-count compression.
(d) The number of windows is four, with pop-count compression.

The table above shows the performance of the proposed architecture on the MNIST data set compared with previous work. Using the MNIST data set and binary weights, five architectures that provide competitive performance were selected. The first is FINN, which is a hardware implementation of BNN models. FINN implements an MLP model that contains three fully connected layers and takes a 28×28 binary image as input. FINN has the fastest image classification rate (1,561k FPS) on the MNIST data set. The second is the reference FINN-R, which is a kind of MLP model. This model is less accurate, but uses much fewer hardware resources. BNN-PYNQ is the latest version of FINN among Xilinx™'s open-source projects published on GitHub. For comparison, the project was downloaded and synthesized to reproduce it on the mentioned hardware. This model has the same accuracy as FINN, but the architecture includes four fully connected layers.
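The rightmost column of Table 8 is simply the frame rate divided by the measured power. This can be checked directly from the table values (the reported figures appear to be truncated to whole FPS/W, so integer truncation is used below):

```python
# Frame rate (kFPS) and power (W) for two rows of Table 8.
rows = {
    "MD1a": (765.0, 1.004),
    "MD2c": (1530.6, 0.938),
}
# Power efficiency in FPS/W, truncated to an integer as in the table.
eff = {m: int(kfps * 1e3 / p) for m, (kfps, p) in rows.items()}
assert eff["MD1a"] == 761_952
assert eff["MD2c"] == 1_631_769
```

The same arithmetic reproduces the remaining FPS/W entries for the MD1/MD2 rows.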
In addition, compared to FINN, this model uses significantly fewer resources but offers much lower performance (356.6k FPS). The FP-BNN model also uses four fully connected layers to classify the MNIST data set. The FP-BNN model uses Intel Altera™'s Stratix V and uses a compression tree to optimize the pop-count operation. The last is Re-BNet, an improved version of FINN. This model shows its efficiency by maintaining 98.29% accuracy while requiring only 25,600 LUTs of hardware resources, which is much smaller than the original FINN. Table 8 shows all references and the overall configuration and hardware implementation results of the two models presented in this disclosure. The hardware implementation based on the proposed architecture provides the smallest area among all related work in terms of hardware utilization. Compared to BNN-PYNQ, which is Xilinx™'s lightest architecture, the MD1 model with two windows presented in this disclosure consumes 1.84 times fewer LUTs and 3.77 times fewer FFs, while achieving a 2.14 times higher frame rate and 1.36 times lower power consumption. Even in the case of four windows, the MD1 model presented in the present disclosure used fewer resources than BNN-PYNQ: it had slightly more LUTs but 2.1 times fewer FFs, and still achieved a 4.3 times higher frame rate while maintaining the same accuracy. Compared to the original FINN, the four-window MD1 used 3× fewer LUTs while achieving 98% of FINN's frame rate. On the other hand, the smaller model MD2 can provide decent accuracy like FINN-R, but uses 2.4× fewer LUTs and produces 1.8× higher frame rates when running at the same clock speed. Unlike all other architectures, both MD1 and MD2 were able to completely eliminate the use of on-chip memory devices and DSP slices, resulting in significant power consumption improvements.
As described above, the power efficiency of the architecture presented herein can be maximized when both the clock speed and the loop unrolling level are increased. MD1 and MD2 using 4 windows at 300 MHz and N=4 can deliver 3.8× and 6.1× higher FPS/W, respectively, compared to BNN-PYNQ. Although not all configurations are listed in Table 8, both models can be configured with a lower value of N, provided that frame rate is not prioritized as highly as hardware resources.

TABLE 9

Cifar-10      Freq. (MHz)  LUTs    Acc. (%)  kFPS   Area eff. (FPS/LUTs)
Ours (X = 1)  210          290.0   80.2      205    0.707
Ours (X = 1)  177          281.5   80.2      173    0.614
Ours (X = 2)  150          232.2   80.2      146    0.630
Ours (X = 4)  75           156.3   80.2      73     0.468
FINN          200          46.25   80.1      13     0.280
FINN          125          365.9   80.1      125    0.340
FINN-R        237          332.6   80.1      102    0.306
FINN-R        300          41.73   80.1      19.5   0.467
FBNA          —            26.90   88.6      0.5    0.02
ReBNet        200          53.20   80.5      6      0.11
FINN-         300          25.43   80.1      1.9    0.074

The table above shows the efficiency of the architecture presented in the present disclosure compared to the conventional art. For the Cifar-10 data set, this section presents four architectures with different X values. When X=1, the proposed architecture can be implemented at 210 MHz and 177 MHz. Based on the results, it can be concluded that designing the architecture at the maximum frequency increases the area efficiency. When X=2, the frequency used for the MAC operation is 300 MHz, and the rest operates at 150 MHz. In this case, compared to the case of X=1, the number of LUTs could be reduced by 18% to 20%. At X=4, the MAC operation continues at 300 MHz and the rest runs at 75 MHz. The hardware overhead was reduced by 32% and 46% compared to X=2 and X=1, respectively. To evaluate the area efficiency, the proposed design was compared with the conventional art using the FPS/LUTs ratio, as shown in Table 9. The proposed design could provide better area efficiency than all previous designs. In particular, it can be seen that the area efficiency of the proposed design when X=1 (0.707) is 1.5 times higher than that of the previous best design (0.467).
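The area-efficiency comparison above is a plain ratio of the Table 9 columns; the values below are taken as reconstructed from that table (the FINN-R kFPS entry is partly garbled in the source and is inferred here from its reported 0.467 efficiency):

```python
rows = {  # model: (LUTs column value, kFPS) from Table 9
    "ours_x1_210mhz": (290.0, 205),
    "finn_r_300mhz":  (41.73, 19.5),
}
eff = {k: round(kfps / luts, 3) for k, (luts, kfps) in rows.items()}
assert eff["ours_x1_210mhz"] == 0.707
# The 1.5x advantage over the previous best design:
assert round(eff["ours_x1_210mhz"] / eff["finn_r_300mhz"], 1) == 1.5
```

Note that the absolute ratio depends on the (unstated) unit of the LUTs column; only the relative comparison between rows is meaningful.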
In terms of performance, the proposed design could provide an ultra-fast frame rate of 205,000 frames per second. In summary, based on the results of Tables 8 and 9, it can be seen that, for the MNIST and Cifar-10 data sets, the design proposed in this disclosure can provide much higher power and area efficiency than previous work. The main reason is the successful application of several new optimization methods based on the capabilities of streaming and binary architectures. In particular, unlike the conventional designs compared in Tables 8 and 9, all XNOR logic gates are removed or replaced with NOT gates (smaller than XNOR). As a result, the memory that stores the weight kernel values can also be eliminated. Therefore, in the design proposed in the present disclosure, the internal memory usage is zero, as shown in Table 8, whereas the conventional designs require a certain amount of memory (block RAM, BRAM). In addition, the design proposed in the present disclosure directly implements the MAC optimization method without additional resources. Also, in the proposed design, the line buffer does not store all the output feature maps, but only the data needed to feed the next layer. In this way, it uses much fewer hardware resources than the conventional designs. In addition, the pipeline unrolling method maximizes the utilization of the max-pooling layer with line buffers that support various parallelism levels, leading to the highest power and resource efficiency. More specifically, throughput can be increased by N times while the required hardware overhead grows much less than N times. Finally, the MAC compression technique helps to save a significant amount of hardware resources without affecting the performance of the proposed design.

V. CONCLUSION

Equipped with small-size parameters and low-cost computation, BNNs are well suited for implementation as hardware accelerators in Internet of Things (IoT) or edge applications.
The streaming architecture underlying the BNNs presented in the present disclosure employs various optimization techniques from both a hardware and an algorithm standpoint. The streaming architecture and unrolling mechanism enable high throughput, while the block-RAM (BRAM)-less architecture and the weight reuse method have the advantage of significantly reducing hardware resources and power consumption in the final routed implementation. In addition, the present disclosure presents an automated design generation flow for quickly implementing the optimal BNN model in an FPGA based on a user-defined BNN structure, to achieve the goal of maximizing throughput and minimizing power consumption. The architecture for BNNs presented in the present disclosure provides optimal performance in terms of balancing throughput and power efficiency without sacrificing inference accuracy. Due to its small area and low latency, the design presented in the present disclosure is one of the best candidates for IoT or edge applications where low power consumption and real-time response are demanded.

FIG. 25 is a schematic diagram illustrating a schematic architecture according to an example of the present disclosure.

Referring to FIG. 25, a schematic architecture 1000 may include a BNN dedicated accelerator (i.e., a BNN dedicated NPU) 100, a main memory (e.g., DDR memory) 200, and one or more direct memory access (DMA) units 300a and/or 300b. The BNN dedicated accelerator (i.e., BNN dedicated NPU) 100 may include a first block 110 for a first layer, a second block 120 for a second layer, a third block 130 for a third layer, and an internal memory (i.e., on-chip memory) 190. Although not shown in FIG. 25, the BNN dedicated accelerator (i.e., BNN-only NPU) 100 may further include a block 150 for the i-th layer and a block 170 for the n-th layer, as shown in FIG. 5. As such, the BNN dedicated accelerator (i.e., the BNN-only NPU) 100 according to an example of the present disclosure may include a dedicated block for each layer.
The internal memory (i.e., on-chip memory) 190 may store a first input feature map (shown as L1_INFMAP in FIG. 25) for the first layer and a first parameter (i.e., a first weight, shown as L1_weight in FIG. 25) for the first layer. Also, the internal memory (i.e., on-chip memory) 190 may store a second input feature map (shown as L2_INFMAP in FIG. 25) for the second layer and a second parameter (i.e., a second weight, shown as L2_weight in FIG. 25) for the second layer. The first parameter and the second parameter may be binarized values. Each of the first block 110, the second block 120, and the third block 130 may include one or a plurality of processing engines. The one or more processing engines may be connected in a streaming form, that is, in a pipeline form. Specifically, the one or more processing engines may be connected to each other in a pipeline structure based on a compiled BNN structure. The one or more processing engines in each block may fetch input feature maps and parameters from the internal memory (i.e., on-chip memory) 190 and perform the necessary operations. To this end, the one or more processing engines in each block may include a line buffer capable of temporarily storing the input feature map and the parameters. As described above, the line buffer may be a first type of line buffer (i.e., a CLB) or a second type of line buffer (i.e., a PLB). The size of each line buffer may be set based on the size of the corresponding binarized feature map and the corresponding binarized weight. The one or more processing engines may include an XNOR logic gate or a NOT logic gate, a circuit for the pop-count operation, a circuit for batch normalization, a circuit for binarization, and a circuit for pooling. The circuit for the pop-count operation may further include a compressor (e.g., a 6:3 compressor or a 3:2 compressor). Meanwhile, the pop-count operation may be reused as described above.
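The line-buffer idea described above (keep only the rows needed to form the next convolution windows, never the whole feature map) can be sketched behaviorally; this is an illustrative software model with hypothetical names, not the CLB's RTL:

```python
from collections import deque

def sliding_windows(rows, k=3):
    """Emit k x k windows as soon as k rows are buffered (line-buffer style).

    Only k rows are resident at any time; older rows are dropped
    automatically by the deque, mirroring a hardware line buffer.
    """
    buf = deque(maxlen=k)
    for row in rows:
        buf.append(row)
        if len(buf) == k:
            for c in range(len(row) - k + 1):
                yield [r[c:c + k] for r in buf]

rows = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12]]
wins = list(sliding_windows(rows))
assert len(wins) == 2            # two 3x3 windows across a 3x4 map
assert wins[0][0] == [1, 2, 3]   # top row of the first window
```

In hardware the same effect is achieved with k row buffers and shift registers, which is why the storage cost scales with the row width rather than the feature-map area.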
As shown in FIG. 6, the batch-normalization circuit may perform batch normalization based on a threshold value. The circuit for batch normalization may select a NOT logic gate or an XNOR gate based on the binarized value. The examples of the present disclosure disclosed herein and in the drawings merely provide specific examples for illustrative description and better understanding of the technical content of the present disclosure, and are not intended to limit the scope of the present disclosure. It will be apparent to those of ordinary skill in the art to which the present disclosure pertains that other modified examples based on the technical spirit of the disclosure can be implemented in addition to the examples disclosed herein.

[National R&D Project Supporting this Invention]
[Task Identification Number] 1711170668
[Task Number] 2022-0-00248-001
[Name of Ministry] Ministry of Science and ICT
[Name of Project Management (Specialized) Institution] Institute of Information & Communications Technology Planning & Evaluation
[Research Project Title] Development of Core Technology for PIM Artificial Intelligence Semiconductor (Design)
[Research Task Title] Development of CXL-based PIM semiconductor technology for multiple DRAM modules considering memory consistency
[Contribution Rate] 1/1
[Name of Organization Performing the Task] DeepX Co., Ltd.
[Research Period] 2022 Apr. 1˜2022 Dec. 31
11861487 | DETAILED DESCRIPTION

The present disclosure will be described below in further detail with reference to the accompanying drawings. This embodiment provides a low-power and compact neuron circuit implementing a ReLU activation function. Specifically, with reference to FIG. 2, the low-power and compact neuron circuit implementing a ReLU activation function of this embodiment includes a first-layer synaptic array 1, a neuron transistor 2, and a second-layer synaptic array 3, where the first-layer synaptic array 1 has a plurality of voltage output ends 1a, and the second-layer synaptic array 3 has a plurality of voltage input ends 3a; the neuron transistor 2 is a MOS transistor with an adjustable threshold voltage, and the neuron transistor 2 has a gate electrode g, a source electrode s, and a drain electrode d; the gate electrode g is connected to each voltage output end 1a of the first-layer synaptic array 1; the drain electrode d of the neuron transistor 2 is connected to each voltage input end 3a of the second-layer synaptic array 3. Herein, a single neuron transistor 2 serves as a neuron; for example, the neuron transistor 2 here may be an NMOS transistor or a PMOS transistor. In this embodiment, the voltage output value of each voltage output end 1a of the first-layer synaptic array 1 is denoted as X, the threshold voltage of the neuron transistor 2 is denoted as Vth, the gate voltage of the neuron transistor 2 is denoted as Vg, where Vg = X, and the voltage input value of each voltage input end 3a of the second-layer synaptic array 3 is denoted as Y. In the case where the voltage output value X is less than the threshold voltage Vth, that is, when Vg < Vth, the MOS transistor is not turned on, the neuron is not activated, and the output of the neuron is constant at 0; in the case where the voltage output value X is greater than or equal to the threshold voltage Vth, that is, Vg ≥ Vth, the MOS transistor is turned on, the neuron is activated, and the output of the neuron is X - Vth.
That is to say, for each neuron, each time the synaptic array at the upper layer outputs a voltage value X greater than the threshold voltage Vth, the corresponding output value of the neuron is X - Vth, and this output value is transmitted to the synaptic array at the next lower layer; the input-output behavior of the neuron satisfies the ReLU function (see FIG. 3). By adjusting the magnitude of the threshold voltage (Vth) of the transistor, it is possible to accommodate the decision computation and output for different synaptic array output values. Here, the ReLU function is as follows:

ReLU(X) = X - Vth   (X ≥ Vth)
ReLU(X) = 0         (X < Vth)

As an implementation, the above-mentioned adjustable threshold voltage of the MOS transistor may be achieved through ferroelectric polarization reversal of the gate electrode of a ferroelectric-polarized MOS transistor, or by changing the channel doping concentration, or by adjusting the channel ion implantation dose, or by adjusting the gate oxide thickness, or by a gate electrode having a volatile threshold switching property. Here, the above-mentioned low-power and compact neuron circuit implementing a ReLU activation function in use is shown in FIG. 4. The synaptic array is a 4×4 array, with the four synapses in each column connected to one neuron transistor. Herein, the 4×4 synaptic array is arranged between a word line decoder and a bit line decoder. For example, in this embodiment, the neuron transistor 2 in the low-power and compact neuron circuit implementing a ReLU activation function described above may also be a ferroelectric-polarized MOS transistor.
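The neuron's transfer characteristic given above is exactly a threshold-shifted ReLU; a one-line software model makes the two operating regions explicit (values chosen for illustration only):

```python
def relu_neuron(x, vth):
    """Neuron-transistor transfer function from the disclosure:
    0 below threshold (transistor off), X - Vth at or above it."""
    return x - vth if x >= vth else 0.0

vth = 0.5
assert relu_neuron(0.3, vth) == 0.0   # Vg < Vth: transistor off, neuron silent
assert relu_neuron(1.5, vth) == 1.0   # Vg >= Vth: output is X - Vth
```

Sweeping `vth` reproduces the effect of tuning the transistor's threshold voltage to match different synaptic-array output ranges.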
Herein, the ferroelectric-polarized MOS transistor may be prepared by: step S1, employing a conventional front-end process for preparing a CMOS transistor, forming a shallow trench isolation region 22 on a substrate 21, and isolating an active region by the shallow trench isolation region 22, where the substrate 21 is a silicon substrate, and the structure formed here is shown in FIG. 5; step S2, forming wells corresponding to each active region through ion implantation, where the NMOS features a P-well and the PMOS features an N-well; step S3, forming a gate pattern through photolithographic development, depositing a SiO2 layer 23 on an upper surface of the substrate 21, depositing a ferroelectric material layer 24 on an upper surface of the SiO2 layer 23, and then depositing a polysilicon layer 25 on an upper surface of the ferroelectric material layer 24, where the ferroelectric material layer 24 here is a HfZrO layer or a BTO layer, and the structure obtained in step S3 is shown in FIG. 6; step S4, etching the polysilicon layer 25, the ferroelectric material layer 24, and the SiO2 layer 23 on the basis of the gate pattern to form a gate structure, where the gate structure is shown in FIG. 7; and step S5, protecting the gate structure through sidewall masked isolation, and performing ion doping on both ends of the gate structure to form the source electrode and the drain electrode; a conventional CMOS back-end process is then employed to complete the MOS transistor, where the MOS transistor, after preparation, is the ferroelectric-polarized MOS transistor, a structure of which is shown in FIG. 8, in which the structure denoted by 26 is a gate protection sidewall and the region denoted by 27 is a source-drain doped region.
It should be noted that, in the case where the neuron transistor 2 is a ferroelectric-polarized MOS transistor, the resistance value at the drain electrode of the neuron transistor in the above-mentioned low-power and compact neuron circuit implementing a ReLU activation function is calculated as follows. Given that the resistance value of a resistor connected in series between the neuron transistor and the next layer of the array is R:

R = (Vg - Vth - Vd) / [β · Vd · (Vg - Vth - 0.5 Vd)];
β = (μ · W · Cox) / L;

where Vg is the gate voltage of the neuron transistor, Vth is the threshold voltage of the neuron transistor, Vd is the drain-source voltage of the neuron transistor, μ is the carrier mobility in the channel of the neuron transistor, W is the channel width of the neuron transistor, Cox is the gate oxide capacitance of the neuron transistor, and L is the channel length of the neuron transistor. As is well known to those skilled in the art, the gate oxide layer herein is an insulating medium between the gate electrode of the transistor and the silicon substrate, typically silicon dioxide or the like, used for insulation and for preventing leakage. Herein, in this embodiment, the drain-source voltage Vd of the neuron transistor 2 is 0.1 V, and the resistance value is approximately R = 1/(β · Vd). Although the preferred embodiments of the present disclosure are described in detail above, it is apparent that modifications and variations of the present disclosure will occur to those skilled in the art. It is intended that the present disclosure be construed as including all such modifications and variations insofar as they come within the scope of the spirit and principles of the present disclosure.
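The exact expression for R and the small-Vd simplification R ≈ 1/(β·Vd) quoted above can be compared numerically. The device parameters below (β, Vg, Vth) are hypothetical illustration values, not from the disclosure; only Vd = 0.1 V is taken from the text:

```python
def series_resistance(vg, vth, vd, beta):
    """R from the disclosure's triode-region expression:
    R = (Vg - Vth - Vd) / [beta * Vd * (Vg - Vth - 0.5*Vd)]."""
    return (vg - vth - vd) / (beta * vd * (vg - vth - 0.5 * vd))

beta = 2e-3            # A/V^2, hypothetical mu*W*Cox/L
vg, vth, vd = 1.5, 0.5, 0.1
r_exact = series_resistance(vg, vth, vd, beta)   # ~4737 ohm
r_approx = 1.0 / (beta * vd)                     # 5000 ohm, the text's simplification
assert abs(r_exact - r_approx) / r_approx < 0.1  # within ~10% at Vd = 0.1 V
```

The approximation holds when the overdrive Vg - Vth is large compared to Vd, since the Vd terms in numerator and denominator then nearly cancel.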
11861488 | DETAILED DESCRIPTION

In the following description, numerous specific details are set forth to clearly describe various specific embodiments disclosed herein. One skilled in the art, however, will understand that the presently described invention may be practiced without all of the specific details discussed below. In other instances, well-known features have not been described so as not to obscure this presentation. As outlined above, according to embodiments of this presentation, both the excitatory and inhibitory spiking neuron circuits consist of two resistively coupled PA-type relaxation oscillators based on VO2 NDR devices. Depending on its biasing scheme, a VO2 relaxation oscillator according to an embodiment of this presentation can emulate the action of a specific type of voltage-gated nerve cell membrane protein ion channel, such as the K+, Cl−, and Na+ ion channels. Depending on the topology of a circuit according to an embodiment of this presentation, either a positive or a negative action potential can be generated by the coordinated actions of a pair of VO2 relaxation oscillators, each of them acting as a specific type of ion channel. According to an embodiment of this presentation, a PA relaxation oscillator is a dc-biased network comprising one passive resistor, one passive capacitor, and one active current-controlled NDR device. The NDR active device is the gain element that enables electrical oscillations and signal amplification. According to an embodiment of this presentation, the NDR device comprises a scalable one-port (two-terminal) metal/VO2/metal nano-crossbar threshold switch. The resistive switching and NDR phenomena in the VO2 material are driven by a Joule-heating induced Mott insulator-to-metal phase transition (see U.S. patent application Ser. No. 15/417,049).
The VO2 NDR threshold switch is locally active in certain operating regimes, and can therefore provide a signal gain in the AC domain, a feature necessary for spike generation and neural computation.

FIG. 2A illustrates schematically a function block diagram of an excitatory neuron circuit 20 according to an embodiment of this presentation, comprising a negatively biased Na+ gate or channel 22 followed by a positively biased K+ gate or channel 24; and FIG. 2B illustrates schematically a function block diagram of an inhibitory neuron circuit 26 according to an embodiment of this presentation, comprising a positively biased Cl− (or K+) gate or channel 24 followed by a negatively biased Na+ gate or channel 22.

According to an embodiment of this presentation, each gate or channel is a PA type of relaxation oscillator, each comprising, as detailed hereafter, a vanadium dioxide NDR switch (X1 or X2), a capacitor (C1 or C2), and a load resistor (RL1 or RL2). Input and output impedance blocks Zin and Zout act as dendritic or axonal filters. As detailed hereafter, according to an embodiment of this presentation, circuit 20 generates an amplified positive action potential in response to an excitatory (positive) voltage pulse that is higher than a threshold value. This characteristic resembles the all-or-nothing behavior of a biological neuron. However, when an inhibitory input, e.g. a negative voltage pulse, is fed into the circuit, it does not generate an action potential even if the amplitude of the input is greater than the threshold needed for action potential firing. Similarly, according to an embodiment of this presentation, circuit 26 generates an amplified negative action potential in response to a negative voltage pulse that is greater than a threshold value; whereas when an excitatory input is fed into the circuit, it does not generate an action potential even if the amplitude of the input is greater than the threshold needed for action potential firing.
FIG.3Ashows a schematic of an excitatory neuron circuit20according to an embodiment of this presentation, comprising first (X1) and second (X2) NDR devices each biased with opposite polarities (−V1; +V2), said first and second NDR devices (X1, X2) being coupled to first and second grounded capacitors (C1, C2). According to an embodiment of this presentation, said first NDR device (X1) has a first node30connected to an input node32of the neuron circuit20by a first load resistor RL1 and a second node34connected to a first voltage source36; said first node (30) of said first NDR device (X1) being coupled to said first grounded capacitor (C1). According to an embodiment of this presentation, said second NDR device (X2) has a first node38connected to said first node30of said first NDR device X1 by a second load resistor RL2 and a second node40connected to a second voltage source42; said first node38of said second NDR device X2 being coupled to said second grounded capacitor C2; said first node38of said second NDR device X2 forming an output node44of the neuron circuit20. According to an embodiment of this presentation, the first voltage source36is a negative voltage source and the second voltage source42is a positive voltage source. The voltages −V1, +V2 provided by voltage sources36and42can have a same amplitude or they can have different amplitudes. According to an embodiment of this presentation, the d.c. voltage supplies are amplitude-matched only if the two NDR devices X1 and X2 are well matched in their switching threshold voltages. If the switching threshold voltages of X1 and X2 are different, then the values of their d.c. voltage supplies have to be chosen differently, so that both NDR devices are biased at the proper operating points (less than, but close to, their switching threshold voltages) for the neuron circuit to spike properly.
According to an embodiment of this presentation, the first and second NDR devices X1, X2 can each comprise, between their first (respectively30,38) and second (respectively34,40) nodes, a resistance (respectively Re1, Re2) in series with an NDR material. According to an embodiment of this presentation, the NDR material of the first and second NDR devices X1, X2 can be a layer or thin film of vanadium dioxide. According to an embodiment of this presentation, Re1 can have a value of a few hundred Ohm and can be the cumulative resistance of a first metal nanowire electrode arranged between the first node (30) and a first side of the NDR material of X1, and of a second metal nanowire electrode arranged between the second node (34) and a second side of the NDR material of X1. Similarly, Re2 can have a value of a few hundred Ohm and can be the cumulative resistance of a first metal nanowire electrode arranged between the first node (38) and a first side of the NDR material of X2, and of a second metal nanowire electrode arranged between the second node (40) and a second side of the NDR material of X2. According to an embodiment of this presentation, the vanadium dioxide layer can be generated by electroforming from a vanadium pentoxide layer, as detailed in U.S. application Ser. No. 15/417,049, which is incorporated by reference in this presentation. Alternatively, the vanadium dioxide layer can be directly prepared by a variety of thin film deposition methods, including but not limited to, reactive d.c. or r.f. magnetron sputtering of vanadium metal or vanadium oxide targets, atomic layer deposition followed by post-deposition anneal, or metallic precursor oxidation.
According to an embodiment of this presentation, the first and second voltage sources (36,42) are arranged to bring the first and second NDR devices (X1, X2) close to their respective Mott Insulator-to-Metal Transition; and the voltage biases can be adjusted to set desired levels of voltage or current threshold for the neuron action potential generation (spike firing) and desired signal gains. According to an embodiment of this presentation, the first load resistor, the first NDR device, the first voltage source and the first grounded capacitor are arranged to form a first relaxation oscillator; and the second load resistor, the second NDR device, the second voltage source and the second grounded capacitor are arranged to form a second relaxation oscillator. According to an embodiment of this presentation, the NDR material of the first and second NDR devices X1, X2 can be a layer or thin film of vanadium dioxide, where vanadium dioxide has a Mott insulator-to-metal (IMT) transition temperature TC of 340 K (67° C.). The operation of such vanadium dioxide NDR devices only requires a very moderate Joule heating to raise the local temperature by 40 K (or ° C.) above room temperature. For example, the Inventor has calculated that an NDR device having a vanadium dioxide channel with a 10-nm radius (located for example in a thin film of vanadium pentoxide) has an extremely low estimated switching energy of 1.2 fJ, which is 50 times lower than a NbO2 device such as disclosed in the Pickett et al. document cited above. The Inventor projects that vanadium dioxide based neuron circuits according to embodiments of this presentation are capable of achieving a biologically-competitive neuron energy use of 0.1 pJ/spike or less. A single VO2 NDR device can switch with as little as 1.2 fJ, but the energy consumption of the complete neuron circuit (X1, X2, C1, C2, RL1, RL2) is dominated by the charging energy of the two capacitors.
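The quoted energy figures can be roughly cross-checked with back-of-the-envelope arithmetic. In the sketch below, the switching energy is estimated as the heat needed to raise a cylindrical VO2 channel by ~40 K, and the capacitor contribution as ½CV² per charging event; the VO2 density and specific heat are assumed literature-typical values, and the 50 nm channel length is an illustrative assumption, not figures taken from this text:

```python
import math

# --- Switching energy: heat a cylindrical VO2 channel by ~40 K ---
# Assumed (literature-typical, not from this text): density ~4600 kg/m^3,
# specific heat ~700 J/(kg*K) for VO2.
radius = 10e-9       # m, channel radius quoted in the text
length = 50e-9       # m, illustrative channel length
delta_t = 40.0       # K, rise from room temperature to ~340 K
rho, c_p = 4600.0, 700.0
volume = math.pi * radius**2 * length
e_switch = rho * c_p * volume * delta_t   # ~2 fJ, same order as the quoted 1.2 fJ

# --- Capacitor charging energy: (1/2) C V^2 per charging event ---
c_mem = 45e-15       # F, mid-range of the quoted 40-50 fF
v_bias = 0.5         # V, exemplary d.c. bias level
e_cap = 0.5 * c_mem * v_bias**2           # ~5.6 fJ per capacitor per event
```

Both estimates land in the femtojoule range, so a 0.1 pJ/spike budget corresponds to roughly ten charge/discharge events of the two capacitors plus switching overhead, consistent with the statement that the capacitors dominate the circuit-level energy.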
The 0.1 pJ/spike total energy consumption is estimated assuming an exemplary d.c. bias level near 0.5 V and with 40-50 fF capacitors (such a small capacitor value is chosen for neuron size and spike frequency considerations). According to an embodiment of this presentation, neuron circuit20can be used in a neural circuit having a plurality (not shown) of neuron circuits connected in a network (not shown); input node32being arranged to receive an input waveform through an input impedance Zin; and output node44being arranged to provide an output waveform through an output impedance Zout. FIG.3Bshows a schematic of an inhibitory neuron circuit26according to an embodiment of this presentation, comprising first (X2) and second (X1) NDR devices each biased with opposite polarities (+V2; −V1), said first and second NDR devices (X2, X1) being coupled to first and second grounded capacitors (C2, C1). According to an embodiment of this presentation, said first NDR device (X2) has a first node38connected to an input node32of the neuron circuit26by a first load resistor RL2 and a second node40connected to a first voltage source42; said first node38of said first NDR device X2 being coupled to said first grounded capacitor C2. According to an embodiment of this presentation, said second NDR device (X1) has a first node30connected to said first node38of said first NDR device X2 by a second load resistor RL1 and a second node34connected to a second voltage source36; said first node30of said second NDR device X1 being coupled to said second grounded capacitor C1; said first node30of said second NDR device X1 forming an output node44of the neuron circuit26. According to an embodiment of this presentation, the first voltage source42of neuron circuit26is a positive voltage source and the second voltage source36is a negative voltage source. The voltages −V1, +V2 provided by voltage sources36and42can have a same amplitude or they can have different amplitudes.
According to an embodiment of this presentation, the first and second NDR devices X2, X1 can each comprise, between their first (respectively38,30) and second (respectively40,34) nodes, a resistance (respectively Re2, Re1) in series with an NDR material. According to an embodiment of this presentation, the NDR material of the first and second NDR devices X2, X1 can be a layer or thin film of vanadium dioxide, for example identical to the one detailed previously for neuron circuit20. According to an embodiment of this presentation, Re2 can have a value of a few hundred Ohm and can be the cumulative resistance of a first metal nanowire electrode arranged between the first node (38) and a first side of the NDR material of X2, and of a second metal nanowire electrode arranged between the second node (40) and a second side of the NDR material of X2. Similarly, Re1 can have a value of a few hundred Ohm and can be the cumulative resistance of a first metal nanowire electrode arranged between the first node (30) and a first side of the NDR material of X1, and of a second metal nanowire electrode arranged between the second node (34) and a second side of the NDR material of X1. According to an embodiment of this presentation, the vanadium dioxide layer can be generated by electroforming from a vanadium pentoxide layer, as detailed in U.S. application Ser. No. 15/417,049, which is incorporated by reference in this presentation. Alternatively, the vanadium dioxide layer can be directly prepared by a variety of thin film deposition methods, including but not limited to, reactive d.c. or r.f. magnetron sputtering of vanadium metal or vanadium oxide targets, atomic layer deposition followed by post-deposition anneal, or metallic precursor oxidation.
According to an embodiment of this presentation, the first and second voltage sources (42,36) are arranged to bring the first and second NDR devices (X2, X1) close to their respective Mott Insulator-to-Metal Transition; and the voltage biases can be adjusted to set desired levels of voltage or current threshold for the neuron action potential generation (spike firing) and desired signal gains. According to an embodiment of this presentation, the first load resistor, the first NDR device, the first voltage source and the first grounded capacitor are arranged to form a first relaxation oscillator; and the second load resistor, the second NDR device, the second voltage source and the second grounded capacitor are arranged to form a second relaxation oscillator. According to an embodiment of this presentation, one or more of neuron circuits20and26can be used in a neural circuit having a plurality (not shown) of neuron circuits connected in a network (not shown); for example as illustrated inFIG.1B. The Inventor used a Mott IMT physics-based SPICE model of VO2 NDR devices to simulate the excitatory and inhibitory neuron circuits as shown inFIGS.3A and3B. In the SPICE model, the VO2 conduction channel is modeled as a cylindrical volume with a radius of 28-56 nm and a length of 50-100 nm. These dimensions are close to experimentally observed values in electroformed VO2 NDR devices (see U.S. patent application Ser. No. 15/417,049) or electroform-free VO2 NDR devices. It is noted that the excitatory and inhibitory neuron circuits20,26ofFIGS.3A and3Bare both tonic neuron circuits. According to embodiments of this presentation, and as detailed hereafter, tonic neuron circuits20and26can be made phasic by replacing the input load resistor (RL1 in20; RL2 in26) with a capacitor, or with a capacitor in series with a resistor.
FIG.4Ashows a schematic of an excitatory phasic neuron circuit20′ according to an embodiment of this presentation, essentially identical to neuron circuit20ofFIG.3A, except that instead of Na+ gate or channel22, it comprises a Na+ gate or channel22′. Na+ gate or channel22′ is essentially identical to Na+ gate or channel22except that first load resistance RL1 is replaced by a first load capacitor Cin. Capacitor Cin ensures that the phasic neuron only responds to the a.c. component (time derivative) of the input current at node32, but not to the d.c. current level as in the case of tonic neurons. FIG.4Bshows a schematic of an excitatory phasic neuron circuit20″ according to an embodiment of this presentation, essentially identical to neuron circuit20ofFIG.3A, except that instead of Na+ gate or channel22, it comprises a Na+ gate or channel22″. Na+ gate or channel22″ is essentially identical to Na+ gate or channel22except that a first load capacitor Cin is connected in series with first load resistance RL1. A neuron such as22″ can show the same spiking behaviors (e.g. the nine illustrated inFIGS.10B-10R: phasic spiking, phasic bursting, rebound spike, rebound burst, spike frequency adaptation, resonator, threshold variability, depolarizing after-potential, accommodation) as would a neuron circuit22′ such as inFIG.4A, but the operating points are shifted due to the added impedance of RL1. FIG.5shows a schematic of an inhibitory phasic neuron circuit26′ according to an embodiment of this presentation, essentially identical to phasic neuron circuit20′ ofFIG.4A, but with a positive voltage source36′ (+V2) instead of negative voltage source36(−V1), and a negative voltage source42′ (−V1) instead of the positive voltage source42(+V2).
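The role of the input capacitor Cin described above, passing only the time derivative of the input while blocking its d.c. level, can be illustrated numerically, since the current through a series capacitor is i = C·dV/dt. The capacitor value, time step, and step-input amplitude below are illustrative assumptions:

```python
import numpy as np

c_in = 1e-9                   # F, illustrative input capacitor
dt = 1e-8                     # s, simulation time step
t = np.arange(0, 40e-6, dt)

# Step input: 0 V, then a held 0.3 V plateau (a d.c. level after the edge)
v_in = np.where(t >= 10e-6, 0.3, 0.0)

# Current through the series input capacitor: i = C * dV/dt
i_in = c_in * np.gradient(v_in, dt)

edge_current = np.abs(i_in).max()            # large pulse at the voltage edge
plateau_current = np.abs(i_in[-100:]).max()  # ~0 on the flat d.c. plateau
```

A tonic neuron fed through a resistor would see the sustained d.c. level; the capacitively coupled phasic neuron sees only the brief current pulse at the edge, which is why it responds to changes in the stimulus rather than to its steady value.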
Another way to describe an inhibitory phasic neuron circuit26′ according to an embodiment of this presentation would be that it is identical to a tonic inhibitory neuron26as illustrated inFIG.3B, but with an input capacitor Cin replacing (or alternatively connected in series with) input resistor RL2. FIG.6Ais an elevation view of an exemplary metal/oxide/metal device structure of a current-controlled NDR threshold switch or device (X1 or X2) of the neuron circuits (20,26,20′,20″ or26′) shown inFIGS.3A,3B,4A,4B,5. According to an embodiment of this presentation, the NDR device (X1 or X2) comprises, on a substrate50having a surface52, a first electrode wire54extending on said surface52along a first direction; a vanadium pentoxide layer56extending on and contacting at least a portion of said first electrode54; a second electrode wire58extending over said surface52along a second direction, such that the second electrode wire58extends on and contacts at least a portion of the vanadium pentoxide layer56above the first electrode wire54at a crossing point60; wherein a region of vanadium dioxide62is included in said vanadium pentoxide layer56between the first54and second58electrodes at said crossing point60. According to an embodiment of this presentation, the vanadium pentoxide layer56can be disposed within a recess64in a dielectric layer66formed over said first electrode54and at least part of said surface52not covered by said electrode54. According to an embodiment of this presentation, the substrate50can comprise a Si substrate covered with a layer of SiO2, SiNx, SiCN, SiCOH or porous SiCOH. According to an embodiment of this presentation, at least one of the first54and second58electrode wires comprises one layer or multiple layers of Cr, Ti, Co, Ni, Pt, Pd, Al, Cu, Mo, Ta, W, TiW, TiN, TaN, WN, TiSi2, WSi2, MoSi2, TaSi2, NiSi, CoSi2, and doped polysilicon. 
According to an embodiment of this presentation, at least one of the first and the second electrode wires comprises a protrusion (not shown) extending normal to said surface52toward the other of the first and second electrode wires in said region of vanadium dioxide62. According to an embodiment of this presentation, region56can be no different from dielectric layer66instead of being a region of vanadium pentoxide. FIG.6Bis a cross-section view of a current-controlled NDR threshold switch X1, X2 according to an embodiment of this presentation, showing a thin vanadium pentoxide layer56(or dielectric66) covering a portion of first electrode54and of surface52; and second electrode58extending on, and contacting, the region of vanadium dioxide62above the first electrode wire54at a crossing point of the two electrode wires. It is to be noted that the drawings inFIGS.6A and6Bare not to scale. For example, the metal layers/electrodes (54,58) can be as thick as, or thicker than, the VO2/dielectric layer (62/56,66). At this juncture, it is noted that there exists a number of publications on single vanadium dioxide based relaxation oscillators, including the reference “Metal-insulator transition-induced electrical oscillation in vanadium dioxide thin film”, by Y. W. Lee et al., Appl. Phys. Lett. 92, 162903 (2008); the reference “Electrical oscillations induced by the metal-insulator transition in VO2”, by H.-T. Kim et al., J. Appl. Phys. 107, 023702 (2010); the reference “Voltage- and current-activated metal-insulator transition in VO2-based electrical switches: a lifetime operation analysis”, by A. Crunteanu, Sci. Technol. Adv. Mater. 11, 065002 (2010); and the reference “Current-induced electrical self-oscillations across out-of-plane threshold switches based on VO2 layers integrated in crossbars geometry”, by A. Beaumont et al., J. Appl. Phys. 115, 154502 (2014).
The first three references above all used lateral metal-VO2-metal device structures with electrodes separated by a few μm, thus their switching threshold voltages are very large (10-25 V), and are unsuitable for low-power applications. The fourth reference above reported vertical metal-VO2-metal crossbar devices having a much thinner VO2 layer (130 nm), and demonstrated a threshold voltage as low as 0.8 V. A main drawback of the technology disclosed in this reference, though, was that it used a manufacturing process requiring sapphire substrates and high growth temperatures (˜500° C.) and was thus not suitable for CMOS-compatible IC processes. An embodiment of this presentation relates to an electronic circuit having one-port (two-terminal) passive elements (resistors, capacitors) and one-port locally-active VO2 nano devices that functions as an electronic analog of an excitatory neuron, generating an amplified excitatory (positive) action potential (spike) if and only if excitatory (positive) voltage pulse or current inputs beyond certain thresholds are provided. An embodiment of this presentation relates to an electronic circuit having one-port (two-terminal) passive elements (resistors, capacitors) and one-port locally-active VO2 nano devices that functions as an electronic analog of an inhibitory neuron, generating an amplified inhibitory (negative) action potential (spike) if and only if inhibitory (negative) voltage pulse or current inputs beyond certain thresholds are provided. According to an embodiment of this presentation, the aforementioned circuits are composed of two electrically coupled VO2 relaxation oscillators placed in series along the signal path. According to an embodiment of this presentation, each of the aforementioned VO2 relaxation oscillators is used to emulate the action of a specific type of voltage-gated membrane protein ion channel in a nerve cell membrane, including K+, Cl−, or Na+ ion channels.
According to an embodiment of this presentation, the electrical coupling of the two VO2 relaxation oscillators can be achieved via a variety of passive first-order or higher-order RC filters, including but not limited to, first-order RC high-pass filter and RC parallel filter, second-order RC parallel filter, band-pass filter, and bridged-tee band stop (notch) filters. According to an embodiment of this presentation, each of the aforementioned VO2 relaxation oscillators can be composed of a positively or negatively biased (polarized) VO2 nano device placed in parallel with a grounded charge storage capacitor resembling the membrane capacitance, where both the voltage-biased (polarized) VO2 nano device and the grounded membrane capacitor can be connected in series with a load resistor. According to an embodiment of this presentation, the aforementioned VO2 nano device can be a current-controlled negative-differential-resistance (NDR) device, where the NDR is induced by a Mott insulator-to-metal (IMT) transition at sufficient Joule heating, by supplying a voltage bias across it and passing a current through it. Such a device is locally active within the NDR operating regime, hence it can provide a signal gain in the a.c. domain. According to an embodiment of this presentation, the voltage biases applied to the VO2 nano devices in the aforementioned VO2 relaxation oscillators are designed to be close to, but not yet reaching, the voltage threshold needed to trigger the Mott insulator-to-metal transition. The voltage biases can then be adjusted to set desired levels of voltage or current threshold for the neuron action potential generation (spike firing) and desired signal gains.
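As an example of the first-order RC high-pass filter mentioned among the possible coupling networks, the sketch below computes its cutoff frequency and magnitude response. The R and C values are borrowed, purely as an illustration, from the measured circuit parameters reported later in this text (RL=6 kΩ, C=2 nF); they are not stated in the source as filter values:

```python
import math

r, c = 6e3, 2e-9                   # illustrative component values
f_c = 1.0 / (2 * math.pi * r * c)  # first-order high-pass cutoff, ~13.3 kHz

def gain(f):
    """|H(f)| of a first-order RC high-pass filter."""
    x = 2 * math.pi * f * r * c
    return x / math.sqrt(1 + x * x)
```

At the cutoff the magnitude is 1/√2; at d.c. the gain is zero, which is the property exploited in the phasic neuron variants: the filter passes spike edges while blocking steady levels.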
According to an embodiment of this presentation, for the aforementioned neuron circuits to function, the load impedance (resistor or capacitor or resistor and capacitor in series or parallel, where appropriate) in each of the two coupled VO2 relaxation oscillators has to be appropriately valued, so that its corresponding load line crosses the VO2 NDR regime in the current-voltage relationship at a single point, whereby an astable multivibrator can be formed when a reactive circuit element (a capacitor) is added in parallel. In other words, one needs to follow the general operating principle for a relaxation oscillator for the neuron circuits to function. According to an embodiment of this presentation, the aforementioned VO2 nano device is a metal/VO2/metal tri-layer device. VO2 is the active work medium that provides the needed insulator-to-metal transition, mimicking the opening/closing of a voltage-gated protein ion channel in the nerve cell membranes. According to an embodiment of this presentation, in one possible physical implementation, the metal/VO2/metal tri-layer device can be formed by first depositing a set of metal nanowires, then depositing a layer of VO2 thin film, and finalizing the structure by depositing a second set of metal nanowires at an angle to the first set of nanowires. The first and second set of nanowires can be placed at an angle, which can be 90 degrees.
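The single-crossing load-line condition stated above can be checked numerically. The piecewise-linear V(I) curve below is a toy stand-in for a measured VO2 current-controlled NDR characteristic; all branch slopes, breakpoints, bias and load values are illustrative assumptions chosen so that the load line intersects the curve exactly once, inside the NDR branch:

```python
import numpy as np

def v_device(i):
    """Toy piecewise-linear V(I) for a current-controlled NDR device:
    insulating branch, NDR branch (dV/dI < 0), then metallic branch."""
    i = np.asarray(i)
    return np.where(i < 50e-6, 10e3 * i,                     # insulating
           np.where(i < 150e-6, 0.5 - 4e3 * (i - 50e-6),     # NDR region
                    0.1 + 1e3 * (i - 150e-6)))               # metallic

v_dc, r_load = 0.9, 5.7e3
i = np.linspace(0, 300e-6, 30001)
# f(I) = V_device(I) - V_loadline(I); a zero is an operating point
f = v_device(i) - (v_dc - r_load * i)
crossings = int(np.sum(f[:-1] * f[1:] < 0))
idx = int(np.argmax(f[:-1] * f[1:] < 0))
i_operating = i[idx]
```

With these values there is exactly one crossing, and it lies on the negatively sloped branch (between 50 µA and 150 µA), so adding the parallel capacitor yields an astable multivibrator as described.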
According to an embodiment of this presentation, in yet another physical implementation, the metal/VO2/metal tri-layer device can be formed by first depositing a set of metal nanowires and achieving a planarized surface after dielectric deposition and chemical mechanical polishing (CMP); then depositing a dielectric layer and etching nanoscale via holes that open up the first set of metal nanowires; then depositing a layer of VO2 thin film to completely fill the via holes; then etching off the redundant VO2 thin film deposited on the top surface of the dielectric layer, followed by CMP planarization; and finalizing the structure by depositing a second set of metal nanowires at an angle to the first set of nanowires to form a cross-point junction covering the nanoscale via holes. The first and second set of nanowires can be placed at an angle, which can be 90 degrees. Such a metal/VO2/metal tri-layer device could look like the structure illustrated inFIG.6A, where regions66and64would belong to a same dielectric layer; the region of vanadium dioxide62would be a layer of VO2 thin film that completely fills a via hole; electrode54would be a planarized metal nanowire of the first set and electrode58a metal nanowire of the second set. According to an embodiment of this presentation, in yet another physical implementation, the VO2 layer in between the two sets of metal nanowires can be replaced by a layer of an insulating amorphous V2O5 thin film material. A nanocrystalline VO2 conduction channel can then be formed inside the insulating amorphous V2O5 layer at the crossing of the electrode wires by a one-time operation termed electroforming (see U.S. patent application Ser. No. 15/417,049).
According to an embodiment of this presentation, in a variation of the aforementioned physical implementations, the locally-active VO2 nano conduction channel may be directly deposited or electroformed inside nanoscale vias fabricated in a film of dielectric layer66commonly used in a semiconductor IC process, such as a film of SiO2, SiNx, SiCN, SiCOH or porous SiCOH; or other appropriate dielectric materials as recited in the present application. According to an embodiment of this presentation, the VO2 material can be replaced with other types of materials possessing similar heat-driven insulator-to-metal transitions. The material can be a binary, ternary, or more sophisticated oxide compounds, or other materials such as chalcogenides. Overall, a major benefit of a neuron circuit according to this presentation, as compared to conventional Si CMOS based solutions, comprises the superior device-level performance of vanadium dioxide (VO2) NDR switches, including scalability, switching speed (0.1 to 10 ps), and ultralow power consumption. Si devices are non-stackable, while the VO2 NDR devices are made by deposited films and can be stacked into multiple layers on the same substrate. The scalability of VO2 nano-crossbar devices is effectively 4F2/N (F: half pitch of lithography, N: number of device layers), which cannot be achieved by Si technology. Unlike Si CMOS transistors, the operating energy of a VO2 nano-crossbar device scales down unbounded with lithographic resolution. The Inventor has demonstrated VO2 nano-crossbar devices that operate as low as 0.5V, which is ˜40% lower than the best reported result for such devices (see U.S. patent application Ser. No. 15/417,049). The size and power scalabilities of VO2 NDR devices promise a viable path to scale the neuron size and operating energy beyond the limits of Si technology. 
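The 4F2/N footprint scaling mentioned above translates directly into device density. The sketch below works through the arithmetic for an assumed half pitch and layer count; both values are illustrative, not figures from the source:

```python
F = 20e-9   # m, assumed lithography half-pitch (illustrative)
N = 4       # assumed number of stacked device layers (VO2 films are stackable)

area_per_device = 4 * F**2 / N            # effective footprint per device, m^2
devices_per_cm2 = 1e-4 / area_per_device  # 1 cm^2 = 1e-4 m^2

# Single-layer (N=1) baseline, for comparison with non-stackable devices
baseline_per_cm2 = 1e-4 / (4 * F**2)
```

Stacking N layers multiplies the areal density by N, which is the advantage over planar Si devices claimed in the text; at the assumed 20 nm half pitch and four layers this works out to a few times 10^11 devices per cm².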
Moreover, VO2 based relaxation oscillators are considered to have built-in stochasticity and criticality, therefore VO2 artificial neurons are inherently biomimetic. VO2 is a superior NDR material to NbO2 for power scaling due to its much lower Mott IMT critical temperature of 340 K (67° C.); consequently, VO2 devices require a modest local temperature rise of 40 K (or ° C.) to operate. Due to the heat-driven nature of Mott IMT, the threshold Joule heating power to trigger NDR switching is proportional to the volume of the VO2 channel and can be scaled down unbounded with lithography resolution. A VO2 NDR device based artificial neuron will thus have a reduced power consumption that can possibly rival biological levels. There are several important differences that set apart the neuron circuitries according to this presentation from the known neuristors such as shown inFIG.1C: in the known neuristors the two membrane capacitors are placed in parallel with the NDR devices and are biased by the same d.c. voltage source. In a biologically-plausible Hodgkin-Huxley (HH) neuron model, the membrane capacitor is grounded. In a neuron circuit according to this presentation, the two membrane capacitors are grounded instead of biased, consistent with the original HH neuron model. The benefits of grounding the membrane capacitors include that the voltage across the capacitors becomes the actual cell membrane potential, and unbiased capacitors are more flexible in IC design. The design flexibility is based on the fact that the current through the capacitors is determined by the time derivative of the voltage across them instead of its absolute value. Further, in the known neuristors, only one type of RC parallel filter is used as the impedance blocks along the signal path.
According to an embodiment of this presentation, a variety of passive first-order or higher-order RC filters can be used as Zin, Zout; including but not limited to, first-order RC high-pass filter and RC parallel filter, second-order RC parallel filter, band-pass filter, and bridged-tee band stop (notch) filters. In the known neuristors, there was no purposeful circuit design and demonstration of a phasic spiking neuron with Class 3 excitability, and there was no purposeful circuit design and demonstration of an inhibitory neuron that fires amplified negative spikes in response to a supra-threshold inhibitory stimulus. This presentation will now discuss the functionalities and spiking dynamics/behaviors of neuron circuits such as illustrated inFIGS.3,4,5. FIG.7Aillustrates a SPICE model simulated response of an excitatory neuron circuit20as shown inFIG.3A, to a sub-threshold excitatory (positive) voltage pulse of 80 mV. FIG.7Billustrates a SPICE model simulated response of an excitatory neuron circuit20as shown inFIG.3Ato a supra-threshold excitatory voltage pulse of 200 mV. FIG.7Cillustrates a SPICE model simulated response of an excitatory neuron circuit20as shown inFIG.3Ato a supra-threshold inhibitory (negative) voltage pulse of −200 mV. FIG.7Dillustrates a SPICE model simulated response of an inhibitory neuron circuit26as shown inFIG.3Bto a sub-threshold inhibitory voltage pulse of −80 mV. FIG.7Eillustrates a SPICE model simulated response of an inhibitory neuron circuit26as shown inFIG.3Bto a supra-threshold inhibitory voltage pulse of −200 mV. FIG.7Fillustrates a SPICE model simulated response of an inhibitory neuron circuit26as shown inFIG.3Bto a supra-threshold excitatory voltage pulse of 200 mV. As shown inFIG.7A-7C, SPICE model simulations of the excitatory neuron (the circuit20shown inFIG.3A), using a Mott physics-based model of vanadium dioxide NDR switches, showed that such a circuit responds to an excitatory input, e.g.
a positive voltage pulse, and generates an amplified positive action potential, if the input is higher than a certain threshold value. This behavior resembles the all-or-nothing firing behavior of a biological neuron. Further, when an inhibitory input, e.g. a negative voltage pulse, is fed into the circuit, it does not generate an action potential even if the amplitude of the input is greater than the threshold needed for action potential firing. As shown inFIG.7D-7F, SPICE model simulations of an inhibitory neuron (the circuit26shown inFIG.3B), using a Mott physics-based model of vanadium dioxide NDR switches, showed that such a circuit does not respond to an excitatory input, e.g. a positive voltage pulse, even if the amplitude of the input is greater than the firing threshold. Further, when an inhibitory supra-threshold input, e.g. a negative voltage pulse, is fed into the circuit, it generates an amplified negative action potential, which has a similar shape to an excitatory action potential, but with opposite polarity. The Inventor also performed Mott physics-based SPICE model simulations at various settings of the passive R and C components and input stimuli (voltage pulse or current clamp) to reveal biologically-plausible spike action potential generations for the proposed neuron circuitries. FIG.8is a chart illustrating firing modes or behaviors that excitatory neurons according to embodiments of this presentation were experimentally demonstrated to have.
As shown inFIG.8, and as detailed hereafter, tonic excitatory neurons such as illustrated inFIG.3Awere demonstrated to have Tonic Spiking behavior, Tonic Bursting behavior, Class 1 Excitable behavior, Class 2 Excitable behavior, Subthreshold Oscillations behavior, Integrator behavior, Bistability behavior, Inhibition-induced Spiking behavior, Inhibition-induced Bursting behavior, Excitation Block behavior, All-or-nothing behavior, Refractory Period behavior, Spike Frequency Adaptation behavior, and Spike Latency behavior. The last four behaviors are common properties shared with phasic excitatory neurons. As also shown inFIG.8, and as detailed hereafter, phasic excitatory neurons such as illustrated inFIG.4A or4Bwere demonstrated to have Phasic Spiking (Class 3 Excitable) behavior, Phasic Bursting behavior, Rebound Spike behavior, Rebound Burst behavior, Resonator behavior, Threshold Variability behavior, Depolarizing After-Potential behavior, Accommodation behavior, All-or-nothing behavior, Refractory Period behavior, Spike Frequency Adaptation behavior, and Spike Latency behavior. The last four behaviors are common properties shared with tonic excitatory neurons. FIG.8also illustrates that mixed-mode neuron circuits can be made according to this presentation, as detailed hereafter in relation withFIGS.34A-D.
FIGS.9A-9Willustrate experimentally observed spiking behaviors for excitatory tonic neuron circuits according to embodiments of this presentation.FIG.9Aillustrates a tonic spiking behavior such as detailed hereafter inFIGS.15A-B.FIG.9Cillustrates a tonic bursting behavior such as detailed hereafter in FIGS.16A-D.FIG.9Fillustrates a spike frequency adaptation behavior such as detailed hereafter inFIGS.17A-B.FIG.9Gillustrates a class 1 excitable behavior such as detailed hereafter inFIGS.18A-B.FIG.9Hillustrates a class 2 excitable behavior such as detailed hereafter inFIGS.18A-B.FIG.9Iillustrates a spike latency behavior such as detailed hereafter inFIGS.19A-B.FIG.9Jillustrates a subthreshold oscillations behavior such as detailed hereafter inFIGS.20A-B.FIG.9Lillustrates an integrator behavior such as detailed hereafter inFIGS.21A-B.FIG.9Pillustrates a bistability behavior as detailed hereafter inFIGS.22A-B;FIG.9Sillustrates an inhibition-induced spiking behavior such as detailed hereafter inFIGS.23A-B.FIG.9Tillustrates an inhibition-induced bursting behavior such as detailed hereafter inFIGS.24A-B.FIG.9Uillustrates an all-or-nothing behavior such as detailed hereafter inFIGS.11A-B.FIG.9Villustrates a refractory period behavior such as detailed hereafter inFIGS.12A-B,13and14A-B-C.FIG.9Willustrates an excitatory block behavior such as detailed hereafter inFIGS.25A-B. Polarity-inverted mirror behaviors would be observed for inhibitory tonic neuron circuits according to embodiments of this presentation. 
FIGS. 10B-10R illustrate spiking behaviors observed for phasic neuron circuits according to embodiments of this presentation. FIG. 10B illustrates a phasic spiking behavior such as detailed hereafter in FIGS. 26A-B. FIG. 10D illustrates a phasic bursting behavior such as detailed hereafter in FIGS. 27A-B. FIG. 10M illustrates a rebound spike behavior such as detailed hereafter in FIGS. 29A-D. FIG. 10N illustrates a rebound burst behavior such as detailed hereafter in FIGS. 30A-B. FIG. 10F illustrates a spike frequency adaptation behavior such as detailed hereafter in FIGS. 17CA-17CD. FIG. 10O illustrates a threshold variability behavior such as detailed hereafter in FIGS. 31A-B. FIG. 10Q illustrates a Depolarization After-Potential behavior such as detailed hereafter in FIGS. 33A-B. FIG. 10R illustrates an accommodation behavior such as detailed hereafter in FIGS. 28A-C. Polarity-inverted mirror behaviors would be observed for inhibitory phasic neuron circuits according to embodiments of this presentation. FIGS. 11A and 11B illustrate an all-or-nothing behavior of a neuron circuit. It is noted that all-or-nothing behavior is a behavior common to both tonic and phasic neurons. The measurement and simulation illustrated in FIGS. 11A and 11B were made with respect to a tonic neuron circuit according to an embodiment of the presentation, but they could also have been made with respect to a phasic neuron circuit according to an embodiment of the presentation. The all-or-none law is a principle that the strength by which a nerve or muscle fiber responds to a stimulus is independent of the strength of the stimulus. If that stimulus exceeds the threshold potential, the nerve or muscle fiber will give a complete response; otherwise, there is no response. FIGS. 11A-11B illustrate that a neuron according to an embodiment of this presentation does not react to the first two sub-threshold input stimuli, but fires two spikes in response to the 3rd and 4th supra-threshold input stimuli.
The two spikes have nearly the same shape and amplitude irrespective of the input strength. The data illustrated in FIGS. 11A-11B was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5251-13, X2=5251-9
RL1=RL2=6 kΩ
C1=C2=2 nF (plus stray capacitance ˜1 nF for each)
V1=−1.35 V, V2=1.35 V
Input voltage pulse width=10 μs
Sub-threshold Inputs: 0.1 V, 0.15 V
Supra-threshold Inputs: 0.25 V, 0.4 V
FIGS. 12A and 12B illustrate experimentally observed (12A) and simulated (12B) refractory period behavior of tonic neuron circuits according to embodiments of this presentation. As detailed hereafter, in a neuron each action potential is followed by a refractory period, which can be divided into an absolute refractory period, during which it is impossible to evoke another action potential, followed by a relative refractory period during which a stronger-than-usual stimulus is required to trigger a firing of the neuron (see for example [1] Purves, D; Augustine, GJ; Fitzpatrick, D; Hall, WC; Lamantia, A-S; McNamara, JO; White, LE (2008). Neuroscience (4th ed.). Sunderland, MA: Sinauer Associates. p. 49; or [2] Stevens, CF (1966). Neurophysiology: A Primer. New York: John Wiley and Sons. pp. 19-20; or [3] Bullock, TH; Orkand, R; Grinnell, A (1977). Introduction to Nervous Systems. A series of books in biology. San Francisco: W. H. Freeman. p. 151; or [4] Junge, D (1981). Nerve and Muscle Excitation (2nd ed.). Sunderland, MA: Sinauer Associates. pp. 4-5). These two refractory periods (absolute and relative) are caused by changes in the state of sodium and potassium channels. When closing after an action potential, sodium channels enter an "inactivated" state, in which they cannot be made to open regardless of the membrane potential; this gives rise to the absolute refractory period.
Even after a sufficient number of sodium channels have transitioned back to their resting state, it frequently happens that a fraction of potassium channels remains open, making it difficult for the membrane potential to depolarize, and thereby giving rise to the relative refractory period. Because the density and subtypes of potassium channels may differ greatly between different types of neurons, the duration of the relative refractory period is highly variable. The absolute refractory period is largely responsible for the unidirectional propagation of action potentials along axons (see Purves, D; Augustine, GJ; Fitzpatrick, D; Hall, WC; Lamantia, A-S; McNamara, JO; White, LE (2008). Neuroscience (4th ed.). Sunderland, MA: Sinauer Associates. p. 56). At any given moment, the patch of axon behind the actively spiking part is refractory, but the patch in front, not having been activated recently, is capable of being stimulated by the depolarization from the action potential. The data illustrated in FIGS. 12A and 12B was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5050-15, X2=5050-7
RL1=RL2=5 kΩ
C1=C2=5 nF (plus stray capacitance ˜1 nF for each)
V1=−1.6 V, V2=1.6 V
Dual voltage input pulses
Input voltage pulse width=10 μs
Pulse Period (from top to bottom): 20 μs, 40 μs, 60 μs, 80 μs, 100 μs, 120 μs, 150 μs
FIG. 13 illustrates a refractory period behavior of a neuron. A figure such as FIG. 13 can be found at: http://www.physioloyweb.com/lecture notes/ . . . . . . neuronal_action_potential/neuronal_action_potential_refractory_periods.html.
Each action potential is followed by a refractory period, which can be divided into an absolute refractory period, during which it is impossible to evoke another action potential, and then a relative refractory period, during which a stronger-than-usual stimulus is required (see "Purves, D; Augustine, GJ; Fitzpatrick, D; Hall, WC; Lamantia, A-S; McNamara, JO; White, LE (2008). Neuroscience (4th ed.). Sunderland, MA: Sinauer Associates. p. 49 and p. 56"; "Stevens, CF (1966). Neurophysiology: A Primer. New York: John Wiley and Sons. pp. 19-20"; "Bullock, TH; Orkand, R; Grinnell, A (1977). Introduction to Nervous Systems. A series of books in biology. San Francisco: W. H. Freeman. p. 151"; "Junge, D (1981). Nerve and Muscle Excitation (2nd ed.). Sunderland, MA: Sinauer Associates. pp. 4-5"). These two refractory periods are caused by changes in the state of sodium and potassium channel molecules. When closing after an action potential, sodium channels enter an "inactivated" state, in which they cannot be made to open regardless of the membrane potential; this gives rise to the absolute refractory period. Even after a sufficient number of sodium channels have transitioned back to their resting state, it frequently happens that a fraction of potassium channels remains open, making it difficult for the membrane potential to depolarize, and thereby giving rise to the relative refractory period. Because the density and subtypes of potassium channels may differ greatly between different types of neurons, the duration of the relative refractory period is highly variable. The absolute refractory period is largely responsible for the unidirectional propagation of action potentials along axons. At any given moment, the patch of axon behind the actively spiking part is refractory, but the patch in front, not having been activated recently, is capable of being stimulated by the depolarization from the action potential.
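The interplay of absolute and relative refractory periods described above can be mimicked with a minimal leaky integrate-and-fire sketch. All names, constants, and time units below are illustrative choices for the sketch, not parameters of the VO2 circuits measured in the figures: spiking is suppressed entirely for a time t_abs after each firing, and the threshold is temporarily raised during the following window t_rel, so only a stronger-than-usual input can fire.

```python
def lif_refractory(i_in, dt=0.01, tau=1.0, v_th=1.0,
                   t_abs=0.5, t_rel=1.0, rel_boost=1.0):
    """Leaky integrate-and-fire neuron with absolute and relative
    refractory periods.  i_in is a list of input current samples;
    returns the spike times (in the same illustrative time units)."""
    v, spikes, last_spike = 0.0, [], float("-inf")
    for n, i in enumerate(i_in):
        t = n * dt
        v += dt * (-v / tau + i)          # leaky integration
        since = t - last_spike
        if since < t_abs:                 # absolute period: no spike possible
            v = 0.0
            continue
        # relative period: threshold raised by rel_boost
        th = v_th + (rel_boost if since < t_abs + t_rel else 0.0)
        if v >= th:
            spikes.append(t)
            last_spike, v = t, 0.0
    return spikes

# Four pulses loosely mirroring the experiments of FIGS. 14A-C:
i_in = ([30.0] * 5 + [0.0] * 15      # supra-threshold pulse at t=0: fires
        + [100.0] * 5 + [0.0] * 55   # strong pulse in absolute period: silent
        + [30.0] * 5 + [0.0] * 15    # equal pulse in relative period: silent
        + [60.0] * 5 + [0.0] * 45)   # stronger pulse in relative period: fires
spikes = lif_refractory(i_in)        # two spikes: first and last pulse only
```

As in FIGS. 14A-C, the very strong second pulse cannot fire the neuron inside the absolute period, and within the relative period only the stronger fourth pulse (not the equal-strength third pulse) elicits a second spike.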
FIG. 14A illustrates a refractory period behavior of a tonic neuron circuit according to embodiments of this presentation with evidence of absolute refractory period. The data illustrated in FIGS. 14A-C was measured for a neuron circuit 20 having the following characteristics and with the following stimuli:
VO2 device ID: X1=5352-1, X2=5252-13
RL1=RL2=6 kΩ
C1=4 nF, C2=1 nF (plus stray capacitance ˜1 nF for each)
V1=−1.45 V, V2=1.45 V
Dual voltage input pulses
Pulse width=8 μs
Pulse 1 height=0.75 V
Pulse 2 height=1.5 V
Pulse Period: 15 μs
As illustrated in FIG. 14A, if a 2nd voltage input pulse is applied within the absolute refractory period, however strong it is (in the example illustrated, the amplitude is 1.5 V, two times as large as the first pulse), the neuron does not produce a 2nd action potential. FIGS. 14B and 14C illustrate a refractory period behavior of a tonic neuron circuit according to embodiments of this presentation with evidence of relative refractory period. As illustrated in FIG. 14B, if a 2nd voltage input pulse is applied within the relative refractory period with the same strength as the first triggering input pulse (0.75 V illustrated), the neuron does not produce a 2nd action potential. As illustrated in FIG. 14C, if a 2nd voltage input pulse applied within the relative refractory period is much stronger (1.5 V illustrated) than the first input pulse (0.75 V illustrated), the neuron does produce a 2nd action potential. FIG. 15A illustrates a theoretical tonic spiking behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). Most neurons are excitable, that is, they are quiescent but can fire spikes when stimulated. To test this property, neurophysiologists inject pulses of d.c. current via an electrode attached to the neuron and record its membrane potential.
The input current and the neuronal response are usually plotted one beneath the other, as shown hereafter in FIGS. 15BA-15BD. While the input is on, the neuron continues to fire a train of spikes. This kind of behavior, called tonic spiking, can be observed in three types of cortical neurons: regular spiking (RS) excitatory neurons, low-threshold spiking (LTS), and fast spiking (FS) inhibitory neurons (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). Continuous firing of such neurons indicates that there is a persistent input. FIGS. 15BA-BD illustrate an experimental (15BA, 15BC) and simulated (15BB, 15BD) tonic spiking behavior of tonic excitatory neuron circuits according to embodiments of this presentation. The data illustrated in FIGS. 15BA-15BD was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5151-7, X2=5151-3
RL1=RL2=5 kΩ
C1=5 nF, C2=2 nF (plus stray capacitance ˜1 nF for each)
V1=−1.5 V, V2=1.5 V
Current clamp input is converted from a voltage square pulse using a stimulation isolator with a gain of 0.1 mA/V. It is noted that although the input current is a square wave (remains constant after the onset), the monitored current flowing through RL1 shows "glitches" caused by the back action of spikes toward RL1. FIG. 16A illustrates a theoretical tonic bursting behavior of a biological neuron. Some neurons, such as the chattering neurons in cat neocortex (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719), fire periodic bursts of spikes when stimulated, as illustrated in FIG. 16A. The interburst (i.e.
between bursts) frequency may be as high as 50 Hz, and it is believed that such neurons contribute to the gamma-frequency oscillations in the brain. FIG. 16B illustrates a tonic bursting behavior of tonic excitatory neuron circuits according to embodiments of this presentation. The data illustrated in FIGS. 16B-D was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5051-9, X2=5051-5
RL1=RL2=10 kΩ
C1 varies, C2=0 nF (plus stray capacitance ˜1 nF for each)
V1=−1.85 V, V2=1.85 V
Current clamp input of 50 μA by Keithley 2400 SMU
FIGS. 16C and 16D illustrate tonic bursting behaviors of tonic neuron circuits according to embodiments of this presentation. A neuron circuit according to an embodiment of this presentation can be tuned to produce an arbitrary number of spikes in each burst period, using capacitor C1 as the tuning knob. FIG. 16C illustrates the tunable tonic bursting observed when varying the value of capacitor C1, where, as illustrated in FIG. 16D, both the Tonic Burst Period and the number of spikes in each bursting period increase with C1 (with C2 having a fixed value of ˜1 nF from the stray capacitance of the setup). FIG. 17A illustrates a theoretical tonic spike frequency adaptation behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). The most common type of excitatory neuron in mammalian neocortex, namely the regular spiking (RS) cell, fires tonic spikes with decreasing frequency, as illustrated in FIG. 17A. That is, the frequency is relatively high at the onset of stimulation, and then it adapts. Low-threshold spiking (LTS) inhibitory neurons also have this property. The interspike frequency of such cells may encode the time elapsed since the onset of the input.
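The tonic spiking and spike-frequency-adaptation behaviors described above are conventionally illustrated with the Izhikevich model cited throughout this section. The sketch below uses the regular-spiking (RS) parameters a=0.02, b=0.2, c=-65, d=8 from that reference; the constant current I=10 and the Euler step size are illustrative choices, and the sketch is not a model of the VO2 circuits themselves.

```python
def izhikevich_spikes(I, T=400.0, dt=0.25,
                      a=0.02, b=0.2, c=-65.0, d=8.0):
    """Forward-Euler integration of the Izhikevich model
        v' = 0.04 v^2 + 5 v + 140 - u + I
        u' = a (b v - u)
    with the reset v <- c, u <- u + d when v reaches 30 mV.
    Returns the spike times (ms) for a constant injected current I."""
    v, u, spikes = c, b * c, []
    for n in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike: record the time and reset
            spikes.append(n * dt)
            v, u = c, u + d
    return spikes

# RS parameters give tonic spiking with spike frequency adaptation:
# the inter-spike intervals lengthen after the onset of stimulation.
spikes = izhikevich_spikes(10.0)
isi = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]
```

The recovery variable u grows by d at every spike, so successive spikes face a larger opposing current; this is the mechanism behind the decreasing firing frequency of FIG. 17A.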
FIGS. 17BA-17BD illustrate an experimental (17BA, 17BC) and simulated (17BB, 17BD) tonic spike frequency adaptation behavior of tonic excitatory neuron circuits according to embodiments of this presentation. The data illustrated in FIGS. 17BA-17BD was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5251-13, X2=5251-9
RL1=RL2=10 kΩ
C1=200 nF, C2=2 nF (plus stray capacitance ˜1 nF for each)
V1=−1.4 V, V2=1.4 V
Current clamp input of 90 μA is converted from a voltage square pulse of 0.9 V using a stimulation isolator with a gain of 0.1 mA/V. Similarly to FIGS. 17BA-17BD, FIGS. 17CA-17CD illustrate an experimental (17CA, 17CC) and simulated (17CB, 17CD) tonic spike frequency adaptation behavior of phasic excitatory neuron circuits according to embodiments of this presentation. FIG. 18A illustrates both a theoretical tonic class 1 excitability behavior of a biological neuron, and a theoretical tonic class 2 excitability behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). The frequency of tonic spiking of neocortical RS excitatory neurons depends on the strength of the input, and it may span a range from 2 Hz to 200 Hz, or greater. The ability to fire low-frequency spikes when the input is weak (but supra-threshold) is called Class 1 excitability (see "Frequency-current (F-I) curves of cortical pyramidal neuron" from FIG. 1.14 in E. M. Izhikevich, "Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting", The MIT Press, Cambridge MA (2007)). Class 1 excitable neurons can encode the strength of the input into their firing rate, as illustrated in FIG. 18A. FIG. 18B illustrates a tonic class 1 excitability behavior of tonic excitatory neuron circuits according to embodiments of this presentation.
The data illustrated in FIG. 18B was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5151-7, X2=5151-3
RL1=RL2=5 kΩ
C1=5 nF, C2=5 nF (plus stray capacitance ˜1 nF for each)
V1=−1.5 V, V2=1.5 V
Current ramp input up to 150 μA is converted from a voltage ramp up to 1.5 V using a stimulation isolator with a gain of 0.1 mA/V. Consistently with FIG. 18B, FIG. 18C illustrates a tonic class 2 excitability behavior of tonic excitatory neuron circuits according to embodiments of this presentation. FIG. 19A illustrates a theoretical spike latency behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). Most cortical neurons fire spikes with a delay that depends on the strength of the input signal. For a relatively weak but supra-threshold input, the delay, also called spike latency, can be quite large as illustrated in FIG. 19A. The RS neuron cells in mammalian cortex can have latencies of tens of milliseconds. Such latencies provide a spike-timing mechanism to encode the strength of the input. FIGS. 19BA-19BC illustrate an experimental (19BA) and simulated (19BB) spike latency behavior of tonic excitatory neuron circuits according to embodiments of this presentation. The figure also illustrates the variation of the spike latency with the pulse height in volts. The data illustrated in FIGS. 19BA-19BC was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5352-1, X2=5252-13
RL1=RL2=6 kΩ
C1=10 nF, C2=3 nF (plus stray capacitance ˜1 nF for each)
V1=−1.5 V, V2=1.5 V
Input voltage pulse width=10 μs
FIG. 20A illustrates a theoretical subthreshold oscillations behavior of a biological neuron (see E. M.
Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). Practically every brain structure has neurons capable of exhibiting oscillatory potentials, as in FIG. 20A. The frequency of such oscillations plays an important role, and such neurons act as bandpass filters, as discussed hereafter. FIG. 20B illustrates a subthreshold oscillations behavior of tonic excitatory neuron circuits according to embodiments of this presentation. The data illustrated in FIG. 20B was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
X1=5350-11, X2=5350-7
RL1=RL2=5 kΩ
C1=2 nF, C2=3 nF
V1=−1.4 V, V2=1.4 V
Current clamps at 100 μA, 120 μA, 140 μA, 160 μA, 180 μA and 200 μA are converted from voltage square pulses using a stimulation isolator with a gain of 0.1 mA/V.
FIG. 21A illustrates a theoretical integrator behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). Neurons without oscillatory potentials act as integrators. They prefer high frequency input. The higher the frequency of the input, the more likely the neuron is to fire, as illustrated in FIG. 21A. Such neurons can be useful for detecting coincident or nearly coincident spikes. FIGS. 21BA-21BB illustrate an experimental (21BA) and simulated (21BB) integration behavior of tonic excitatory neuron circuits according to embodiments of this presentation.
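The integrator behavior just described can be sketched with an event-driven leaky integrator: each input pulse deposits a sub-threshold packet of charge, and only pulses arriving close enough together sum past threshold. The charge q, leak time constant, and threshold below are illustrative choices (not the measured circuit values), with pulse timings loosely echoing the 11 μs and 29 μs doublet periods used for FIGS. 21BA-21BB.

```python
import math

def integrator_fires(pulse_times_us, q=0.6, tau_us=30.0, v_th=1.0):
    """Event-driven leaky integrator.  Each input pulse deposits a charge q
    on the membrane, which then leaks as exp(-dt/tau) between pulses.  A
    single pulse is subthreshold (q < v_th), so the neuron fires only when
    pulses arrive close enough together for their residues to sum past v_th."""
    v, t_prev = 0.0, None
    for t in sorted(pulse_times_us):
        if t_prev is not None:
            v *= math.exp(-(t - t_prev) / tau_us)   # leak since last pulse
        v += q
        if v >= v_th:
            return True        # threshold crossed: the neuron fires
        t_prev = t
    return False

# Two doublets, as in FIGS. 21BA-21BB (periods of 11 us and 29 us):
close_doublet_fires = integrator_fires([0.0, 11.0])   # charges sum: fires
wide_doublet_fires = integrator_fires([0.0, 29.0])    # leaks away: silent
```

With these illustrative constants the close doublet leaves 0.6·e^(-11/30)+0.6 ≈ 1.02 ≥ 1 on the membrane and fires, while the wide doublet only reaches ≈ 0.83 and stays silent, capturing the coincidence-detection property of FIG. 21A.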
The data illustrated in FIGS. 21BA-21BB was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5251-13, X2=5251-9
RL1=RL2=6 kΩ
C1=8.5 nF, C2=2 nF (plus stray capacitance ˜1 nF for each)
V1=−1.4 V, V2=1.4 V
Input voltage pulse width=6 μs
Pulse doublet 1: pulse period=11 μs
Pulse doublet 2: pulse period=29 μs
FIG. 22A illustrates a theoretical bistability behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). Some neurons can exhibit two stable modes of operation: resting and tonic spiking (or even bursting). An excitatory or inhibitory pulse can switch between the modes, as in FIG. 9P, thereby creating an interesting possibility for bistability and short term memory. It is to be noted that to switch from the tonic spiking to resting mode, the input must arrive at an appropriate phase of oscillation, thereby emphasizing the importance of spike-timing in such information processing. FIG. 22B illustrates a bistability behavior of a tonic excitatory neuron circuit 20 such as illustrated in FIG. 3A. FIG. 22B illustrates the operation of an embodiment where at a pulse interval of 154 μs, the second pulse successfully switches the neuron from tonic spiking to resting mode; and at a pulse interval of 155 μs, the second pulse fails to switch the neuron from tonic spiking to resting mode. The probability (success rate) of the second input pulse switching off the self-oscillation vs. the pulse interval is calculated from statistics of 8 to 10 such attempts. At a pulse interval of 154 μs, the success rate is 100%. At a pulse interval of 155 μs, the success rate dropped to 62.5%. The success rate peaks at around a 154 μs interval, and it drops off as the interval is detuned away.
The data illustrated in FIG. 22B was measured for a tonic neuron circuit 20 having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 devices: X1=5352-1, X2=5252-13
RL1=0 kΩ, RL2=7 kΩ
C1=1.5 nF, C2=2 nF (plus stray capacitance ˜1 nF for each)
V1=−1.58 V, V2=1.58 V
Current input pulses of 80 μA were converted from 0.8 V, 15 μs voltage pulses using a stimulation isolator with a gain of 0.1 mA/V.
FIG. 23A illustrates a theoretical inhibition-induced spiking behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). A bizarre feature of many thalamo-cortical neurons is that they are quiescent when there is no input, but fire when hyper-polarized by an inhibitory input or an injected current, as illustrated in FIG. 23A. This happens because the injected current activates the h-current and de-inactivates the calcium T-current, leading to tonic spiking. FIGS. 23BA-23BB illustrate an experimental and simulated inhibition-induced spiking behavior of excitatory tonic neuron circuits according to embodiments of this presentation. The data illustrated in FIGS. 23BA-23BB was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 devices: X1=5251-13, X2=5251-9
RL1=RL2=6 kΩ
C1=6 nF, C2=2 nF (plus stray capacitance ˜1 nF for each)
V1=−1.4 V, V2=1.4 V
Current clamp input of −90 μA is converted from a voltage square pulse of −0.9 V using a stimulation isolator with a gain of 0.1 mA/V.
FIG. 24A illustrates a theoretical inhibition-induced bursting behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719).
Instead of spiking, a thalamo-cortical neuron can fire tonic bursts of spikes in response to a prolonged hyperpolarization, as illustrated in FIG. 24A. It is believed that such bursting takes place during spindle-wave oscillations in the thalamo-cortical system and that it plays an important role in sleep rhythms. FIG. 24B illustrates an inhibition-induced bursting firing of tonic excitatory neuron circuits according to embodiments of this presentation. The data illustrated in FIG. 24B was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5251-13, X2=5251-9
RL1=RL2=6 kΩ
C1=35 nF, C2=0 nF (plus stray capacitance ˜1 nF for each)
V1=−1.4 V, V2=1.4 V
Current clamp input of −70 μA is converted from a voltage square pulse of −0.7 V using a stimulation isolator with a gain of 0.1 mA/V
No single neuron is supposed to exhibit all of the neurocomputational properties discussed above, at least because some of the properties are mutually exclusive. For example, a neuron cannot be an integrator and a resonator at the same time. However, neuron circuits according to this presentation can easily be tuned to exhibit one property or another. For example, the measurements illustrated above were obtained using a neuron having easily tunable circuit parameters. FIG. 25A illustrates a theoretical excitation block behavior of a biological neuron. The FitzHugh-Nagumo model explains the excitation block phenomenon, i.e., the cessation of repetitive spiking as the amplitude of the stimulus current increases. When the stimulus current is weak or zero, the equilibrium (intersection of the nullclines) is on the left (stable) branch of the V-nullcline, and the model is resting. Increasing the stimulus shifts the nullcline upward and the equilibrium slides onto the middle (unstable) branch of the nullcline. The model exhibits periodic spiking activity in this case.
Increasing the stimulus further shifts the equilibrium to the right (stable) branch of the N-shaped nullcline, and the oscillations are blocked (by excitation!). The precise mathematical mechanism involves the appearance and disappearance of a limit cycle attractor (see for example: E. M. Izhikevich and R. FitzHugh, http://www.scholarpedia.org/article/FitzHugh-Nagumo_model). FIG. 25B illustrates an excitation block behavior of tonic excitatory neuron circuits according to embodiments of this presentation. Excitation block is caused by the so-called supercritical Andronov-Hopf bifurcation phenomenon. Currently, there is as yet no theory to predict the operating domain for this behavior in memristor neurons; nonetheless, such behavior was experimentally observed for neuron circuits according to embodiments of this presentation. It is noted that although the input current is a linear ramp waveform, the monitored current flowing through RL1 shows "glitches" caused by the back action of action potentials (spikes) toward RL1. The data illustrated in FIG. 25B was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5251-13, X2=5251-9
RL1=RL2=6 kΩ
C1=0 nF, C2=2 nF (plus stray capacitance ˜1 nF for each)
V1=−1.4 V, V2=1.4 V
Current ramp input up to 150 μA is converted from a voltage ramp up to 1.5 V using a stimulation isolator with a gain of 0.1 mA/V.
FIG. 26A illustrates a theoretical phasic spiking behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). A neuron may fire only a single spike at the onset of the input as illustrated in FIG. 26A, and remain quiescent afterwards. Such a response is called phasic spiking, or Class 3 excitability, and it is useful for detection of the beginning of stimulation.
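The FitzHugh-Nagumo excitation-block mechanism discussed above in connection with FIG. 25A can be checked with a short numerical integration. The parameters a=0.7, b=0.8, eps=0.08 are the classic textbook values for this model; the stimulus levels, step size, and amplitude thresholds below are illustrative choices for the sketch.

```python
def fhn_amplitude(I, T=300.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Forward-Euler integration of the FitzHugh-Nagumo model
        v' = v - v^3/3 - w + I
        w' = eps * (v + a - b * w)
    Returns the peak-to-peak swing of v after transients have died out:
    large during periodic spiking, near zero at rest or in excitation block."""
    v, w = -1.2, -0.625          # start near the I=0 resting equilibrium
    vs = []
    n = int(T / dt)
    for k in range(n):
        v += dt * (v - v ** 3 / 3.0 - w + I)
        w += dt * eps * (v + a - b * w)
        if k >= n // 2:          # keep only the second, post-transient half
            vs.append(v)
    return max(vs) - min(vs)

# Sweeping the stimulus current reproduces the excitation-block sequence:
rest = fhn_amplitude(0.0)        # weak stimulus: resting, tiny swing
spiking = fhn_amplitude(0.5)     # moderate stimulus: periodic spiking
blocked = fhn_amplitude(2.0)     # strong stimulus: oscillations blocked
```

At I=0 the equilibrium sits on the left stable branch of the V-nullcline, at I=0.5 it sits on the unstable middle branch (a limit cycle appears), and at I=2 it has moved to the right stable branch, so the oscillation amplitude collapses again, which is the excitation block.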
FIGS. 26BA-26BB illustrate an experimental (26BA) and simulated (26BB) phasic spiking behavior of a phasic excitatory neuron circuit 20′ such as illustrated in FIG. 4A. A current source is used to send a current clamp as illustrated in FIG. 26A to a neuron circuit 20′ as illustrated in FIG. 4A. Output is measured on output node 44 of circuit 20′. The data illustrated in FIGS. 26BA-26BB was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 devices: X1=5352-1, X2=5252-13
RL1=0, RL2=7 kΩ
C1=1 nF, C2=2 nF (plus stray capacitance ˜1 nF for each)
V1=−1.6 V, V2=1.6 V
RL1 is replaced by a capacitor Cin=0.3 nF
Current clamp input is converted from a voltage square pulse using a stimulation isolator with a gain of 0.1 mA/V
FIG. 27A illustrates a theoretical phasic bursting behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). Similarly to the phasic spikers, some neurons are phasic bursters, and fire as illustrated in FIG. 27A. Such neurons report the beginning of the stimulation by transmitting a burst. There are three major hypotheses regarding the importance of bursts in the brain, which are: (1) bursts are needed to overcome synaptic transmission failure and reduce neuronal noise; (2) bursts can transmit saliency of the input, because the effect of a burst on the postsynaptic neuron is stronger than the effect of a single spike; and (3) bursts can be used for selective communication between neurons, where the inter-spike frequency within the bursts encodes the channel of communication. A good model of a cortical neuronal network cannot neglect bursting neurons. FIGS. 27BA-27BB illustrate an experimental (27BA) and simulated (27BB) phasic bursting behavior of a phasic excitatory neuron circuit according to embodiments of this presentation.
A current source is used to send a current clamp as illustrated in FIG. 27A to a neuron circuit 20′ as illustrated in FIG. 4A. Output is measured on output node 44 of circuit 20′. The data illustrated in FIGS. 27BA-27BB was measured for a neuron circuit according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5352-1, X2=5252-13
RL1=0, RL2=7 kΩ
C1=4 nF, C2=0 nF (plus stray capacitance ˜1 nF for each)
V1=−1.6 V, V2=1.6 V
RL1 is replaced by a capacitor Cin=0.3 nF
Current clamp input is converted from a voltage square pulse using a stimulation isolator with a gain of 0.1 mA/V
FIG. 28A illustrates a theoretical accommodation behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). Neurons are extremely sensitive to brief coincident inputs, but may not fire in response to a strong but slowly increasing input, as illustrated in FIG. 28A. The slowly ramped current illustrated does not elicit a spike, while a smaller but sharply ramped current elicits a spike. During the slow ramp, the inward currents have enough time to inactivate and the outward currents have enough time to activate, so the neuron accommodates, becomes less excitable and cannot generate a spike. A circuit such as used to obtain the data of FIGS. 26, 27 can be used to test a neuron 20′ such as illustrated in FIG. 4A, and provide results such as illustrated in FIG. 28B, and to test a neuron 20″ such as illustrated in FIG. 4B, and provide results such as illustrated in FIG. 28C. FIG. 28B illustrates an accommodation behavior of a phasic excitatory neuron circuit according to embodiments of this presentation. Three current ramps are applied to the input of the neuron, and the neuron fires only when the slope of the ramp is steep enough.
The data illustrated in FIG. 28B was measured for a neuron circuit 20′ according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5352-1, X2=5252-13
RL1=0, RL2=7 kΩ
C1=1 nF, C2=0 nF (plus stray capacitance ˜1 nF for each)
V1=−1.68 V, V2=1.68 V
RL1 is replaced by a capacitor Cin=0.3 nF
Current ramp input is converted from a voltage ramp waveform using a stimulation isolator with a gain of 0.1 mA/V
FIG. 28C illustrates an accommodation behavior of a phasic excitatory neuron circuit according to embodiments of this presentation. Three ramps are applied to the input of the neuron, and the neuron fires only when the slope of the ramp is steep enough. The data illustrated in FIG. 28C was measured for a neuron circuit 20″ according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5152-13, X2=5152-9
RL1=RL2=7 kΩ
C1=0 nF, C2=1 nF (plus stray capacitance ˜1 nF for each)
V1=−1.5 V, V2=1.5 V
Cin=0.3 nF inserted before RL1
Current ramp input is converted from a voltage ramp waveform using a stimulation isolator with a gain of 0.1 mA/V
FIG. 29A illustrates a theoretical rebound spike behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004). DOI: 10.1109/TNN.2004.832719). When a neuron receives, and is then released from, an inhibitory input, it may fire a post-inhibitory (rebound) spike, as illustrated in FIG. 29A. The neuron then acts as a "rise edge" detector. This phenomenon is related to the anodal break excitation in excitable membranes. Many spiking neurons can fire in response to brief inhibitory inputs, thereby blurring the difference between excitation and inhibition. FIGS. 29B, 29C and 29D illustrate rebound spike behaviors of a phasic excitatory neuron circuit 20′ such as illustrated in FIG. 4A.
As illustrated in FIG. 29C, in the measured neuron circuit there exists a threshold amplitude of 0.5 V for the negative inhibitive input pulse to elicit a rebound spike, above which an action potential is generated at the rise edge of the inhibitive input pulse. FIG. 29C also shows the measured Na gate potentials: the Na gate potential produces a spikelet (downward arrow) at the rise edge of the inhibitive voltage input, triggering the Na gate and subsequently the K gate openings, and an action potential generation. The Na spikelet has to cross zero and become positive for the action potential to be triggered. As illustrated in FIG. 29D, in the measured neuron circuit there exists a threshold input pulse duration of ~5 μs for a rebound spike to occur; a shorter inhibitive input pulse (4 μs) will not elicit a rebound spike at its rise edge. The data illustrated in FIGS. 29B, 29C and 29D was measured for a neuron circuit 20′ according to an embodiment of this presentation, having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5352-1, X2=5252-13
RL1 replaced by Cin=0.3 nF, RL2=5.9 kΩ
C1=0 nF, C2=1 nF (plus stray capacitance ~1 nF for each)
V1=−1.5 V, V2=1.5 V
Inhibitory voltage input pulse height=−0.4 V to −0.6 V
Inhibitory voltage input pulse width: 4 μs to 50 μs
FIG. 30A illustrates a theoretical rebound burst behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004), DOI: 10.1109/TNN.2004.832719). Some neurons, including the thalamo-cortical cells, may fire post-inhibitory bursts, as illustrated in FIG. 30A. It is believed that such bursts contribute to the sleep oscillations in the thalamo-cortical cells.
FIGS. 30BA-30BB illustrate an experimental (30BA) and simulated (30BB) rebound burst behavior of a phasic excitatory neuron circuit 20′ such as illustrated in FIG. 4A.
The data illustrated in FIGS. 30BA-30BB was measured for a neuron circuit 20′ having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5352-1, X2=5252-13
RL1 replaced by Cin=0.3 nF, RL2=5.9 kΩ
C1=0 nF, C2=0.5 nF (plus stray capacitance ~1 nF for each)
V1=−1.5 V, V2=1.5 V
Inhibitory voltage input pulse height=−0.5 V
Inhibitory voltage input pulse width=10 μs
FIG. 31A illustrates a theoretical threshold variability behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004), DOI: 10.1109/TNN.2004.832719). A common misconception in the artificial neural network community is the belief that spiking neurons have a fixed voltage threshold. It is known that biological neurons have a variable threshold that depends on the prior activity of the neurons. FIG. 31A illustrates a first stimulation of a neuron with a brief excitatory pulse of current that produces a 10 mV depolarization. The neuron does not fire, hence the input is subthreshold. Then a brief inhibitory pulse is applied, followed by the same "subthreshold" pulse of current as sent in the first place. The neuron fires the second time because its "threshold" was lowered by the preceding inhibitory input. Hence, the same 10 mV can be subthreshold or supra-threshold depending on the prior activity. Interestingly, in some neurons according to an embodiment of this presentation, a preceding excitatory pulse can raise the threshold and make the neuron less excitable.
FIGS. 31BA-31BB illustrate an experimental (31BA) and simulated (31BB) threshold variability behavior of a phasic excitatory neuron circuit 20′ such as illustrated in FIG. 4A.
The data illustrated in FIGS. 31BA-31BB was measured for a neuron circuit 20′ having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5352-1, X2=5252-13
RL1 replaced by Cin=0.3 nF, RL2=5.9 kΩ
C1=0 nF, C2=0.5 nF (plus stray capacitance ~1 nF for each)
V1=−1.5 V, V2=1.5 V
Subthreshold excitatory & inhibitory voltage inputs
Pulse height=0.4 V, Pulse width=15 μs
Excitatory-inhibitory pulse interval=5 μs
FIG. 32 illustrates a simulated action potential generation (polarity-inverted mirror of the phasic spiking as illustrated in FIG. 26(B)) of a neuron such as illustrated in FIG. 5. Inhibitory neurons according to embodiments of this presentation can have all the spiking behaviors listed in FIG. 8 and illustrated in FIGS. 9A-9W and 10B-10R, where each behavior is a polarity-inverted mirror of the excitatory behavior.
FIG. 33A illustrates a theoretical Depolarizing After-Potentials behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004), DOI: 10.1109/TNN.2004.832719). After firing a spike, the membrane potential of a neuron may exhibit a prolonged after-hyperpolarization (AHP) as, e.g., in FIG. 10B or 10M, or a prolonged depolarized after-potential (DAP) as in FIG. 10Q. Such DAP behaviors can appear because of dendritic influence, because of high-threshold inward currents activated during the spike, or because of an interplay between sub-threshold voltage-gated currents. In any case, such a neuron has a shortened refractory period and becomes superexcitable. It is noted that DAP is described in the Izhikevich paper cited above. However, differently from Izhikevich's paper, the DAP behavior observed for a neuron according to this presentation requires a phasic neuron circuit and a D.C. input current instead of a short current pulse.
The left panel of FIG. 33B illustrates that in a phasic excitatory neuron circuit, the original hyperpolarizing after-potential gradually morphs into a DAP (downward arrow) at higher input current levels. The right panel of FIG. 33B illustrates that when the observed DAP is developed, the neuron becomes superexcitable, and a small further increase of input current elicits a second spike. The data illustrated in FIG. 33B was measured for a phasic neuron circuit 20′ having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 devices: X1=5352-1, X2=5252-13
RL1 replaced by Cin=0.3 nF, RL2=6 kΩ
C1=0.9 nF, C2=2 nF (plus stray capacitance ~1 nF for each)
V1=−1.3 V, V2=1.3 V
Current clamp input is converted from a voltage square pulse using a stimulation isolator with a gain of 0.1 mA/V
FIG. 34A illustrates a theoretical Mixed Mode behavior of a biological neuron (see E. M. Izhikevich, "Which model to use for cortical spiking neurons?", IEEE Trans. Neural Netw. 15, 1063 (2004), DOI: 10.1109/TNN.2004.832719). Intrinsically bursting excitatory neurons in the mammalian neocortex can exhibit a mixed type of spiking activity as depicted for example in FIG. 34A. Such neurons fire a phasic burst at the onset of stimulation and then switch to a tonic spiking mode. It is not clear yet what type of computation such a neuron can do in addition to detecting the onset and reporting the extent of the stimulus.
FIGS. 34B-D illustrate the schematics of a phasic mode neuron circuit 20′ according to this presentation, a tonic mode neuron circuit 20 according to this presentation and a mixed mode neuron circuit 20′″ according to this presentation, as well as their compared responses to a same current clamp stimulus. It is noted that circuit 20′″ differs from circuits 20 and 20′ by having, as its input impedance, a first load capacitor Cin in parallel with a first load resistor RL1, instead of only a first load resistor RL1 (circuit 20) or only a first load capacitor Cin (circuit 20′), respectively.
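The qualitative effect of the three input impedances on a step stimulus can be sketched numerically. This is only an illustrative first-order picture (the Cin and RL1 values are borrowed from the parameter list below for flavor), not a simulation of the measured VO2 circuits:

```python
import numpy as np

def input_drive(mode, n=1000, dt=1e-6, cin=1e-9, rl1=240e3):
    """Drive delivered to the neuron node by a unit step stimulus,
    for the three input-impedance configurations (illustrative model)."""
    step = np.ones(n)               # unit step stimulus (arbitrary units)
    cap = np.zeros(n)
    cap[0] = cin / dt               # C*dV/dt: couples only the step edge
    res = step / rl1                # V/R: couples the sustained level
    if mode == 'phasic':            # circuit 20': capacitive input (Cin)
        return cap
    if mode == 'tonic':             # circuit 20: resistive input (RL1)
        return res
    return cap + res                # circuit 20''': Cin in parallel with RL1
```

The capacitive path produces drive only at the stimulus onset (phasic burst), the resistive path produces sustained drive (tonic spiking), and the parallel combination produces both, matching the phasic-burst-then-tonic behavior described for circuit 20′″.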
As illustrated in FIG. 34B, a phasic neuron circuit 20′ with a capacitive input impedance has a phasic bursting behavior; a tonic neuron circuit 20 with a resistive input impedance has a tonic spiking behavior; and a mixed-mode neuron circuit 20′″ with a parallel Cin and RL1 input impedance has a phasic burst followed by a tonic spiking behavior. Their simulated spiking behaviors agree well with the experimental data. The data illustrated in FIGS. 34B-D was measured for a mixed mode neuron circuit 20′″ having the following characteristics and with the following stimuli:
Lot ID: 29E, Wafer ID: L29E-3
VO2 device ID: X1=5351-11, X2=5351-7
RL1=240 kΩ, Cin=1 nF, RL2=9 kΩ
C1=4 nF, C2=1.2 nF (plus stray capacitance ~1 nF for each)
V1=−1.6 V, V2=1.6 V
Current clamp input is converted from a voltage square pulse using a stimulation isolator with a gain of 0.1 mA/V
The foregoing Detailed Description of exemplary and preferred embodiments is presented for purposes of illustration and disclosure in accordance with the requirements of the law. It is not intended to be exhaustive nor to limit the invention to the precise form(s) described, but only to enable others skilled in the art to understand how the invention may be suited for a particular use or implementation. The possibility of modifications and variations will be apparent to practitioners skilled in the art. No limitation is intended by the description of exemplary embodiments which may have included tolerances, feature dimensions, specific operating conditions, engineering specifications, or the like, and which may vary between implementations or with changes to the state of the art, and no limitation should be implied therefrom. Applicant has made this presentation with respect to the current state of the art, but also contemplates advancements, and adaptations in the future may take into consideration those advancements, namely in accordance with the then-current state of the art.
Reference to a feature element in the singular is not intended to mean “one and only one” unless explicitly so stated. Moreover, no element, component, nor method or process step in this presentation is intended to be dedicated to the public regardless of whether the element, component, or step is explicitly recited in this presentation. No element disclosed herein is to be construed under the provisions of 35 U.S.C. Sec. 112, sixth paragraph, unless the element is expressly recited using the phrase “means for . . . ” and no method or process step herein is to be construed under those provisions unless the step, or steps, are expressly recited using the phrase “comprising the step(s) of . . . .” | 79,928 |
11861489

DESCRIPTION OF THE EMBODIMENTS

In order to make the purpose, technical solutions and advantages of the disclosure clearer, the disclosure will be further described in detail in combination with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the disclosure, and are not intended to limit the disclosure. In addition, the technical features involved in the various embodiments of the disclosure described below can be combined with each other as long as there is no conflict with each other.
The on-chip learning performed by the convolutional neural network not only can overcome the influence of device variability, but is also more in line with the learning characteristics observed in biology. Furthermore, the on-chip learning of a convolutional neural network can modify the weights according to the task to be performed, thus having good flexibility. Therefore, it is necessary to realize the hardware implementation, the integration of storage and calculation, and the on-chip learning of the convolutional neural network.
The disclosure provides an on-chip learning system of a convolutional neural network based on non-volatile memory, including: an input module, a convolutional neural network module, an output module, and a weight update module. The on-chip learning of the convolutional neural network module implements a synaptic function by using the characteristic that the conductance of the memristor changes according to the applied pulses, and the convolution kernel value or synaptic weight value is stored in the memristor unit.
The input module converts the input signal into the input voltage pulse signal required by the convolutional neural network, and then transmits the result to the convolutional neural network module; the convolutional neural network module performs a layer-by-layer calculation and conversion on the input voltage pulse signal corresponding to the input signal, and transmits the result to the output module to obtain the output of the entire network; the output module is connected to the convolutional neural network module and the weight update module respectively, and is configured to convert and send the output signal generated by the convolutional neural network module to the weight update module; the weight update module calculates and adjusts the conductance of the memristors according to the result of the output module, thereby updating the network convolution kernel values or synaptic weight values.
Optionally, the input module converts the external input signal into the voltage signal required by the convolutional neural network. The pulse width or pulse amplitude of the voltage pulse signal follows a proportional relationship with the input signal: the larger the input signal value, the wider the pulse width (or the larger the pulse amplitude) of the corresponding voltage pulse signal; otherwise, the narrower (or smaller) the corresponding voltage signal. In addition, the input voltage pulse signal should be lower than the erase voltage of the memristor, so that reading does not disturb the stored conductance.
Optionally, the convolutional neural network module adopts memristors to simulate the convolution kernel values and the synaptic weight values, and the resistance of a memristor is changed as an electrical signal is applied. The convolutional neural network module includes: a convolutional layer circuit module and a pooling layer circuit module, which are composed of memristor arrays as the convolution kernels, and a fully connected layer circuit module composed by using the memristors as synapses.
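The proportional encoding performed by the input module can be sketched as follows. This is a minimal illustration; the maximum read voltage and erase voltage used here are assumed example values, not figures from the disclosure:

```python
def encode_input(value, max_value=255.0, v_read_max=0.3, v_erase=1.0):
    """Map an input sample (e.g. a pixel) to a read-pulse amplitude.

    The amplitude is proportional to the input value and is kept below
    the memristor erase voltage so that reading does not disturb the
    stored conductance. v_read_max and v_erase are illustrative values.
    """
    amplitude = (value / max_value) * v_read_max
    assert amplitude < v_erase      # never exceed the erase voltage
    return amplitude
```

The same scheme applies to pulse-width encoding, with the duration rather than the amplitude scaled by the input value.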
The convolutional layer circuit module receives the input voltage pulse signal output by the input module; the input voltage pulse signal is processed through the layer-by-layer calculation and conversion performed by the convolutional layer circuit module, the pooling layer circuit module, and the fully connected layer circuit module, and the calculation result is sent to the output module.
Optionally, the convolutional layer circuit module composed by using the memristor array as the convolution kernel is made of a convolution operation circuit composed of the memristor array and an activation function part. Since there are positive and negative weight values in the biological nervous system, the circuit adopts two rows of the memristor array as a convolution kernel to achieve positive and negative convolution kernel values. Meanwhile, in order to obtain all the convolution operation results in one step, without the need for a complex intermediate storage layer, when the initial convolution kernel value corresponds to the memristor conductance value, the convolution kernel value is mapped to a matrix capable of performing a matrix multiplication operation with the entire input signal. The convolution kernel is expanded into two large sparse matrices K+ and K−. Correspondingly, the characteristic that the memristor is capable of being applied with positive and negative read voltage pulses is utilized to convert the input signal into two one-dimensional matrices, with positive input X and negative input −X. The convolution operation circuit performs the convolution operation on the input voltage pulse signal and the convolution kernel values stored in the memristor units, and collects the current of the same column to obtain the convolution operation result.
The convolution operation process is y = f((X ⊗ K+) + (−X ⊗ K−) + b), wherein ⊗ is the convolution operator, X is the input voltage signal of the front synapse of the neuron node, and K+ and K− are respectively the positive and negative convolution kernel values corresponding to the neuron node, so that the effective convolution kernel value is (K+)−(K−) and positive and negative convolution kernel values can be realized; b is the bias term corresponding to the convolutional layer network, and f(.) is the activation function. Then, the output result is transmitted to the pooling layer module. The activation function f(.) mainly includes: the sigmoid function, tanh function, ReLU function, ELU function and PReLU function. The activation function activates the convolution operation result and obtains two opposite output values, namely y and −y, and simultaneously converts the two opposite output values y and −y into voltage pulse signals to be used as the input of the pooling layer.
Optionally, the pooling layer circuit module composed by using the memristor array as the convolution kernel mainly performs either an average pooling operation or a maximum pooling operation, and is composed of a pooling operation circuit composed of the memristor array and a voltage conversion module. The pooling operation is a simpler convolution operation: the convolution kernel values stored in the memristor array remain unchanged during the training process. The circuit structure and the distribution of the convolution kernel mapping are the same as in the convolutional layer circuit module; only the stored convolution kernel values are changed. One end of the memristors in the same row is connected together to connect the output of the convolutional layer circuit module, and the other end of the memristors in the same column is connected together to connect the voltage conversion module.
The voltage conversion module converts the result of the pooling operation circuit into two opposite voltage pulse signals h and −h to serve as the input of the fully connected layer circuit module.
Optionally, the fully connected layer circuit module composed by using the memristor array as synapses realizes the classification function. The fully connected layer circuit module is composed of the fully connected layer circuit composed of the memristor array and the softmax function part. Since the neurons in the fully connected layer and the neurons in the pooling layer are fully connected, the weight mapping methods of the fully connected layer circuit module and the convolutional layer circuit module are different. The fully connected layer circuit is configured to store and calculate the weight matrix, and only completes a series of multiplication accumulation operations without shifting the weight matrix. Two memristors are used as a synapse to realize positive and negative weight values. One end of the memristors is connected to the pooling layer circuit unit, and the other end of the memristors is connected to the softmax function; the current of the same column is collected to obtain the output result of the layer. The output result is m_l = Σ_k ((W_kl+ − W_kl−) · h_k + b_k) and z_l = e^(m_l) / Σ_j e^(m_j). In the equations, h_k is the input voltage pulse signal of the front synapse of the kth neuron node, W_kl+ and W_kl− are respectively the positive and negative synaptic weight values of the kth input of the lth neuron node stored in the memristors, and the effective weight value of the synapse is W_kl+ − W_kl−, thereby realizing positive and negative synaptic weight values; b_k is the bias term corresponding to the kth neuron node; m_l represents the lth element output through the operation of the fully connected layer circuit; Σ_j e^(m_j) is the exponential sum of all output signal elements; z_l is the probability output value corresponding to the signal m_l after being processed through the softmax function.
The softmax function realizes z_l = e^(m_l) / Σ_j e^(m_j), that is, the function of normalizing the values output by the fully connected layer to probability values; the result is then transmitted to the output module to obtain the output of the entire convolutional neural network, and the result is sent to the weight update module.
Optionally, the weight update module includes a result comparison module, a calculation module, and a driving module. The result comparison module is connected to the output module and the calculation unit respectively; the result comparison module compares the output result of the current convolutional neural network module with the ideal result, and sends the comparison result to the calculation module. The calculation module is connected to the result comparison module and the driving circuit respectively; the calculation module receives the error signal δ sent from the result comparison module, calculates the adjustment amount of the network convolution kernel values or weight values according to the determined neural network backpropagation algorithm, and then sends the result to the driving unit. The driving unit includes a pulse generator and a read-write circuit; the driving unit receives the adjustment amount of the convolution kernel values or weight values sent from the calculation unit, and adjusts the conductance of the memristors of the convolutional layer circuit unit and the fully connected layer circuit unit. The pulse generator is configured to generate a modulation signal for adjusting the conductance of the memristors. The read-write circuit is configured to complete the read and write operations on the convolution kernel values or synaptic weight values of the convolutional neural network module based on the memristors.
FIG. 1 is a schematic structural diagram of a convolutional neural network on-chip learning system based on non-volatile memory provided by an embodiment of the disclosure.
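The last step of the update path, translating a backpropagation-computed weight adjustment into programming pulses for the driving unit, can be sketched as below. The per-pulse conductance step and pulse-count limit are assumed illustrative values, and real devices would need a nonlinear pulse-to-conductance mapping:

```python
def conductance_pulses(delta_w, g_step=1e-6, max_pulses=100):
    """Translate a desired weight change into programming pulses.

    Positive changes become potentiation (SET) pulses and negative
    changes become depression (RESET) pulses, as the pulse generator of
    the driving unit would emit. g_step is an assumed per-pulse
    conductance change; max_pulses caps a single update.
    """
    n = min(int(round(abs(delta_w) / g_step)), max_pulses)
    polarity = 'SET' if delta_w > 0 else 'RESET'
    return polarity, n
```

In a full loop, `delta_w` would come from the calculation module's backpropagation step (e.g. `-learning_rate * gradient`), and the read-write circuit would verify the resulting conductance.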
As shown in FIG. 1, the system includes: an input module, a convolutional neural network module, an output module, and a weight update module. The input module converts the external input signal into the voltage signal required by the convolutional neural network. The pulse width or pulse amplitude of the voltage pulse signal follows a proportional relationship with the input signal: the larger the input signal value, the wider the pulse width (or the larger the pulse amplitude) of the corresponding voltage pulse signal; otherwise, the narrower (or smaller) the corresponding voltage signal. The voltage signal is transmitted into the convolutional neural network module. The convolutional neural network module performs a layer-by-layer calculation and conversion on the input voltage pulse signal corresponding to the input signal, and sends the result to the output module to obtain the output of the entire network. The output module is connected to the convolutional neural network module and the weight update module respectively, and is configured to convert and send the output signal generated by the convolutional neural network module to the weight update module. The weight update module calculates and adjusts the conductance values of the memristors according to the result of the output module, so as to update the network convolution kernel values or synaptic weight values.
It should be noted that the convolution operation is the most important and computationally intensive part of the convolutional neural network. As a generalized concept of integration, the convolution operation has important applications in image recognition and digital signal processing. The convolution operation is defined as follows. Starting from the upper left corner of the input matrix, open a working window with the same size as the template (i.e., the convolution kernel). The convolution kernel is typically a square grid structure, and each square in the area has a weight value.
First, rotate the convolution kernel by 180°. The window matrix and the convolution kernel elements corresponding to each other are multiplied and added together, and the calculation result is used to replace the element in the center of the window. Then, the window is moved one column to the right and the same calculation is performed. Likewise, the same operation is performed from left to right and from top to bottom until the input matrix has been completely covered by the convolution kernel, forming the new matrix after convolution. When the input matrix is m×m and the size of the convolution kernel matrix is n×n, the corresponding output matrix size is (m−n+1)×(m−n+1). FIG. 2 demonstrates the convolution calculation process of a 3×3 input matrix and a 2×2 convolution kernel to obtain a 2×2 output matrix.
FIG. 3 is a schematic diagram of a memristor unit provided by an embodiment of the disclosure. As a non-volatile device, the read-write speed, density, programming voltage and other indexes of the memristor are comparable to today's leading storage technologies, and its energy consumption is relatively low. The memory function of a memristor is similar to that of a biological synapse: its conductance can be continuously changed by applying a relatively large voltage bias, but remains unchanged when a relatively small or no bias is applied. By using different conductance values of the memristor to distinguish between different storage states, the characteristic that the conductance of the memristor changes gradually under applied pulses is utilized to simulate the change process of biological synaptic weights, which is similar to the self-adjusting and learning function of the neural network. The type of the memristor can be a two-terminal memristor, a three-terminal memristor or another common type. Moreover, the memristor can be applied with positive and negative read voltage pulses.
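The convolution definition above (180° kernel rotation, sliding window, multiply-accumulate) can be stated directly in code. This is a plain reference implementation for square inputs, not the crossbar mapping described later:

```python
import numpy as np

def conv2d(x, k):
    """Direct 2-D convolution as defined above: rotate the kernel by
    180 degrees, slide the window, multiply elementwise and sum.

    For an m x m input and an n x n kernel, the output is
    (m - n + 1) x (m - n + 1). Assumes square x and k.
    """
    kr = np.rot90(k, 2)                       # 180-degree kernel rotation
    m, n = x.shape[0], k.shape[0]
    out = np.empty((m - n + 1, m - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + n, j:j + n] * kr)
    return out
```

With a 3×3 input and a 2×2 kernel this yields the 2×2 output of the FIG. 2 example.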
Such a feature can prevent additional subtraction circuits from being introduced when realizing the positive and negative weight values, thereby reducing the circuit scale to a certain extent.
FIG. 4 is a schematic structural diagram of a convolutional layer circuit module composed of the memristor array as the convolution kernel according to an embodiment of the disclosure. The convolutional layer circuit module is composed of a convolution operation circuit composed of the memristor array and an activation function part. The figure shows a convolution operation circuit which adopts an input signal matrix of size i^(1/2) × i^(1/2), a convolution kernel of size n×n, and an output matrix of size j^(1/2) × j^(1/2) (with j^(1/2) = i^(1/2) − n + 1). One end of the memristors in the same row is connected together to connect the input module, and the other end of the memristors in the same column is connected together to connect the activation function f(.). Since there are positive and negative weight values in the biological nervous system, the circuit adopts two rows of memristor arrays as a convolution kernel to achieve positive and negative convolution kernel values. In the meantime, since the convolution kernel is shared in the convolutional layer, it is necessary to use the same convolution kernel to continuously scan the input matrix until all the elements in the input matrix are covered by the convolution kernel matrix, obtaining a series of convolution operation results. In order to obtain all the convolution operation results in one step, without the need for a complex intermediate storage layer, when the initial convolution kernel value corresponds to the memristor conductance value, the convolution kernel value is mapped to a matrix capable of performing a matrix multiplication operation with the entire input signal. The convolution kernel is expanded into two large sparse matrices K+ and K−.
Correspondingly, the characteristic that the memristor is capable of being applied with positive and negative read voltage pulses is utilized to convert the input signal into two one-dimensional matrices, with positive input X and negative input −X. Therefore, the size of the memristor array required is (2×i+1)×j. The convolution operation circuit performs the convolution operation on the input voltage pulse signal and the convolution kernel values stored in the memristor units, and collects the current in the same column to obtain the convolution operation result. The convolution operation process is y = f((X ⊗ K+) + (−X ⊗ K−) + b), wherein y is the result of the convolution operation, ⊗ is the convolution operator, X is the input voltage signal of the front synapse of the neuron node, and K+ and K− are respectively the positive and negative convolution kernel values corresponding to the neuron node, so that the effective convolution kernel value is (K+)−(K−) and positive and negative convolution kernel values can be realized; b is the bias term corresponding to the convolutional layer network, and f(.) is the activation function. In FIG. 4, X_i represents the input voltage signal, and X_b represents the input voltage signal of the bias term. Then, the output result is transmitted to the pooling layer module. The activation function f(.) mainly includes: the sigmoid function, tanh function, ReLU function, ELU function and PReLU function. The activation function activates the convolution operation result and obtains two opposite output values, y and −y, and simultaneously converts the two opposite output values y and −y into voltage pulse signals to be used as the input of the pooling layer.
In the following, a 2×2 convolution kernel matrix K and a 3×3 input signal matrix X are taken as examples to demonstrate how the convolution kernel is expanded into the large sparse matrices K+ and K−, and how the input matrix is converted into the two one-dimensional matrices, positive input X and negative input −X.
FIG. 5(a) shows how the convolution kernel matrix K based on the memristor array is converted into the matrices K+ and K− by using the proposed method. The convolution kernel is first rotated by 180° and then converted into the two matrices. The memristor corresponding to a matrix element which is 0 is left in an unformed state: it constantly maintains a high resistance state during the learning process, so the memristor array can easily express the positive and negative convolution kernel values. Since the input signal matrix X has 9 elements, the convolution kernel matrices K+ and K− must have 9 rows each.
FIG. 5(b) shows how the input matrix X is converted into the two one-dimensional matrices X and −X, and multiplied by K+ and K−, respectively. Since the size of K is 2×2 and the size of X is 3×3, the size of the output feature is 2×2. Therefore, the convolution kernel matrices must have 4 columns, each output value corresponding to one column.
FIG. 6 is a schematic structural diagram of a pooling layer circuit module composed by using the memristor array as the convolution kernel according to an embodiment of the disclosure. The pooling layer circuit module mainly performs either an average pooling operation or a maximum pooling operation. The entire input matrix is separated into several small blocks of the same size in a non-overlapping manner; only the maximum value or the average value is taken from each small block, the other nodes are discarded, and the original planar structure is maintained to obtain the output.
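The kernel expansion of FIG. 5 can be written out in software. The sketch below builds the sparse mapping matrix, splits it into non-negative K+ and K− (the zero entries corresponding to unformed, high-resistance devices), and checks that the crossbar-style product with [x, −x] reproduces the direct convolution; the function names are illustrative:

```python
import numpy as np

def conv2d(x, k):
    """Direct square-input convolution (180-degree rotated kernel)."""
    kr = np.rot90(k, 2)
    m, n = x.shape[0], k.shape[0]
    s = m - n + 1
    return np.array([[np.sum(x[i:i + n, j:j + n] * kr) for j in range(s)]
                     for i in range(s)])

def expand_kernel(k, m):
    """Expand an n x n kernel into sparse non-negative matrices K+ and K-
    so the convolution over an m x m input becomes one matrix product."""
    n = k.shape[0]
    s = m - n + 1
    kr = np.rot90(k, 2)
    M = np.zeros((m * m, s * s))          # one column per output element
    for oi in range(s):
        for oj in range(s):
            for a in range(n):
                for b in range(n):
                    M[(oi + a) * m + (oj + b), oi * s + oj] = kr[a, b]
    # split: positive kernel entries go to K+, magnitudes of negatives to K-
    return np.maximum(M, 0), np.maximum(-M, 0)
```

The column currents then realize x·K+ + (−x)·K− = x·(K+ − K−), i.e. all convolution outputs in a single read step, with no intermediate storage layer.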
The pooling operation can significantly and effectively reduce the size of the matrix, thereby reducing the number of parameters in the final fully connected layer. In the meantime, using the pooling layer not only speeds up the calculation but also prevents overfitting. The pooling layer circuit module is connected to the convolutional layer circuit module and the fully connected layer circuit module respectively. The pooling operation is a simpler convolution operation, and the convolution kernel values stored in the memristor array remain unchanged during the training process. Its circuit structure and the distribution of the convolution kernel mapping are the same as in the convolutional layer circuit module; only the stored convolution kernel values are changed. One end of the memristors in the same row is connected together to connect the output of the convolutional layer circuit module, and the other end of the memristors in the same column is connected together to connect the voltage conversion module. The output terminal of the voltage conversion module is connected to the fully connected layer circuit module. The current in the same column is collected together to achieve the accumulation calculation, and the result of the pooling operation can be obtained by collecting the results at the output terminal of the voltage converter. The voltage conversion module converts the result of the pooling operation circuit into two opposite voltage pulse signals, h and −h, to serve as the input of the fully connected layer circuit module. In this embodiment, a pooling operation with a matrix size of 2×2 is adopted. Since the output matrix of the convolutional layer circuit module is j^(1/2) × j^(1/2), the output matrix of the pooling layer is k^(1/2) × k^(1/2) (with k^(1/2) = (1/2) × j^(1/2)), so the memristor array size of the pooling layer circuit module is (2×j+1)×k.
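For reference, the 2×2 average pooling used in this embodiment amounts to a convolution with a fixed kernel of 0.25 entries applied with stride 2, which is why the stored conductances never change during training. A minimal sketch:

```python
import numpy as np

def avg_pool_2x2(x):
    """2x2 non-overlapping average pooling of a square matrix.

    Equivalent to convolving with a fixed kernel whose entries are all
    0.25 and striding by 2, so the pooling "weights" are constants.
    Assumes the side length of x is even.
    """
    m = x.shape[0]
    out = np.empty((m // 2, m // 2))
    for i in range(0, m, 2):
        for j in range(0, m, 2):
            out[i // 2, j // 2] = x[i:i + 2, j:j + 2].mean()
    return out
```

A j^(1/2) × j^(1/2) input thus becomes a (j^(1/2)/2) × (j^(1/2)/2) output, halving each dimension as stated above.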
After the pooling operation is completed, the result is sent to the fully connected layer circuit module, wherein h_k in FIG.6 represents the pooling operation result, and y represents the output result of the convolutional layer unit. FIG.7is a schematic structural diagram of a fully connected layer circuit module composed by using the memristors as synapses provided by an embodiment of the disclosure. The fully connected layer circuit module is connected to the pooling layer circuit module and the output module respectively. The fully connected layer circuit module maps the final output to a linearly separable space, that is, it realizes the classification function. The fully connected layer circuit module is composed of a fully connected layer circuit composed of the memristor array and a softmax function part. Since the fully connected layer completes a series of simple multiplication accumulation operations in the perceptron network, its neurons are fully connected to the neurons in the pooling layer. Therefore, the weight mapping methods of the fully connected layer circuit module and the convolutional layer circuit module are different: the fully connected layer circuit is configured to store and calculate the weight matrix, whereas the convolutional operation circuit is configured to store and calculate a set of convolution kernel arrays. Likewise, two memristors are adopted as a synapse to achieve positive and negative weight values. One end of the memristors in the same row is connected together to the output of the pooling layer circuit module, and the other end of the memristors in the same column is connected together to the softmax function. The current of the same column is collected to obtain the output result of the layer. The output result is m_l = Σ_k((W_kl+ − W_kl−)·h_k + b_k), and z_l = e^(m_l) / Σ_j e^(m_j).
In the equation, h_k is the input voltage pulse signal of the front synapse of the kth neuron node; W_kl+ and W_kl− are respectively the positive and negative synaptic weight values of the kth input of the lth neuron node stored in the memristors, and the effective weight value of the synapse is W_kl+ − W_kl−, thereby realizing positive and negative synaptic weight values. b_k is the bias term corresponding to the kth neuron node; m_l represents the lth element output through the operation of the fully connected layer circuit; Σ_j e^(m_j) is the exponential sum of all output signal elements; z_l is the probability output value corresponding to the signal m_l after being processed through the softmax function. The softmax function realizes z_l = e^(m_l) / Σ_j e^(m_j), that is, the function of normalizing the values output by the fully connected layer to probability values. Since the size of the output matrix of the pooling layer is k^(1/2)×k^(1/2), if there are l classifications in the final classification, the size of the memristor array of the fully connected layer circuit in this embodiment is (2×k+1)×l. Then the result is transmitted to the output module to obtain the output of the entire network, and the result is sent to the weight update module. In FIG.7, h_b represents the input signal corresponding to the bias term. In the following, the 3×3 weight matrix is adopted as an example to demonstrate how to map the weight matrix into two matrices W+ and W−. FIG.8shows how to use the proposed method to convert the 3×3 weight matrix W based on the memristor array into the two one-dimensional matrices W+ and W−. In the two matrices, the memristor corresponding to a matrix element of 0 is in an un-formed state and always maintains a high resistance state during the learning process, so the memristor array can easily realize the positive and negative weight values. FIG.9is a schematic diagram of a weight update module provided by an embodiment of the disclosure.
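The fully connected computation m_l = Σ_k((W_kl+ − W_kl−)·h_k + b_k) and the softmax normalization z_l = e^(m_l)/Σ_j e^(m_j) described above can be sketched in NumPy as follows (the shapes and random values are illustrative assumptions, and the bias is simplified to one term per output neuron):

```python
import numpy as np

rng = np.random.default_rng(0)
k, l = 4, 3                        # pooled inputs h_k and output classes
W = rng.normal(size=(k, l))        # signed weights, W = W+ − W−
W_plus = np.maximum(W, 0.0)        # stored in one memristor of each pair
W_minus = np.maximum(-W, 0.0)      # stored in the other memristor
b = rng.normal(size=l)             # one bias per output (simplified b term)
h = rng.random(k)                  # pooled input voltage signals h_k

m = h @ (W_plus - W_minus) + b     # weighted sums m_l from column currents
z = np.exp(m - m.max())            # subtract the max for numerical stability
z = z / z.sum()                    # softmax: z_l = e^(m_l) / sum_j e^(m_j)
```

Subtracting the maximum before exponentiating leaves the softmax result unchanged while avoiding overflow, a standard numerical-stability choice.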
The weight update module is connected to the output module and the convolutional neural network module respectively, and includes a result comparison unit, a calculation unit, and a driving unit. The result comparison unit is connected to the output module and the calculation unit respectively; it compares the output result of the current convolutional neural network module with an ideal result and sends the comparison result δ to the calculation unit. The calculation unit is connected to the result comparison unit and the driving unit respectively; it receives the error signal δ sent from the result comparison unit, calculates the adjustment amount Δ of the network convolution kernel value or weight value according to the predetermined neural network backpropagation algorithm, and then sends the result to the driving unit. The driving unit includes a pulse generator and a read-write circuit; it receives the adjustment amount of the convolution kernel value or weight value sent from the calculation unit and adjusts the conductance of the memristors of the convolutional layer circuit unit and the fully connected layer circuit unit. The pulse generator is configured to generate a modulation signal for adjusting the conductance of the memristors; the read-write circuit completes the read and write operations on the convolution kernel values or connection weights of the convolutional neural network module based on the memristors. In the following, the operation of the convolutional layer circuit module is taken as an example to demonstrate how the convolution kernel value is updated during the learning process by using the memristor array as a convolution kernel. FIG.10is a schematic circuit diagram of the operation of the memristor array of the convolutional layer circuit module in the weight update process provided by an embodiment of the disclosure.
First, a column in the array is selected to adjust the conductance values of the memristors. The column line of the memristors in the column is grounded, and the row lines are applied with different conductance adjustment voltage pulse signals. The amplitude of the applied voltage pulse signal is fixed, and the number of pulses is proportional to the adjustment amount Δ of the convolution kernel value or synaptic weight value, thereby updating the convolution kernel value or the synaptic weight value. The same process is applied to the other columns, so that the conductance of the entire memristor array can be adjusted. The update process for the weights in the fully connected layer circuit module is similar to the process described above. Those skilled in the art can easily understand that the above are only preferred embodiments of the disclosure and are not intended to limit the disclosure. Any modification, equivalent replacement and improvement made within the spirit and principle of the disclosure should fall within the scope of the disclosure.
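The column-by-column update described above, in which the pulse count applied to each row line is proportional to the adjustment amount Δ, might be sketched as follows (the conductance step g_step and the array sizes are invented stand-ins, not values from the disclosure):

```python
import numpy as np

def program_column(G, col, delta, g_step=0.01):
    """Ground one column and apply fixed-amplitude pulses on the row lines;
    the pulse count for each cell is proportional to its adjustment Δ."""
    n_pulses = np.rint(delta / g_step).astype(int)   # pulses per row line
    G[:, col] += n_pulses * g_step                   # net conductance change
    return n_pulses

G = np.zeros((3, 2))                       # toy 3x2 conductance array
delta = np.array([0.05, -0.02, 0.031])     # desired Δ for the selected column
pulses = program_column(G, 0, delta)
```

Grounding one column at a time means only that column's cells integrate the programming pulses, so the whole array can be updated column by column.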
11861490 | DETAILED DESCRIPTION Network services can be powerful tools that allow clients to perform a wide variety of processing operations. For example, image analysis algorithms can be applied to many domains, such as medical or health care, social networks, autonomous driving, and others. With advances in artificial intelligence, machine learning, and related applications, more and more users are engaging with such systems. Wide adoption, however, can be hindered in part because not all users in these domains have sufficient time or resources to deploy state-of-the-art solutions. The features described in this application provide an end-to-end solution to generate services including reinforcement learning models for users with little or no prior knowledge of simulation or artificial intelligence techniques, or limited access to data to train a useful model. Reinforcement learning (RL) is similar to supervised learning, but on a continuously changing dataset. It is desirable to minimize the number of updates to the model based on each sampled dataset. RL generates models based on Markov Decision Processes (MDPs) or Partially Observable Markov Decision Processes (POMDPs). RL models may consider five categories of parameters at every time step: an environment, a state, an action, a reward, and an observation. The state may indicate the information about the past that is relevant to the future. As an example, consider a robot agent that can move in any direction at any time step. The position of the robot is its state, since once the system knows where the robot is located, the system need not understand how the robot got there. An action refers to what an agent does. In the above robot example, the chosen direction of motion at some time may be the action. The environment is the world the agent lives in. The primary function of the environment is, given an action and the current state, to move the system to the next state and emit a reward.
In the robot example, a new state may be the new position of the robot, and the reward may be one if a hidden treasure is found, and zero otherwise. The goal of RL is to learn a policy that maps from states to an action such that the agent maximizes its long-term reward. This could be, for example, how quickly the robot finds the treasure. RL model training may be based on the agent's interactions with its environment. In many applications, the environment can be modeled as a simulator. The simulator can create the labels which may be used to train the model. A simulation environment may include an agent and a simulator. For example, a car racing game can be considered as a simulator. A convolutional neural network (CNN) that consumes images from the simulator and generates actions to control the game controller may represent the agent. With multiple simulations, the environment generates training data of the form <state_t, action, state_t+1, reward_t+1>. The definition of the reward is not trivial and can impact the RL model quality. The reward functions may be one aspect that can be identified from previous training to expedite the generation of new models. Features are described to perform training and/or simulation as part of the generation of an RL model. Training and simulation may be integrated within a single environment or decoupled for execution onto different environments. The decoupled execution can help with independent scaling of the two aspects, depending on algorithmic and application-specific simulation requirements. Decoupling also enables use of virtual environments as a simulation environment, or integration with legacy simulators, such as simulators that only run on specific operating systems or are hosted at network locations remote to the training environment. FIG.1depicts an embodiment of an environment for providing networked machine learning services. The environment100may include a client device102configured for networked communications.
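The robot-and-treasure example and the <state_t, action, state_t+1, reward_t+1> training tuples described above can be illustrated with a toy one-dimensional environment (all names and sizes here are invented for illustration, not part of the disclosure):

```python
import random

GRID, TREASURE = 5, 4     # illustrative 1-D grid; treasure at the last cell

def step(state, action):
    """Environment: given an action and the current state, move the system
    to the next state and emit a reward (1 at the hidden treasure)."""
    next_state = max(0, min(GRID - 1, state + action))
    reward = 1 if next_state == TREASURE else 0
    return next_state, reward

random.seed(0)
state, experience = 0, []
for t in range(20):                           # a random agent gathering data
    action = random.choice([-1, 1])           # chosen direction of motion
    next_state, reward = step(state, action)
    # one training tuple of the form <state_t, action, state_t+1, reward_t+1>
    experience.append((state, action, next_state, reward))
    state = next_state
```

A real agent would replace the random choice with a learned policy, but the experience tuples collected have exactly the shape the training data described above requires.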
Via a network104, the client device102may access a machine learning service120. The machine learning service120may be hosted by a physical server attached to a network address or a virtual server operating within a virtual private cloud or virtual private network. The client device102may access the machine learning service120through an interface. The interface may be a machine interface, sometimes referred to as an application programming interface. The machine interface allows an application executing on the client device102to transmit machine-readable messages to the machine learning service120to request machine learning models. In some implementations, the interface may be a graphical user interface including control elements to collect information for a training request and transmit the training request to the machine learning service120. A machine learning management component126may be included in the machine learning service120. The machine learning management component126may process the incoming training requests as described herein. For example, the machine learning management component126may instantiate an RL training cluster122for training the requested model. The machine learning management component126may instantiate a simulation environment124for the requested model. Because multiple clients may be transmitting different training requests, the machine learning management component126may monitor multiple RL training clusters and/or simulation environments. The machine learning management component126may dynamically instantiate the RL training clusters or simulation environments based on the training request. For example, it may be determined, based on previous RL model training, that a specific hardware configuration provides sufficient processing resources to generate a model at a desired accuracy in a specified period of time.
The machine learning management component126may identify the parameters for instantiating the RL training cluster and/or simulation environment124. The models that are generated may be stored in an RL model data store128. The models may be stored in association with an identifier for the client and the model. This can allow the client to request a hosted model service that can process requests using the model. For example, the client may submit a request including the identifier for the model and the machine learning service120may create a virtual server attached to a network address that will forward packets received to the model. In response, the output of the model will be forwarded to the requesting device or an address specified by the requesting device. Training data may be used to generate the model. The training data may be provided by the client device in or associated with the training request. The training data may be collected from the simulation environment124. For example, the machine learning service120may monitor the simulation environment124while executing for a training request. Event data monitored during the execution, or provided via the training request, may be stored in a training data store130. In some implementations, a training request may specify a custom simulator to use for training. The custom simulator may be executing in a client environment106. The training request may include information to connect to the custom simulator such as a network address, login credentials, and the like. In such instances, the simulation environment124may proxy simulation inputs and outputs with the custom simulator via messages to the client environment106. FIG.2depicts one embodiment of an architecture of an illustrative client device102, such as a personal computer, tablet computer, smartphone, or other device, that can generate content requests and process content requests in accordance with the present application. 
The general architecture of the client device102depicted inFIG.2includes an arrangement of computer hardware and software components that may be used to implement aspects of the present disclosure. As illustrated, the client device102includes a processing unit204, a network interface206, a computer readable medium drive208, an input/output device interface209, an optional display202, and an input device224, all of which may communicate with one another by way of a communication bus. In various embodiments, components such as the display202and/or the input device224may be integrated into the client device102, or they may be external components that are coupled to the device102. The network interface206may provide connectivity to one or more networks or computing systems, such as the network104ofFIG.1. The processing unit204may thus receive information and instructions from other computing systems or services via a network. The processing unit204may also communicate to and from memory210and further provide output information for an optional display202via the input/output device interface220. The input/output device interface209may also accept input from the optional input device224, such as a keyboard, mouse, digital pen, etc. In some embodiments, the client device102may include more (or fewer) components than those shown inFIG.2. The memory210may include computer program instructions that the processing unit204executes in order to implement one or more embodiments. The memory210generally includes RAM, ROM, or other persistent or non-transitory memory. The memory210may store an operating system214that provides computer program instructions for use by the processing unit204in the general administration and operation of the client device102. The memory210may further include computer program instructions and other information for implementing aspects of the present disclosure. 
For example, in one embodiment, the memory210includes a network application216, such as a browser application or media player, for accessing and requesting models via the machine learning service120. In other embodiments, the memory210may include separate interface software212for facilitating the creation and configuration of RL models for a user. FIG.3depicts one embodiment of an architecture of an illustrative server for implementing the machine learning service management component126of the RL modeling environments described. The general architecture of the machine learning service management component126depicted inFIG.3includes an arrangement of computer hardware and software components that may be used to implement aspects of the present disclosure. As illustrated, the machine learning service management component126includes a processing unit304, a network interface306, a computer readable medium drive308, and an input/output device interface309, all of which may communicate with one another by way of a communication bus. The components of the machine learning service management component126may be physical hardware components or implemented in a virtualized environment. The network interface306may provide connectivity to one or more networks or computing systems, such as the network104ofFIG.1. The processing unit304may thus receive information and instructions from other computing systems or services via a network. The processing unit304may also communicate to and from memory310and further provide output information for an optional display via the input/output device interface309. In some embodiments, the machine learning service120may include more (or fewer) components than those shown inFIG.3. The memory310may include computer program instructions that the processing unit304executes in order to implement one or more embodiments. The memory310generally includes RAM, ROM, or other persistent or non-transitory memory.
The memory310may store an operating system314that provides computer program instructions for use by the processing unit304in the general administration and operation of the machine learning service120. The memory310may further include computer program instructions and other information for implementing aspects of the present disclosure. For example, in one embodiment, the memory310includes interface software312for receiving and processing content requests from client devices102and managing RL training and simulation environments responsive to a training request. Additionally, the memory310includes a training environment processing component316for instantiating a training environment based on criteria associated with the requesting client device102, training request, and the like. The training environment may include a simulator selected by the machine learning service120based at least in part on the training request. The memory310includes a training cluster execution component318for instantiating a simulation environment including a simulator for training the requested model. FIG.4is a block diagram showing one embodiment for integrated RL training. The machine learning service120shown inFIG.4is configured for integrated training. Integrated training generally refers to a common environment for the RL training cluster122and the simulation environment124. The client device102may submit a training request and the machine learning service management component126may instantiate the RL training cluster122and the simulation environment124as described. FIG.5is a block diagram showing one embodiment of a decoupled RL training. In contrast toFIG.4, the machine learning service120may not instantiate a simulation environment but instead access a client device102′ as the environment to train the RL model. The environment may be a virtual environment simulated by the client device102′. The environment may include the physical world as detected by the client device102′. 
The decoupled nature of the embodiment shown inFIG.5arises from the distribution of the environment simulation aspect of the training to a device separated from the RL training cluster122. FIG.6is a block diagram showing an environment for generating a hosted model service from a modeling request. The environment600includes several entities which communicate to generate the hosted model service690. The hosted model service690shown inFIG.6receives an image as an input and generates an image processing result (sometimes referred to as an action) as an output. In some embodiments, the image processing result includes a set of actions, each associated with a confidence or ranking. For example, if the image provided to the hosted model service690shows a video game state, the hosted model service690may provide an image processing result indicating different actions to take with associated probabilities of maximizing the reward. In a case where the video game is being played against a player with a lower difficulty setting, a non-optimal but plausible action may be desired over the action that maximizes the reward. In the embodiment shown inFIG.6, the creation of the hosted model service690is initiated by a modeling request602. The modeling request602includes states, actions, or observation(s). In some embodiments, the modeling request602includes training data as part of the request. In some embodiments, the modeling request602includes a reference to the training data, such as a network location of a data source storing the training data. The modeling request602may include descriptive model metadata that indicates the objects or task associated with the requested model. The modeling request602optionally includes an identifier for the client requesting the model. The identifier of the client may be used to identify a domain to which the requested model will apply. The domain may be identified based on a profile stored in a data store for the client.
The hosted modeling service690may be deployed in a virtual private cloud or other virtualized environment. The virtualized environment may be instantiated within an execution container allocated for the domain associated with the identifier. The client device102transmits the modeling request602to a machine learning service120. The machine learning service120interprets the modeling request602and coordinates the generation of the hosted modeling service690for the modeling request602. In previous systems, a model may be trained to perform the task specified in the modeling request602. However, training each model from scratch for each request can be time or resource intensive. Embodiments of the present disclosure can avoid this inefficiency and high resource demand by allocating resources for training (e.g., virtualized environment settings), identifying simulators and training data, and tuning parameters of the model based on models previously trained or deployed in the environment600. To address training inefficiencies, the machine learning service120may identify training parameters for previously trained models. The parameters may be stored in a training data store680. For example, if the training data store680includes parameters for a previously trained model associated with descriptive metadata corresponding to the descriptive metadata provided in the modeling request602, the hyperparameters, simulator, or cluster configuration used for training the previous model may be used to train the requested model. Metadata, such as domain information, may be associated with a client requesting the previously trained models and used to identify a previously trained model. As used herein, a “data store” may be embodied in hard disk drives, solid state memories and/or any other type of non-transitory computer-readable storage medium accessible to or by a device such as an access device, server, or other electronic computing device described.
A data store may also or alternatively be distributed or partitioned across multiple local and/or remote storage devices as is known in the art without departing from the scope of the present disclosure. In yet other embodiments, a data store may include or be embodied in a data storage web service. In some embodiments, the parameters may be selected based on frequency of access, inclusion in other generated models, or other model metrics. In some embodiments, metrics for the models may be used to identify which of the multiple models to select. The metrics may be generated based on interaction data with machine learning services associated with different models. For example, if a model is used many times over a period of time as compared to another model, the model's utilization may indicate that the model is superior to other models. Alternative or additional metrics that are used to select models include the ranking of a model for use in servicing previous requests or a similarity between data used to train the models. For example, if the modeling request602includes reference images of a particular size, data type (e.g., GIF, PNG, JPG, MPG), or quality, the size or quality is compared with the size or quality of the data used to train the models identified in the training data store680. In some embodiments, parameters for the model associated with the training data whose size or quality is most similar to the size or quality of the reference images are selected. Based on one or more of the factors described, the machine learning service120may identify the training parameter(s) to use for generating the new machine learning model for the modeling request602. After generating the new machine learning model, the machine learning service120shown in the environment600ofFIG.6may store the trained model in the RL model data store128. An identifier may be associated with the trained model to aid in identifying the model.
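The parameter-reuse step described above, which selects a previously trained model whose training data most resembles the request's reference images and reuses its stored training parameters, might be sketched as follows (the store contents, field names, and similarity measure are hypothetical):

```python
# Hypothetical store of previously trained models and their parameters.
previous_models = [
    {"id": "m1", "image_size": 64,  "hyperparameters": {"lr": 1e-3}},
    {"id": "m2", "image_size": 224, "hyperparameters": {"lr": 1e-4}},
    {"id": "m3", "image_size": 128, "hyperparameters": {"lr": 5e-4}},
]

def select_parameters(reference_size, models):
    """Reuse the parameters of the model whose training-image size is most
    similar to the size of the request's reference images."""
    best = min(models, key=lambda m: abs(m["image_size"] - reference_size))
    return best["id"], best["hyperparameters"]

model_id, params = select_parameters(100, previous_models)
```

A production service would combine several of the factors listed above (utilization, ranking, data type, quality) rather than image size alone, but the lookup structure is the same.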
The machine learning service120may generate the hosted model service690based on the trained model. Generating the hosted model service690may include creating a service instance to receive image requests which are processed using the trained reinforcement learning model to provide image processing results. The hosted model service690may obtain the RL model from the RL model data store128based on the identifier for the RL model. FIG.7is a block diagram showing an environment for simulation based reinforcement learning training. The environment700illustrates the relationship between an RL training environment710and a learning environment750. The environment700may be a virtual environment whereby the RL training environment710and the learning environment750are executing on virtual hardware in a virtual private cloud. The RL training environment710may be created by the machine learning service120. The RL training environment710may include a cluster of one or more virtual hardware instances for training the RL model. The cluster configuration parameters such as the number of nodes in the cluster, virtual hardware specification for one or more nodes (e.g., emulated hardware, memory, bandwidth, etc.) may be specified using historical cluster configuration parameters for previously trained models. The RL environment may be tailored to generate a convolutional neural network (CNN) model. The CNN model may receive, as inputs, state and reward information and generate, as outputs, a vector of actions. Each action may be associated with a probability of yielding the highest reward. In some implementations the CNN model may be used to pre-process environment data such as images or other detected environment information. The pre-processing may, for example, extract features from the provided input data which may then be used by the RL model to generate a next action. The RL training environment710may generate the CNN using provided training data. 
The training data may be stored in a provided training data store760. The training data may include images, such as images of a video game display or a road. The training data may include text or other alphanumeric data. The training may include instantiating the learning environment750. The learning environment750may be integrated with or decoupled from the RL training environment, as discussed above. The learning environment750may include a cluster of one or more virtual hardware instances for training the RL model. The cluster configuration parameters such as the number of nodes in the cluster, virtual hardware specification for one or more nodes (e.g., emulated hardware, memory, bandwidth, etc.) may be specified using historical cluster configuration parameters for a simulator used to previously train similar models. Parameters for the learning environment750may be selected from a library of environments based on the training request. In some implementations, the learning environment750may include an environment interface to broker communications with an environment such as an external system or other sensing device. The learning environment750may include an RL model agent752and a simulator engine754. The RL model agent752may provide an executable agent within the learning environment750. For each time step of a simulation period, the RL model agent752may receive state information, such as an image of the environment, and a reward. Based on these inputs, the RL model agent752will identify an action to perform. This action is passed to the simulator engine754. The simulator engine754may then adjust the RL model agent752within the environment simulated by the simulator engine754. The adjustment may include moving a robot the specified distance or turning the wheel of an automobile by a specified number of degrees.
Once the adjustment is applied, the simulator engine754may advance the simulation to the next step and generate a new representation of the environment along with some reward for the agent's action. For example, if the reward function measures the distance of a vehicle from the sides of a road, the more centered the vehicle is, the higher the reward for actions that keep the vehicle centered. This action-reward loop may continue for a specified number of steps, for a period of processing time, until a quantity of event data is generated, or until another event is detected by the learning environment750. The event data generated during the simulation may be stored in a simulated training data store765. This data may provide feedback to the RL training environment710to further train a model. For example, the CNN may be trained using the time series of actions and rewards, where each action is taken in response to a previous state. FIG.8is a process diagram showing one embodiment of a method for generating an RL model. The method800shown inFIG.8may be implemented or controlled in whole, or in part, by a device such as the machine learning service management component126described. The method800shows how a training request may be processed to generate an RL model to a desired specification. The method800begins at block802. At block804, the coordination device may receive a request for a model. The request may specify a reinforcement learning type to be used for training the model. The request may include other parameters such as actions, states, environments, simulators, descriptive metadata, or other information to indicate to the coordination device the desired RL model. The request may be received from a client device using a GUI or API. At block806, the coordination device may instantiate an agent in a learning environment. The environment may include a virtual device hosting a simulator.
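The lane-centering reward mentioned above can be written as a simple function of the vehicle's lateral position (the linear form and the road dimensions are illustrative assumptions, not the disclosure's reward function):

```python
def centering_reward(position, road_width):
    """Reward for the lane-centering example: highest at the road center,
    falling linearly to zero at either edge (position measured from the
    left edge; the linear shape is an illustrative choice)."""
    center = road_width / 2.0
    offset = abs(position - center) / center   # 0 at center, 1 at an edge
    return max(0.0, 1.0 - offset)

r_center = centering_reward(5.0, 10.0)   # vehicle exactly centered
r_edge = centering_reward(0.0, 10.0)     # vehicle at the road edge
```

Because the reward rises as the vehicle approaches the center, actions that keep it centered accumulate more reward over the action-reward loop described above.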
The agent may include a CNN configured to receive an image as an input and generate a vector of values. The values in the vector may each be associated with a feature recognized in the image, or each may be associated with an action that the agent is likely to take within the learning environment. In some implementations, the CNN may be trained using reward information for a previous action associated with a previous state of the actor. The training may include backpropagation or other machine learning techniques. At block 808, the coordination device may detect event data from the learning environment. The detection of event data may include receiving information sensed within the learning environment, such as images or location information (e.g., coordinates). The detection of event data may include training events such as rewards or actions taken. The detection may include maintaining temporal information for the event data. This allows reconstruction of a time series of events and actions which can be used to train the RL model at block 810. At block 812, the coordination device may determine whether the RL model is adequate. Adequacy may be assessed based on a target accuracy for the RL model. The adequacy may be assessed based on resources used to operate the model (e.g., memory, speed of processing an input request). The target accuracy may be specified as a total reward earned by the agent. The target accuracy may be specified as a reward minimization whereby the RL model is trained to abstain from negative actions and models associated with lower rewards are favored. If the determination is negative, the method 800 may return to block 806 to repeat aspects of the training process in an attempt to improve the accuracy of the RL model. If the determination at block 812 is affirmative, at block 814, the coordination device may transmit the RL model for processing requests.
The transmission may include transmitting the RL model to the client device 102 or the client environment 106. In some implementations, the transmission may include storing the RL model in a model data store. In some implementations, the transmission may include instantiating a hosted model service that receives requests at a network address for processing by the RL model. The method 800 may then end at block 890. It will be appreciated that the method 800 may be repeated to retrain the RL model, such as based on new training data, a different simulator, or a different learning environment. The method 800 may be repeated to train a new RL model according to a different training request from the same or different client devices. FIG. 9 is a process diagram showing one embodiment of a method for configuring devices for generating an RL model. The method 900 shown in FIG. 9 may be implemented or controlled in whole, or in part, by a device such as the machine learning service management component 126 described. The method 900 shows how physical or virtual resources may be configured to generate an RL model. The method 900 begins at block 902. At block 904, the coordination device may identify an environment for training an RL model. The identification at block 904 may be based at least in part on a training request received from, for example, a client device. The environment may include an RL training cluster environment or a simulation environment. The environment may be configured to dynamically allocate the processing resources of the environment. At block 906, the coordination device may identify a reference RL model for the training request. The reference RL model may be identified based on similarities to the training request, such as training data type, actions, states, descriptive metadata, or the like. At block 908, the coordination device may instantiate an interface with the environment based at least in part on a parameter for the reference RL model.
The interface may be instantiated to provide an appropriate number of cluster nodes for training the RL model. The parameter may be identified or based on a parameter used in training the reference RL model. At block 910, the coordination device may instantiate an agent in communication with the environment interface. The instantiation of the agent may also be based on a parameter used in training the reference RL model. The instantiation may include a CNN or other machine learning element to process inputs to the RL model or as part of the RL model. In such instances, the architecture of the CNN or the RL model (e.g., actions or states) may be determined or influenced by corresponding components of the reference RL model. At block 912, the coordination device may activate the agent for a learning period. Activating the agent may include providing state information for the environment to the agent and receiving an action to change a state of the agent. In response to the state change, the environment may provide updated state information along with a reward to provide feedback to the agent regarding the desirability of a result caused by the selected action. The learning period may include iterating over several action-reward cycles. The event data generated during the learning period (e.g., action, reward, state, etc.) may be detected by the coordination device. The event data may be stored for further training of the RL model as described. The method 900 may end at block 990 but may be repeated to train additional RL models or update the training for the RL model. FIG. 10 is a process diagram showing one embodiment of a method for dynamically identifying training data for an RL model. The method 1000 shown in FIG. 10 may be implemented or controlled in whole, or in part, by a device such as the machine learning service management component 126 described. The method 1000 shows how training data may be differentially acquired for generating an RL model.
The method 1000 begins at block 1002. At block 1004, the coordination device may obtain state information and action information for an independently hosted customer network. At block 1004, the coordination device may determine whether the training is decoupled or integrated. The determination at block 1004 may be based on a training request which may include a training data source. If the determination at block 1004 is negative, at block 1006, the coordination device may obtain reward information and observation information from a simulated hosted customer network. The simulated hosted customer network may include a simulation environment managed in a virtual private cloud. If the determination at block 1004 is affirmative, at block 1008, the coordination device may obtain reward information and observation information from the hosted customer network. For example, the customer environment may include a simulator or environment manager that can be accessed using a programming interface (e.g., API). This allows training of RL models by the machine learning service using externally hosted (e.g., decoupled) training resources. At block 1010, the coordination device may process the training data in accordance with a reinforcement learning model to form a machine learning model. The processing at block 1010 may include generating or updating a CNN for feature detection. The processing at block 1010 may include generating or updating an RL model configured to receive current state information and reward information for a previous action and generate one or more recommended actions for the current state information. The method 1000 may end at block 1090 but may be repeated to retrain the machine learning model or train a different machine learning model. In some implementations, the machine learning service 120 may include compression features to reduce the resources needed for a model. The resources may include processing speed, processing time, or memory needed to store the model.
The compression may be implemented as a reinforcement learning process. For example, a generic network compression container may be provided. The container may be configured with a compression request identifying the model and compression criteria. In some implementations, the container may access custom interface elements such as to train a model, test the accuracy of a model, remove layers of a model, identify the number of layers in a model, identify the shape of the input data space, or identify the model reward. The machine learning service 120 may include one or more actions to compress a model and, using environments similar to those described above, generate a compression agent that takes one or more compression actions to maximize an expected future reward (e.g., an increase in speed or a decrease in memory utilization). In some implementations, a model may be compressed prior to transmission to reduce the resources needed to deploy or use the model on a target system (e.g., robot, car, device). Depending on the embodiment, certain acts, events, or functions of any of the processes or algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described operations or events are necessary for the practice of the algorithm). Moreover, in certain embodiments, operations or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of electronic hardware and executable software. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality.
Whether such functionality is implemented as hardware, or as software that runs on hardware, depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure. Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a machine learning service server, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A machine learning service server can be or include a microprocessor, but in the alternative, the machine learning service server can be or include a controller, microcontroller, or state machine, combinations of the same, or the like configured to generate and publish machine learning services backed by a machine learning model. A machine learning service server can include electrical circuitry configured to process computer-executable instructions. Although described herein primarily with respect to digital technology, a machine learning service server may also include primarily analog components. For example, some or all of the modeling, simulation, or service algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few. 
The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a machine learning service server, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An illustrative storage medium can be coupled to the machine learning service server such that the machine learning service server can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the machine learning service server. The machine learning service server and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the machine learning service server and the storage medium can reside as discrete components in a user terminal (e.g., access device or network service client device). Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. 
The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. As used herein, the terms “determine” or “determining” encompass a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing, and the like. 
As used herein, the term “selectively” or “selective” may encompass a wide variety of actions. For example, a “selective” process may include determining one option from multiple options. A “selective” process may include one or more of: dynamically determined inputs, preconfigured inputs, or user-initiated inputs for making the determination. In some embodiments, an n-input switch may be included to provide selective functionality where n is the number of inputs used to make the selection. As used herein, the terms “provide” or “providing” encompass a wide variety of actions. For example, “providing” may include storing a value in a location for subsequent retrieval, transmitting a value directly to the recipient, transmitting or storing a reference to a value, and the like. “Providing” may also include encoding, decoding, encrypting, decrypting, validating, verifying, and the like. As used herein, the term “message” encompasses a wide variety of formats for communicating (e.g., transmitting or receiving) information. A message may include a machine readable aggregation of information such as an XML document, fixed field message, comma separated message, or the like. A message may, in some embodiments, include a signal utilized to transmit one or more representations of the information. While recited in the singular, it will be understood that a message may be composed, transmitted, stored, received, etc. in multiple parts. As used herein, the term “correspond” encompasses a range of relative relationships between two or more elements. Correspond may refer to equality (e.g., match). Correspond may refer to partial-equality (e.g., partial match, fuzzy match, soundex). Correspond may refer to a value which falls within a range of values. As used herein “receive” or “receiving” may include specific algorithms for obtaining information. For example, receiving may include transmitting a request message for the information. 
The request message may be transmitted via a network as described above. The request message may be transmitted according to one or more well-defined, machine readable standards which are known in the art. The request message may be stateful in which case the requesting device and the device to which the request was transmitted maintain a state between requests. The request message may be a stateless request in which case the state information for the request is contained within the messages exchanged between the requesting device and the device serving the request. One example of such state information includes a unique token that can be generated by either the requesting or serving device and included in messages exchanged. For example, the response message may include the state information to indicate what request message caused the serving device to transmit the response message. As used herein “generate” or “generating” may include specific algorithms for creating information based on or using other input information. Generating may include retrieving the input information such as from memory or as provided input parameters to the hardware performing the generating. Once obtained, the generating may include combining the input information. The combination may be performed through specific circuitry configured to provide an output indicating the result of the generating. The combination may be dynamically performed such as through dynamic selection of execution paths based on, for example, the input information, device operational characteristics (e.g., hardware resources available, power level, power source, memory levels, network connectivity, bandwidth, and the like). Generating may also include storing the generated information in a memory location. The memory location may be identified as part of the request message that initiates the generating. 
In some embodiments, the generating may return location information identifying where the generated information can be accessed. The location information may include a memory location, network location, file system location, or the like. As used herein a “user interface” (also referred to as an interactive user interface, a graphical user interface or a UI) may refer to a network based interface including data fields and/or other controls for receiving input signals or providing electronic information and/or for providing information to the user in response to any received input signals. A UI may be implemented in whole or in part using technologies such as hyper-text mark-up language (HTML), FLASH™, JAVA™, .NET™, web services, and rich site summary (RSS). In some embodiments, a UI may be included in a stand-alone client (for example, thick client, fat client) configured to communicate (e.g., send or receive data) in accordance with one or more of the aspects described. While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As can be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of certain embodiments disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
11861491

DETAILED DESCRIPTION

The following discussion is presented to enable any person skilled in the art to make and use the technology disclosed, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed implementations will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other implementations and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.

Introduction

Disclosed are computational models that alleviate the effects of human ascertainment biases in curated pathogenic non-coding variant databases by generating pathogenicity scores for variants occurring in the promoter regions (referred to herein as promoter single nucleotide variants (pSNVs)). The numbers that follow are given relative to a promoter sequence of length 3001 bases. These numbers can vary in alternative implementations. As the length of the promoter sequence changes, so will the number of possible combinations. First, a benign training dataset of largely common benign pSNVs from human and non-human primates can be constructed based on the observation that common variants in other primate species are largely clinically benign in humans. At the time of this application, 8,048,977 pSNVs were observed and labeled as benign. To obtain an unlabeled training dataset that complements the benign training dataset, all possible variants from each unobserved base position in the promoter regions are generated by substituting the base at the position with the other three bases. At the time of this application, 108,000,000 unlabeled pSNVs were generated. Variants located in homopolymer regions, low-complexity regions, and overlapping coding regions are excluded.
In some implementations, deep learning networks (referred to herein as pathogenicity classifiers) are trained using a semi-supervised approach to discriminate between a set of labeled benign variants and an unlabeled set of variants that were matched to remove biases. The unlabeled training dataset is likely to be a mixture of benign and pathogenic pSNVs. By treating the substitutionally generated variants as unlabeled data, the pathogenicity classifiers learn the distributions of benign and pathogenic variants without needing an explicit pathogenic training set. In some implementations, a set of unlabeled variants is sampled with replacement, using weighted sampling to match the benign variants with respect to trinucleotide context distribution and local GC-content distribution (to control for mutational rate, genetic drift, and gene conversion), and sequence coverage distribution (to adjust for the impact of alignability and sequence coverage on variant ascertainment). Balanced sampling of unlabeled variants helps remove biases that are unrelated to the pathogenicity of the variant. In the absence of proper control of confounding effects, deep learning networks can easily pick up on inadvertently introduced biases to discriminate between the classes. Because the number of unlabeled variants greatly exceeds the labeled benign variants, a consensus prediction can be obtained by training an ensemble of the deep learning networks that use the same set of labeled benign variants and separately sampled sets of unlabeled variants. The consensus is formed by taking the average of their predictions on inference data comprising all observed and unobserved pSNVs. The ensemble can have 10, 100, or 200 deep learning networks. The deep learning networks can be convolutional neural networks or recurrent neural networks, or a combination of the two. Sets of variants are randomly sampled for validation and testing, which can be withheld from training.
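The consensus step described above, averaging each ensemble member's prediction for a variant, can be sketched as follows. The variant identifiers and per-classifier scores are made-up placeholders, not real model outputs.

```python
# Consensus scoring: each classifier in the ensemble scores the same
# variants, and the consensus is the mean of the per-classifier scores.
variant_ids = ["chr1:1000:A>T", "chr1:1002:C>G", "chr2:500:G>A"]
ensemble_scores = [
    [0.91, 0.12, 0.55],  # classifier 1 (placeholder scores)
    [0.87, 0.20, 0.49],  # classifier 2
    [0.95, 0.08, 0.60],  # classifier 3
]

# Transpose so each inner tuple holds one variant's scores, then average.
consensus = [
    sum(scores) / len(ensemble_scores)
    for scores in zip(*ensemble_scores)
]
print(dict(zip(variant_ids, (round(s, 2) for s in consensus))))
```

With 10 to 200 classifiers the structure is identical; only the number of rows in `ensemble_scores` grows.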
Numerous training examples are produced to train the deep learning networks. Each training example corresponds to an input promoter sequence that contains reference bases at observed positions, unobserved-sampled positions, and unobserved-unsampled positions. The input is supplemented with output scores of protein binding affinity and DNA accessibility inducing networks. Although predictions of protein binding or DNA accessibility do not directly translate to pathogenicity predictions, models trained to predict binding or DNA accessibility can discover informative patterns of the DNA sequence. Such patterns can therefore be used to pre-train pathogenicity classifiers, thus further improving the ability of our models to learn from unlabeled data. Each training example is annotated with sparsely encoded ground truth data with base-wise and position-wise labels for each input promoter sequence, including blank, benign, or pathogenic labels to identify variations from the reference bases. From the training, trained pathogenicity classifiers can be derived, which, in a single invocation during inference, produce pathogenicity scores for each of the three base variations from the reference bases. So, if the input promoter sequence contains 3000 reference bases, then the inference output of a trained pathogenicity classifier includes pathogenicity scores for up to 9000 base variations. More details follow.

Training Data

FIG. 1 shows an example promoter sequence 101 of a gene. The disclosed ensemble of pathogenicity classifiers predicts pathogenicity scores for promoter single nucleotide variants (pSNVs) located in a multitude of promoter sequences. The inputs to the pathogenicity classifiers are promoter sequences, which are regulatory regions located upstream (towards the 5′ region) of the gene, adjacent to the transcription start site (TSS). They do not code for proteins and instead provide an initiation and control point for regulated gene transcription.
In one implementation, the length of the promoter sequences is 3001 bases. In other implementations, the length can be decreased or increased, for instance from 200 to 20,000 bases, or it can be adapted to specific promoter regions (e.g., be centered at the TSS). The promoter sequences are flanked by right and left context that extends outside the promoter region, including into the gene sequence that follows the promoter region (e.g., 5′ UTR regions 102, start and stop codons 103, 3′ UTR regions 104, transcription terminator 105). The flanking context can be 100 to 5000 bases. Typically, the upstream and downstream flanking contexts are equal, but that is not essential. The promoter sequences contain reference bases from one or more reference genome databases. The reference bases are one-hot encoded to conserve the position-specific information of each individual base in the promoter sequences. In one-hot encoding, each reference base is encoded with a binary vector of four bits, with one of the bits being hot (i.e., 1) while the others are off (i.e., 0). For instance, T=(1, 0, 0, 0), G=(0, 1, 0, 0), C=(0, 0, 1, 0), and A=(0, 0, 0, 1). In some implementations, an undetermined base is encoded as N=(0, 0, 0, 0). FIG. 11 shows an example promoter sequence (in yellow) with reference bases represented using one-hot encoding. When the pathogenicity classifiers, as convolutional neural networks, receive the one-hot encoded reference bases, they are able to preserve the spatial locality relationships within the promoter sequences. FIG. 2 depicts how training datasets used for training the pathogenicity classifiers are generated. First, promoter sequences in 19,812 genes are identified, according to one implementation. In some implementations, each of the 19,812 promoter sequences has 3001 base positions (not including the flanking contexts outside the promoter region), which produces 59,455,812 total base positions 201 (in grey).
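The one-hot scheme described above can be sketched as follows; `encode_promoter` is a hypothetical helper name, and the bit assignments are the ones given in the text (T, G, C, and A each set one bit, and undetermined N is all zeros).

```python
# One-hot encoding of reference bases using the bit assignments above.
ONE_HOT = {
    "T": (1, 0, 0, 0),
    "G": (0, 1, 0, 0),
    "C": (0, 0, 1, 0),
    "A": (0, 0, 0, 1),
    "N": (0, 0, 0, 0),  # undetermined base
}

def encode_promoter(sequence):
    """Encode a promoter sequence as one 4-bit vector per base,
    preserving the position of each individual base."""
    return [ONE_HOT[base] for base in sequence.upper()]

encoded = encode_promoter("TGCAN")
print(encoded)
# → [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1), (0, 0, 0, 0)]
```

A 3001-base promoter sequence thus becomes a 3001 x 4 array, the spatial form a convolutional network consumes.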
In one implementation, from the 59,455,812 total base positions 201, 8,048,977 observed pSNV positions 202 are qualified as benign positions. The 8,048,977 benign positions 202 yield 8,701,827 observed pSNVs, which form the final benign set 302, according to one implementation. In some implementations, the benign pSNVs are observed in human and non-human primate species such as chimpanzee, bonobo, gorilla, orangutan, rhesus, and marmoset. In some implementations, the criterion for inclusion in the benign set is that the minor allele frequency of an observed pSNV should be greater than 0.1%. Such a criterion produces 600,000 observed pSNVs, according to one implementation. In other implementations, the inclusion criterion does not take into account the minor allele frequencies of observed pSNVs. That is, as long as a pSNV is observed in human and the non-human primate species, it is included in the benign set and thus labeled as benign. The second inclusion strategy produces the much larger benign set of 8,701,827 observed pSNVs, according to one implementation. Further, from the 59,455,812 total base positions 201, 15,406,835 unobserved pSNV positions 203 are removed that belong to homopolymer regions, low-complexity regions, and overlapping coding positions (e.g., start or stop codons), which are considered either unreliable due to sequence-specific errors or irrelevant to the analysis of non-coding variants. Thus, in some implementations, what results is 36,000,000 unobserved pSNV positions 204, from which a total of 108,000,000 unobserved pSNVs 205 are derived by mutating each of the 36,000,000 loci to the three alternative single-base alleles. These 108,000,000 unobserved pSNVs form the final pool 205 of substitutionally generated unobserved pSNVs, according to one implementation.
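The substitutional generation of unobserved pSNVs, mutating each retained locus to the three alternative single-base alleles, can be sketched as follows. The function name and toy sequence are hypothetical; a real pipeline would operate on genome coordinates and the excluded-region masks (homopolymer, low-complexity, overlapping coding) described above.

```python
BASES = ("A", "C", "G", "T")

def substitutional_variants(sequence, excluded_positions=frozenset()):
    """Yield (position, ref_base, alt_base) for every possible single-base
    substitution, skipping excluded loci (e.g., homopolymer regions)."""
    for pos, ref in enumerate(sequence):
        if pos in excluded_positions:
            continue
        for alt in BASES:
            if alt != ref:
                yield (pos, ref, alt)

# Toy promoter fragment with one excluded locus: 4 positions remain,
# each yielding the 3 alternative alleles.
variants = list(substitutional_variants("ACGTG", excluded_positions={2}))
print(len(variants))  # → 12
```

Scaled up, 36,000,000 retained loci times 3 alternative alleles gives the 108,000,000-variant pool 205.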
Semi-Supervised Training

Because semi-supervised learning algorithms use both labeled and unlabeled instances in the training process, they can produce classifiers that achieve better performance than completely supervised learning algorithms that have only a small amount of labeled data available for training. The principle behind semi-supervised learning is that intrinsic knowledge within unlabeled data can be leveraged in order to strengthen the prediction capability of a supervised model that only uses labeled instances, thereby providing a potential advantage for semi-supervised learning. Model parameters learned by a supervised classifier from a small amount of labeled data may be steered towards a more realistic distribution (which more closely resembles the distribution of the test data) by the unlabeled data. Another challenge that is prevalent in bioinformatics is the data imbalance problem. The data imbalance phenomenon arises when one of the classes to be predicted is underrepresented in the data because instances belonging to that class are rare (noteworthy cases) or hard to obtain. Ironically, minority classes are typically the most important to learn, because they may be associated with special cases. An algorithmic approach to handle imbalanced data distributions is based on ensembles of classifiers. Limited amounts of labeled data naturally lead to weaker classifiers, but ensembles of weak classifiers tend to surpass the performance of any single constituent classifier. Moreover, ensembles typically improve the prediction accuracy obtained from a single classifier by a factor that validates the effort and cost associated with learning multiple models. Intuitively, aggregating several classifiers leads to better overfitting control, since averaging the high variability of individual classifiers also averages the classifiers' overfitting.
FIG.3illustrates one implementation of training the pathogenicity classifiers and application of the trained pathogenicity classifiers on inference data. Although existing labeled databases have a non-trivial number of entries, after removing variants of uncertain significance only a few variants remain with non-conflicting interpretations of pathogenicity. Systematic reviews have also found that these entries often have insufficient clinical evidence to support their annotated pathogenicity. Additionally, most of the variants in human curated databases tend to be within a very small set of genes, making them mismatched for variants in benign training datasets, which are ascertained genome-wide using human common variants or chimpanzee-human fixed substitutions. Given how differently the datasets were ascertained, training a supervised learning model with human-curated variants as the pathogenic set and genome-wide common variants as the benign set was considered to introduce significant biases. In some implementations, the ensemble303of pathogenicity classifiers can be trained to discriminate between a common benign set302of observed pSNVs and separate pathogenic sets301a-nof unobserved pSNVs sampled with replacement from the pool205of substitutionally generated unobserved pSNVs. The ensemble303can contain any number of pathogenicity classifiers, e.g., in the range of 1 to 200. In some implementations, at least 10 pathogenicity classifiers produce improved results. Improvements taper off, exhibiting diminishing returns as the number of pathogenicity classifiers increases to 100 or 200. Beyond about 100 pathogenicity classifiers, adding more produces only marginal improvement without representing a different approach.
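Because every classifier in the ensemble scores the same variants, their outputs can later be reduced to one average and/or maximum score per pSNV (the pathogenicity table304described later in this section). A minimal sketch, with hypothetical classifier names and variant keys as inputs:

```python
def aggregate_scores(scores_by_classifier):
    """Collect every classifier's score per variant, then reduce each
    variant's score list to (average, maximum)."""
    per_variant = {}
    for scores in scores_by_classifier.values():
        for variant, score in scores.items():
            per_variant.setdefault(variant, []).append(score)
    return {v: (sum(s) / len(s), max(s)) for v, s in per_variant.items()}

table = aggregate_scores({
    "clf_1": {"chr1:101A>C": 0.2, "chr1:102G>T": 0.9},
    "clf_2": {"chr1:101A>C": 0.8, "chr1:102G>T": 0.7},
})
```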
The pool205of substitutionally generated unobserved pSNVs and, by extension, separate pathogenic sets301a-nsampled from the pool205contain a mixture of benign and pathogenic pSNVs; however, for training purposes, their constituent variants are assigned a pathogenic label507. Also, the separate pathogenic sets301a-nare matched with the common benign set302by weighted sampling to remove biases. In some implementations, the pool205of substitutionally generated unobserved pSNVs can be referred to as the unlabeled set and the separate pathogenic sets301a-ncan be referred to as respectively sampled unlabeled sets. In one implementation, the common benign set302of 8,701,827 observed pSNVs includes human variants from the ExAC/gnomAD database and variants from six species of non-human primates. The separate pathogenic sets301a-nare respectively matched with the benign variants by weighted sampling based on trinucleotide context distribution and local GC-content distribution (to control for mutational rate, genetic drift, and gene conversion), and sequence coverage distribution (to adjust for the impact of alignability and sequence coverage on variant ascertainment). FIG.4illustrates one implementation of trinucleotide context401, local GC-content402, and sequencing coverage403distribution heatmaps of the observed pSNVs in the common benign set302. Weighted sampling is used to draw the separate pathogenic sets301a-nof unobserved pSNVs from the pool205of substitutionally generated unobserved pSNVs so that these distributions401,402, and403substantially match between the pathogenic sets301a-nand the common benign set302. In FIG.4, first an example distribution heatmap401of 192 possible combinations of bases is shown, corresponding to the first position or left (5′) flanking base, the second position or center base, the third position or right (3′) flanking base, and the variant base from three of ACGT not matching the second position.
The trinucleotide is formed by the base before the variant, the reference base of the variant, and the base after the variant. The reference base of the variant can be changed into the other three nucleotides. In total, there are 64×3=192 trinucleotide contexts. In other implementations, a trinucleotide context and its reverse complement are considered the same and the number of trinucleotide contexts is reduced to 96. That is, some of the 64×3=192 trinucleotide contexts are considered identical and are merged. Accordingly, the illustrated distribution accounts for position-specific and base-specific mutations. For example, "ACG" mutating to "AGG" is assigned its own distribution and so is "AAG". Then, an example distribution heatmap402for 10 local GC-content bins is depicted. Local GC-content can be expressed for a window (e.g., 300 bases) around a target pSNV as a percentage frequency or as a fractional value between 0 and 1. Finally, an example distribution heatmap403for 10 sequencing coverage bins is shown. The illustrated implementation creates 6400 possible bands (64 trinucleotide contexts×10 GC-content bins×10 sequencing coverage bins) that can be used to perform the weighted sampling. The common benign set302and each of the pathogenic sets301a-ncan have the same size, i.e., the size of each pathogenic set is 8,701,827 unobserved pSNVs. The weighted sampling results in the pathogenic sets301a-nhaving some common, overlapping unobserved pSNVs within a pathogenic set across sampling cycles and across pathogenic sets301a-nfor a current sampling cycle. This results in the pathogenicity classifiers having multiple initializations of the same unobserved pSNV, which in turn strengthens their classification power. In some implementations, the pathogenicity classifiers are trained over one or more epochs on a pathogenic set sampled at the current sampling cycle.
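The 6,400-band weighted sampling described above (64 trinucleotide contexts × 10 GC-content bins × 10 coverage bins) might be sketched as below. This is an illustrative simplification, not the document's implementation: the band-key encoding, bin edges, and variant tuples are assumptions.

```python
import random

BASES = "ACGT"

def band_key(trinucleotide, gc_fraction, coverage, max_coverage=100):
    """Assign a variant to one of 64 x 10 x 10 = 6,400 sampling bands:
    trinucleotide-context index, local GC-content bin, coverage bin."""
    ctx = sum(BASES.index(b) * 4 ** i for i, b in enumerate(trinucleotide))
    gc_bin = min(int(gc_fraction * 10), 9)        # 10 GC-content bins
    cov_bin = min(int(coverage / max_coverage * 10), 9)  # 10 coverage bins
    return ctx * 100 + gc_bin * 10 + cov_bin      # unique id in 0..6399

def weighted_sample(pool, benign_band_counts, k, seed=0):
    """Draw k unobserved pSNVs with replacement, weighting each candidate
    by its band's frequency in the benign set so the drawn distribution
    tracks the benign distribution."""
    rng = random.Random(seed)
    weights = [benign_band_counts.get(band_key(*v), 0) for v in pool]
    return rng.choices(pool, weights=weights, k=k)
```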
The training can continue on one or more additional pathogenic sets sampled at one or more successive sampling cycles. The training is concluded when the pathogenicity classifiers' pathogenicity score predictions on a validation set having held-out observed pSNVs and unobserved pSNVs form substantially discrete probability distribution clusters of benign and pathogenic predictions. Classifier parameters derived from the training are stored in memory. The trained classifiers are applied to produce pathogenicity scores for at least some unobserved pSNVs in the pool of substitutionally generated unobserved pSNVs. For each unobserved pSNV in the at least some unobserved pSNVs, an average and/or maximum pathogenicity score is determined from the pathogenicity scores produced by the trained pathogenicity classifiers. Then, a pathogenicity table304is generated that identifies the average and/or maximum pathogenicity score for each unobserved pSNV in the at least some unobserved pSNVs. In some implementations, the trained classifiers are also applied to produce pathogenicity scores for at least some observed pSNVs in the common benign set of observed pSNVs. For each observed pSNV in the at least some observed pSNVs, an average and/or maximum pathogenicity score is determined from the pathogenicity scores produced by the trained pathogenicity classifiers. Then, the pathogenicity table304is generated that identifies the average and/or maximum pathogenicity score for each observed pSNV in the at least some observed pSNVs.

Sparsely Encoded Ground Truth Data

FIG.5is one implementation of training the pathogenicity classifiers using sparsely encoded ground truth data510that has base-wise and position-wise labels506for observed positions503, unobserved-sampled positions501, and unobserved-unsampled positions502in input promoter sequences.
The input promoter sequences cover the observed pSNVs in the common benign set302and contain reference bases at the observed positions503, the unobserved-sampled positions501, and the unobserved-unsampled positions502. The observed positions503are positions at which the observed pSNVs in the common benign set302occurred (in green). The unobserved positions601are positions at which the substitutionally generated unobserved pSNVs in the pool205are located. The unobserved-sampled positions501are positions at which the unobserved pSNVs sampled for a particular classifier at a current sampling cycle are located (in blue). The unobserved-unsampled positions502are positions at which some of the substitutionally generated unobserved pSNVs not sampled for the particular classifier at the current sampling cycle are located (in white). A ground truth data generator (not shown) then generates the ground truth data510with base-wise and position-wise labels506for each input promoter sequence. For the observed positions503, the ground truth data510assigns a blank label511to bases that match the reference bases, assigns the blank label511to bases that are variations from the reference bases which do not match the observed pSNVs, and assigns a benign label504to bases that are variations from the reference bases which match the observed pSNVs. For the unobserved-sampled positions501, the ground truth data510assigns the blank label511to bases that match the reference bases, assigns the blank label511to bases that are variations from the reference bases which do not match the unobserved pSNVs, and assigns a pathogenic label507to bases that are variations from the reference bases which match the unobserved pSNVs. For the unobserved-unsampled positions502, the ground truth data510assigns the blank label511to all bases. 
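The base-wise and position-wise label assignment above can be sketched together with a loss that ignores blank labels, using the (1, 0) benign / (0, 1) pathogenic / (0, 0) blank softmax encodings the text specifies. This is a minimal illustration; the function names and dictionary-based inputs are assumptions, not the document's data structures:

```python
import math

BASES = "ACGT"
BLANK, BENIGN, PATHOGENIC = (0, 0), (1, 0), (0, 1)

def ground_truth(ref_seq, observed, sampled):
    """Per position and per base: benign for bases matching observed pSNVs,
    pathogenic for bases matching sampled unobserved pSNVs, blank otherwise
    (reference bases and unobserved-unsampled positions included).
    `observed` / `sampled` map position -> set of alternate bases."""
    labels = []
    for pos, ref in enumerate(ref_seq):
        row = {}
        for base in BASES:
            if base != ref and base in observed.get(pos, ()):
                row[base] = BENIGN
            elif base != ref and base in sampled.get(pos, ()):
                row[base] = PATHOGENIC
            else:
                row[base] = BLANK
        labels.append(row)
    return labels

def masked_cross_entropy(predictions, labels):
    """Cross-entropy over (benign, pathogenic) outputs; blank (0, 0)
    labels contribute no error and are excluded from the average."""
    total, count = 0.0, 0
    for (pb, pp), (yb, yp) in zip(predictions, labels):
        if (yb, yp) == BLANK:
            continue
        total += -(yb * math.log(pb) + yp * math.log(pp))
        count += 1
    return total / count if count else 0.0
```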
In some implementations, scores for the labels506are generated by a softmax classification layer and use (0, 1) softmax encoding for the pathogenic label507, (1, 0) softmax encoding for the benign label504, and (0, 0) softmax encoding for the blank label511. A trainer (not shown) then uses a gradient update training technique to train the pathogenicity classifiers to generate, in response to processing the input promoter sequences, outputs with base-wise and position-wise pathogenicity scores505that progressively approach corresponding base-wise and position-wise labels506in the ground truth data510. In some implementations, the trainer iteratively optimizes a loss function that minimizes error between the base-wise and position-wise pathogenicity scores505in the outputs and the corresponding base-wise and position-wise labels506in the ground truth data510and iteratively updates parameters of the classifiers based on the error508using backpropagation. Furthermore, for positions in the input promoter sequences, a protein binding affinity score is encoded. These scores are determined by one or more protein binding affinity predictors that are pre-trained on positive training examples of protein binding motifs and negative training examples of non-binding motifs to generate a position-wise protein binding affinity score sequence in response to processing an input sequence. The predictors can produce scores for hundreds of proteins in multiple different conditions and/or cell types. Additionally, for positions in the input promoter sequences, a DNA accessibility inducing score is encoded. These scores are determined by one or more DNA accessibility predictors that are pre-trained on positive training examples of DNA accessibility inducing motifs and negative training examples of non-inducing motifs to generate a position-wise DNA accessibility inducing score sequence in response to processing an input sequence. 
The predictors can produce scores for hundreds of DNA samples in multiple different conditions and/or cell types.

Inference

FIG.6shows one implementation of how the trained pathogenicity classifiers classify, as benign or pathogenic, base variations from reference bases occurring in the input promoter sequences at positions602covering observed pSNVs in the common benign set302and positions601covering substitutionally generated unobserved pSNVs in the pool205. The pathogenicity classifiers have a modified WaveNet-style architecture that iterates over particular locations in an input promoter sequence and over three base variations from a reference base found at a particular location. The modified WaveNet-style architecture can calculate up to 9,000 outputs for 3,000 locations in the input, as each location has up to three single base variations. The modified WaveNet-style architecture scales relatively well, because intermediate calculations are reused. The pathogenicity classifiers determine in a single invocation of the modified WaveNet-like architecture pathogenicity likelihood scores for at least one of the three base variations at a multiplicity of the particular locations in the input promoter sequence and store the pathogenicity likelihood scores determined in the single invocation. In some implementations, determining at least one of the three base variations includes determining all three of the variations. The multiplicity of the particular locations is at least 500, 1,000, 1,500, or 2,000, or ninety percent of the input promoter sequence. A trained pathogenicity classifier comprises an input module (not shown) that accepts an input promoter sequence with reference bases at positions602covering observed pSNVs in the common benign set302and positions601covering substitutionally generated unobserved pSNVs in the pool205.
The trained pathogenicity classifier also comprises a processing module (not shown) that processes the input promoter sequence through one or more layers of the pathogenicity classifier to generate an alternative representation of the input promoter sequence. In some implementations, when the trained pathogenicity classifier is a deep convolutional neural network, the layers are convolution layers with convolution filters and the alternative representation is a convolved representation. In other implementations, when the trained pathogenicity classifier is a recurrent neural network, the layers are recurrent units with gates and the alternative representation is a hidden representation. The trained pathogenicity classifier further comprises an output module (not shown) that processes the alternative representation to generate an output603which, for each position in the input promoter sequence, classifies each of three base variations from a corresponding reference base as benign or pathogenic. In some implementations, the output includes pathogenicity likelihood scores604for each of the three base variations. The trained pathogenicity classifier receives supplemental input from a protein binding affinity sub-classifier that encodes a protein binding affinity score to each position in the input promoter sequence. The trained pathogenicity classifier also receives supplemental input from a DNA accessibility sub-classifier that encodes a DNA accessibility inducing score to each position in the input promoter sequence.

Deep Learning Architecture

Regarding pathogenicity classifiers, deep neural networks are a type of artificial neural network that uses multiple nonlinear and complex transforming layers to successively model high-level features. Deep neural networks provide feedback via backpropagation, which carries the difference between observed and predicted output to adjust parameters.
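The supplemental per-position scores from the sub-classifiers described above (protein binding affinity and DNA accessibility) can be appended as extra input channels alongside a one-hot base encoding. A minimal sketch under that assumption; the channel layout and function names are illustrative:

```python
BASES = "ACGT"

def one_hot(seq):
    """Encode a sequence as an L x 4 one-hot matrix."""
    return [[1.0 if base == b else 0.0 for b in BASES] for base in seq]

def with_score_tracks(seq, binding_scores, accessibility_scores):
    """Concatenate per-position protein binding affinity and DNA
    accessibility scores onto the 4 one-hot base channels (L x 6)."""
    assert len(seq) == len(binding_scores) == len(accessibility_scores)
    return [row + [b, a] for row, b, a in
            zip(one_hot(seq), binding_scores, accessibility_scores)]

x = with_score_tracks("AC", [0.2, 0.7], [0.9, 0.1])
```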
Deep neural networks have evolved with the availability of large training datasets, the power of parallel and distributed computing, and sophisticated training algorithms. Deep neural networks have facilitated major advances in numerous domains such as computer vision, speech recognition, and natural language processing. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are components of deep neural networks. Convolutional neural networks have succeeded particularly in image recognition with an architecture that comprises convolution layers, nonlinear layers, and pooling layers. Recurrent neural networks are designed to utilize sequential information of input data with cyclic connections among building blocks like perceptrons, long short-term memory units, and gated recurrent units. In addition, many other emergent deep neural networks have been proposed for limited contexts, such as deep spatio-temporal neural networks, multi-dimensional recurrent neural networks, and convolutional auto-encoders. The goal of training deep neural networks is optimization of the weight parameters in each layer, which gradually combines simpler features into complex features so that the most suitable hierarchical representations can be learned from data. A single cycle of the optimization process is organized as follows. First, given a training dataset, the forward pass sequentially computes the output in each layer and propagates the function signals forward through the network. In the final output layer, an objective loss function measures error between the inferenced outputs and the given labels. To minimize the training error, the backward pass uses the chain rule to backpropagate error signals and compute gradients with respect to all weights throughout the neural network. Finally, the weight parameters are updated using optimization algorithms based on stochastic gradient descent. 
Whereas batch gradient descent performs parameter updates for each complete dataset, stochastic gradient descent provides stochastic approximations by performing the updates for each small set of data examples. Several optimization algorithms stem from stochastic gradient descent. For example, the Adagrad and Adam training algorithms perform stochastic gradient descent while adaptively modifying learning rates based on update frequency and moments of the gradients for each parameter, respectively. Another core element in the training of deep neural networks is regularization, which refers to strategies intended to avoid overfitting and thus achieve good generalization performance. For example, weight decay adds a penalty term to the objective loss function so that weight parameters converge to smaller absolute values. Dropout randomly removes hidden units from neural networks during training and can be considered an ensemble of possible subnetworks. To enhance the capabilities of dropout, a new activation function, maxout, and a variant of dropout for recurrent neural networks called rnnDrop have been proposed. Furthermore, batch normalization provides a new regularization method through normalization of scalar features for each activation within a mini-batch and learning each mean and variance as parameters. Given that sequenced data are multi- and high-dimensional, deep neural networks have great promise for bioinformatics research because of their broad applicability and enhanced prediction power. Convolutional neural networks have been adapted to solve sequence-based problems in genomics such as motif discovery, pathogenic variant identification, and gene expression inference. Convolutional neural networks use a weight-sharing strategy that is especially useful for studying DNA because it can capture sequence motifs, which are short, recurring local patterns in DNA that are presumed to have significant biological functions. 
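The regularization strategies described just above, weight decay and dropout, might be sketched as follows. Both functions are illustrative simplifications (the dropout shown is the common "inverted" variant), not the document's implementation:

```python
import random

def weight_decay_penalty(weights, decay=1e-4):
    """L2 penalty added to the objective loss so that weight
    parameters converge to smaller absolute values."""
    return decay * sum(w * w for w in weights)

def dropout(activations, rate=0.5, rng=None):
    """Randomly zero hidden units during training; survivors are scaled
    by 1/(1-rate) so the expected activation is unchanged."""
    rng = rng or random.Random(0)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```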
A hallmark of convolutional neural networks is the use of convolution filters. Unlike traditional classification approaches that are based on elaborately-designed and manually-crafted features, convolution filters perform adaptive learning of features, analogous to a process of mapping raw input data to the informative representation of knowledge. In this sense, the convolution filters serve as a series of motif scanners, since a set of such filters is capable of recognizing relevant patterns in the input and updating themselves during the training procedure. Recurrent neural networks can capture long-range dependencies in sequential data of varying lengths, such as protein or DNA sequences. Therefore, a powerful computational model for predicting the pathogenicity of non-coding variants can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and it is estimated that 93% of disease-associated variants lie in these regions. In some implementations, pathogenicity classifiers can be based on the architecture of residual blocks. The residual blocks comprise repeating units of convolution, interspersed with skip connections that allow information from earlier layers to skip over residual blocks. In each residual block, the input layer is first batch normalized, followed by an activation layer using rectified linear units (ReLU). The activation is then passed through an atrous convolution layer. This intermediate output from the atrous convolution layer is again batch normalized and ReLU activated, followed by another atrous convolution layer. At the end of the second atrous convolution layer, its output is summed with the original input to the residual block, which acts as a skip connection by allowing the original input information to bypass the residual block.
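The residual block just described (pre-activation ordering, dilated convolutions, additive skip connection) can be sketched with a plain 1D atrous convolution. Batch normalization is omitted here because it is not meaningful for a single toy example, and the kernels are illustrative, so this is a structural sketch rather than the document's network:

```python
def atrous_conv1d(x, kernel, rate):
    """'Same'-padded 1D convolution whose taps are spaced `rate` apart."""
    span = (len(kernel) - 1) * rate
    pad = span // 2
    xp = [0.0] * pad + list(x) + [0.0] * (span - pad)
    return [sum(k * xp[i + j * rate] for j, k in enumerate(kernel))
            for i in range(len(x))]

def relu(x):
    return [max(0.0, v) for v in x]

def residual_block(x, kernel_a, kernel_b, rate):
    """ReLU -> atrous conv -> ReLU -> atrous conv, then add the original
    input back in (the skip connection); batch norm layers omitted."""
    h = atrous_conv1d(relu(x), kernel_a, rate)
    h = atrous_conv1d(relu(h), kernel_b, rate)
    return [a + b for a, b in zip(x, h)]
```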
In such an architecture, termed a deep residual learning network by its authors, the input is preserved in its original state and the residual connections are kept free of nonlinear activations from the model, allowing effective training of deeper networks. Following the residual blocks, a softmax layer computes probabilities that translate to either the pathogenic label, the benign label, or the blank label. In some implementations, the pathogenicity classifiers are trained with an accumulated categorical cross entropy loss function using the ADAM optimizer. FIG.7illustrates one implementation of a deep convolutional neural network-based architecture template that is used to construct the pathogenicity classifiers. FIG.8depicts one implementation of a residual block that is part of the deep convolutional neural network architecture of FIG.7. In some implementations, the pathogenicity classifiers are deep convolutional neural networks that contain groups of residual blocks arranged in a sequence from lowest to highest. Each group of residual blocks is parameterized by a number of convolution filters in the residual blocks, a convolution window size of the residual blocks, and an atrous convolution rate of the residual blocks. The atrous convolution rate progresses non-exponentially from a lower residual block group to a higher residual block group, in some implementations. In other implementations, it progresses exponentially. The size of convolution window varies between groups of residual blocks, and each residual block comprises at least one batch normalization layer, at least one rectified linear unit (abbreviated ReLU) layer, at least one atrous convolution layer, and at least one residual connection. In some implementations, the dimensionality of the input is (Cu+L+Cd)×4, where Cu is a number of upstream flanking context bases, Cd is a number of downstream flanking context bases, and L is a number of bases in the input promoter sequence.
The dimensionality of the output is 4×L. In some implementations, each group of residual blocks produces an intermediate output by processing a preceding input and the dimensionality of the intermediate output is (I−[{(W−1)*D}*A])×N, where I is dimensionality of the preceding input, W is convolution window size of the residual blocks, D is atrous convolution rate of the residual blocks, A is a number of atrous convolution layers in the group, and N is a number of convolution filters in the residual blocks. FIG.9is an example deep convolutional neural network-based architecture used to construct the pathogenicity classifiers. This architecture is used when the input has 200 upstream flanking context bases (Cu) to the left of the input sequence and 200 downstream flanking context bases (Cd) to the right of the input sequence. The length of the input sequence (L) can be arbitrary, such as 3001. In this architecture, each residual block in a first group has 32 convolution filters, 11 convolution window size, and 1 atrous convolution rate, and each residual block in a second group has 32 convolution filters, 11 convolution window size, and 4 atrous convolution rate. In other architectures, each residual block has 32 convolution filters, 11 convolution window size, and 1 atrous convolution rate. FIG.10is another example deep convolutional neural network-based architecture used to construct the pathogenicity classifiers. This architecture is used when the input has 1000 upstream flanking context bases (Cu) to the left of the input sequence and 1000 downstream flanking context bases (Cd) to the right of the input sequence. The length of the input sequence (L) can be arbitrary, such as 3001. In this architecture, there are at least three groups of four residual blocks and at least three skip connections.
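The dimensionality bookkeeping above can be made concrete. The two helpers below merely restate the text's formulas, the (Cu+L+Cd)×4 input and the (I−[{(W−1)*D}*A])×N intermediate output, with illustrative function names and example values:

```python
def input_shape(Cu, L, Cd):
    """One-hot input dimensionality: (Cu + L + Cd) x 4."""
    return (Cu + L + Cd, 4)

def intermediate_shape(I, W, D, A, N):
    """A group of residual blocks maps a preceding input of length I to
    (I - ((W - 1) * D) * A) x N, where W is the convolution window size,
    D the atrous convolution rate, A the number of atrous convolution
    layers in the group, and N the number of convolution filters."""
    return (I - ((W - 1) * D) * A, N)
```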
Each residual block in a first group has 32 convolution filters, 11 convolution window size, and 1 atrous convolution rate, each residual block in a second group has 32 convolution filters, 11 convolution window size, and 4 atrous convolution rate, and each residual block in a third group has 32 convolution filters, 21 convolution window size, and 19 atrous convolution rate. FIG.11is yet another example deep convolutional neural network-based architecture used to construct the pathogenicity classifiers. This architecture is used when the input has 5000 upstream flanking context bases (Cu) to the left of the input sequence and 5000 downstream flanking context bases (Cd) to the right of the input sequence. The length of the input sequence (L) can be arbitrary, such as 3001. In this architecture, there are at least four groups of four residual blocks and at least four skip connections. Each residual block in a first group has 32 convolution filters, 11 convolution window size, and 1 atrous convolution rate, each residual block in a second group has 32 convolution filters, 11 convolution window size, and 4 atrous convolution rate, each residual block in a third group has 32 convolution filters, 21 convolution window size, and 19 atrous convolution rate, and each residual block in a fourth group has 32 convolution filters, 41 convolution window size, and 25 atrous convolution rate.

Training

FIGS.13A and13Bshow training of an example pathogenicity classifier1306. In one implementation, the pathogenicity classifier1306is a convolutional neural network. In another implementation, the pathogenicity classifier1306is a recurrent neural network. In yet another implementation, the pathogenicity classifier1306is a residual neural network with residual blocks and residual connections. In a further implementation, the pathogenicity classifier1306is a combination of a convolutional neural network and a recurrent neural network.
One skilled in the art will appreciate that the pathogenicity classifier1306can use various padding and striding configurations. It can use different output functions (e.g., classification or regression) and may or may not include one or more fully-connected layers. It can use 1D convolutions, 2D convolutions, 3D convolutions, 4D convolutions, 5D convolutions, dilated or atrous convolutions, transpose convolutions, depthwise separable convolutions, pointwise convolutions, 1×1 convolutions, group convolutions, flattened convolutions, spatial and cross-channel convolutions, shuffled grouped convolutions, spatial separable convolutions, and deconvolutions. It can use one or more loss functions such as logistic regression/log loss, multi-class cross-entropy/softmax loss, binary cross-entropy loss, mean-squared error loss, L1 loss, L2 loss, smooth L1 loss, and Huber loss. It can use any parallelism, efficiency, and compression schemes such as TFRecords, compressed encoding (e.g., PNG), sharding, parallel calls for map transformation, batching, prefetching, model parallelism, data parallelism, and synchronous/asynchronous SGD. It can include upsampling layers, downsampling layers, recurrent connections, gates and gated memory units (like an LSTM or GRU), residual blocks, residual connections, highway connections, skip connections, peephole connections, activation functions (e.g., non-linear transformation functions like rectified linear unit (ReLU), leaky ReLU, exponential linear unit (ELU), sigmoid and hyperbolic tangent (tanh)), batch normalization layers, regularization layers, dropout, pooling layers (e.g., max or average pooling), global average pooling layers, and attention mechanisms. The pathogenicity classifier1306is trained using training data1328. The training data1328includes a pathogenic set of non-coding variants1302that are annotated with a pathogenic label1310and a benign set of non-coding variants1316that are annotated with a benign label1322.
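As a toy stand-in for the backpropagation-based training with pathogenic and benign labels, consider a logistic model over a handful of features, trained with label 1 for pathogenic and 0 for benign. The model, features, and learning rate are all illustrative assumptions, not the document's architecture:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, features, label, lr=0.5):
    """One gradient update: for logistic cross-entropy, the gradient of
    the error with respect to each weight is (prediction - label) * feature."""
    p = sigmoid(sum(w * f for w, f in zip(weights, features)))
    return [w - lr * (p - label) * f for w, f in zip(weights, features)]

# Repeated updates on a pathogenic example (label 1) drive the
# prediction toward 1.
w = [0.0, 0.0]
for _ in range(50):
    w = train_step(w, [1.0, 1.0], 1)
```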
FIG.13Aillustrates one implementation of training the pathogenicity classifier1306using a pathogenic non-coding variant that is annotated with the pathogenic label1310. The pathogenicity classifier1306processes one or more input sequences1304associated with a particular pathogenic non-coding variant1302a(not shown) that is selected from the pathogenic set of non-coding variants1302. The input sequences1304are processed through the pathogenicity classifier1306, which in response produces a pathogenicity prediction1308for the particular pathogenic non-coding variant1302a. A trainer1330modifies weights of the pathogenicity classifier1306using backpropagation1314based on an error1312computed between the pathogenicity prediction1308made for the particular pathogenic non-coding variant1302aand the pathogenic label1310. In one implementation, the input sequence1304is a reference sequence that contains, at a target position, a reference non-coding base which is flanked by downstream and upstream context non-coding bases. In one implementation, the input sequence1304is an alternative sequence that contains, at the target position, the particular pathogenic non-coding variant1302awhich is flanked by the downstream and upstream context non-coding bases. In some implementations, both the reference and alternative sequences are fed as input to the pathogenicity classifier1306. In one implementation, the input sequence1304is a metadata sequence that characterizes metadata about the particular pathogenic non-coding variant1302a. In some implementations, the metadata sequence is generated by a neural network (e.g., a sequence-to-sequence model like WaveNet). In some implementations, the metadata is associated with epigenetic signals, including deoxyribonucleic acid (DNA) methylation changes, histone modifications, noncoding ribonucleic acid (ncRNA) expression, chromatin structural changes, deoxyribonuclease (DNase), and histone 3 lysine 27 acetylation (H3K27ac).
In one implementation, the input sequence1304is a non-coding sequence that contains some reference non-coding bases, the particular pathogenic non-coding variant1302a, and some additional non-coding variants. FIG.13Bdepicts one implementation of training the pathogenicity classifier1306using a benign non-coding variant that is annotated with the benign label1322. The pathogenicity classifier1306processes one or more input sequences1318associated with a particular benign non-coding variant1316a(not shown) that is selected from the benign set of non-coding variants1316. The input sequences1318are processed through the pathogenicity classifier1306, which in response produces a pathogenicity prediction1320for the particular benign non-coding variant1316a. A trainer1330modifies weights of the pathogenicity classifier1306using backpropagation1326based on an error1324computed between the pathogenicity prediction1320made for the particular benign non-coding variant1316aand the benign label1322. In one implementation, the input sequence1318is a reference sequence that contains, at a target position, a reference non-coding base which is flanked by downstream and upstream context non-coding bases. In one implementation, the input sequence1318is an alternative sequence that contains, at the target position, the particular benign non-coding variant1316awhich is flanked by the downstream and upstream context non-coding bases. In some implementations, both the reference and alternative sequences are fed as input to the pathogenicity classifier1306. In one implementation, the input sequence1318is a metadata sequence that characterizes metadata about the particular benign non-coding variant1316a. In some implementations, the metadata sequence is generated by a neural network (e.g., a sequence-to-sequence model like WaveNet).
In some implementations, the metadata is associated with epigenetic signals, including deoxyribonucleic acid (DNA) methylation changes, histone modifications, noncoding ribonucleic acid (ncRNA) expression, chromatin structural changes, deoxyribonuclease (DNase), and histone 3 lysine 27 acetylation (H3K27ac). In one implementation, the input sequence1318is a non-coding sequence that contains some reference non-coding bases, the particular benign non-coding variant1316a,and some additional non-coding variants. Computer System FIG.14is a simplified block diagram of a computer system1400that can be used to implement the ensemble of pathogenicity classifiers. Computer system1400includes at least one central processing unit (CPU)1472that communicates with a number of peripheral devices via bus subsystem1455. These peripheral devices can include a storage subsystem1410including, for example, memory devices and a file storage subsystem1436, user interface input devices1438, user interface output devices1476, and a network interface subsystem1474. The input and output devices allow user interaction with computer system1400. Network interface subsystem1474provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems. In one implementation, the ensemble of pathogenicity classifiers ofFIG.3is communicably linked to the storage subsystem1410and the user interface input devices1438. User interface input devices1438can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system1400. 
User interface output devices1476can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include an LED display, a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem can also provide a non-visual display such as audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system1400to the user or to another machine or computer system. Storage subsystem1410stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. Subsystem1478can be graphics processing units (GPUs) or field-programmable gate arrays (FPGAs). Memory subsystem1422used in the storage subsystem1410can include a number of memories including a main random access memory (RAM)1432for storage of instructions and data during program execution and a read only memory (ROM)1434in which fixed instructions are stored. A file storage subsystem1436can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations can be stored by file storage subsystem1436in the storage subsystem1410, or in other machines accessible by the processor. Bus subsystem1455provides a mechanism for letting the various components and subsystems of computer system1400communicate with each other as intended. Although bus subsystem1455is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses. 
Computer system1400itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of computer system1400depicted inFIG.14is intended only as a specific example for purposes of illustrating the preferred implementations of the present invention. Many other configurations of computer system1400are possible having more or fewer components than the computer system depicted inFIG.14. Particular Implementations The technology disclosed relates to using semi-supervised algorithms to construct deep learning-based pathogenicity classifiers that accurately predict pathogenicity of promoter single nucleotide variants (pSNVs). The technology disclosed can be practiced as a system, method, device, product, computer readable media, or article of manufacture. One or more features of an implementation can be combined with the base implementation. Implementations that are not mutually exclusive are taught to be combinable. One or more features of an implementation can be combined with other implementations. This disclosure periodically reminds the user of these options. Omission from some implementations of recitations that repeat these options should not be taken as limiting the combinations taught in the preceding sections—these recitations are hereby incorporated forward by reference into each of the following implementations. A first neural network-based system implementation of the technology disclosed includes one or more processors coupled to memory. The memory is loaded with computer instructions to train an ensemble of classifiers to predict pathogenicity of promoter region single nucleotide variants (abbreviated pSNVs).
The classifiers are trained using a common benign set of observed pSNVs and separate pathogenic sets of unobserved pSNVs sampled with replacement from a pool of substitutionally generated unobserved pSNVs. The training includes accessing input promoter sequences covering the observed pSNVs that contain reference bases at observed positions, unobserved-sampled positions, and unobserved-unsampled positions. The observed positions are positions at which the observed pSNVs occurred. The unobserved-sampled positions are positions at which the unobserved pSNVs sampled for a particular classifier at a current sampling cycle are located. The unobserved-unsampled positions are positions at which some of the substitutionally generated unobserved pSNVs not sampled for the particular classifier at the current sampling cycle are located. The training further includes generating ground truth data with base-wise and position-wise labels for each input promoter sequence. For the observed positions, the ground truth data assigns a blank label to bases that match the reference bases, assigns the blank label to bases that are variations from the reference bases which do not match the observed pSNVs, and assigns a benign label to bases that are variations from the reference bases which match the observed pSNVs. For the unobserved-sampled positions, the ground truth data assigns the blank label to bases that match the reference bases, assigns the blank label to bases that are variations from the reference bases which do not match the unobserved pSNVs, and assigns a pathogenic label to bases that are variations from the reference bases which match the unobserved pSNVs. For the unobserved-unsampled positions, the ground truth data assigns the blank label to all bases. This system implementation and other systems disclosed optionally include one or more of the following features. System can also include features described in connection with methods disclosed. 
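The base-wise ground-truth labeling rules above can be sketched as a small labeling function; the position kinds and label strings below are hypothetical names used only for illustration:

```python
# Hedged sketch of the base-wise, position-wise labeling rules: a base gets
# a blank label unless it is a variation matching an observed pSNV (benign)
# at an observed position, or a variation matching a sampled unobserved pSNV
# (pathogenic) at an unobserved-sampled position.
def label_base(kind, base, ref_base, observed=(), sampled=()):
    """Return 'benign', 'pathogenic', or 'blank' for one base at one position."""
    if base == ref_base:
        return "blank"                      # matches the reference base
    if kind == "observed" and base in observed:
        return "benign"                     # variation matching an observed pSNV
    if kind == "unobserved-sampled" and base in sampled:
        return "pathogenic"                 # variation matching a sampled unobserved pSNV
    return "blank"                          # all other cases, incl. unobserved-unsampled

assert label_base("observed", "A", "A") == "blank"
assert label_base("observed", "G", "A", observed={"G"}) == "benign"
assert label_base("observed", "T", "A", observed={"G"}) == "blank"
assert label_base("unobserved-sampled", "C", "A", sampled={"C"}) == "pathogenic"
assert label_base("unobserved-unsampled", "C", "A") == "blank"
```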
In the interest of conciseness, alternative combinations of system features are not individually enumerated. Features applicable to systems, methods, and articles of manufacture are not repeated for each statutory class set of base features. The reader will understand how features identified in this section can readily be combined with base features in other statutory classes. The training further includes using a gradient update training technique to train the pathogenicity classifiers to generate, in response to processing the input promoter sequences, outputs with base-wise and position-wise pathogenicity scores that progressively approach corresponding base-wise and position-wise labels in the ground truth data. The training further includes sampling from the pool of substitutionally generated unobserved pSNVs such that trinucleotide context distribution substantially matches between the common benign set and each of the pathogenic sets. The training further includes sampling from the pool of substitutionally generated unobserved pSNVs such that local GC-content distribution substantially matches between the common benign set and each of the pathogenic sets. The training further includes sampling from the pool of substitutionally generated unobserved pSNVs such that sequencing coverage distribution substantially matches between the common benign set and each of the pathogenic sets. The training further includes for positions in the input promoter sequences, encoding a protein binding affinity score determined by one or more protein binding affinity predictors pre-trained on positive training examples of protein binding motifs and negative training examples of non-binding motifs to generate a position-wise protein binding affinity score sequence in response to processing an input sequence. 
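The sampling that matches the trinucleotide-context distribution between the common benign set and each pathogenic set might be sketched as a per-context matcher; this is an illustrative simplification, not the disclosed algorithm, and the variant names are made up:

```python
import random
from collections import Counter

# Illustrative sketch: draw unobserved pSNVs (with replacement, as described
# for the pathogenic sets) so that per-trinucleotide-context counts match
# those of the benign set.
def matched_sample(benign_contexts, pool, rng):
    """pool maps trinucleotide context -> list of candidate unobserved pSNVs."""
    target = Counter(benign_contexts)        # desired per-context counts
    sampled = []
    for context, count in target.items():
        candidates = pool[context]
        sampled += [rng.choice(candidates) for _ in range(count)]
    return sampled

rng = random.Random(0)
benign = ["ACG", "ACG", "TGA"]               # contexts of the benign set
pool = {"ACG": ["v1", "v2"], "TGA": ["v3"]}  # hypothetical candidate pSNVs
s = matched_sample(benign, pool, rng)
assert len(s) == len(benign)                 # set sizes match, as described
assert s.count("v3") == 1                    # one TGA-context variant drawn
```

Matching local GC-content or sequencing-coverage distributions would follow the same pattern with a different bucketing key.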
The training further includes for the positions in the input promoter sequences, encoding a deoxyribonucleic acid (abbreviated DNA) accessibility inducing score determined by one or more DNA accessibility predictors pre-trained on positive training examples of DNA accessibility inducing motifs and negative training examples of non-inducing motifs to generate a position-wise DNA accessibility inducing score sequence in response to processing an input sequence. The observed pSNVs are included in the common benign set if they have a minor allele frequency greater than 0.1%. The observed pSNVs are included in the common benign set irrespective of their minor allele frequencies. Some of the observed pSNVs in the common benign set are observed in humans. Some of the observed pSNVs in the common benign set are observed in non-human primate species. The common benign set and each of the pathogenic sets have a same size. The pathogenic sets have some common unobserved pSNVs. The pool of substitutionally generated unobserved pSNVs is qualified to not include some unobserved pSNVs that are part of homopolymer regions, low-complexity regions, and overlapping coding regions. The training further includes iteratively optimizing a loss function that minimizes error between the base-wise and position-wise pathogenicity scores in the outputs and the corresponding base-wise and position-wise labels in the ground truth data and iteratively updating parameters of the classifiers based on the error (e.g., using backpropagation). 
The training further includes training the particular classifier over one or more epochs on a pathogenic set sampled at the current sampling cycle, continuing the training of the particular classifier on one or more additional pathogenic sets sampled at one or more successive sampling cycles, and concluding the training of the particular classifier when the particular classifier's pathogenicity score predictions on a validation set having held-out observed pSNVs and unobserved pSNVs form substantially discrete probability distribution clusters of benign and pathogenic predictions. The training further includes storing, in memory, classifier parameters derived by the training. The training further includes applying the trained classifiers to produce pathogenicity scores for at least some unobserved pSNVs in the pool of substitutionally generated unobserved pSNVs, for each unobserved pSNV in the at least some unobserved pSNVs, determining an average and/or maximum pathogenicity score from the pathogenicity scores produced by the trained classifiers, and generating a pathogenicity table that identifies the average and/or maximum pathogenicity score for each unobserved pSNV in the at least some unobserved pSNVs. The training further includes applying the trained classifiers to produce pathogenicity scores for at least some observed pSNVs in the common benign set of observed pSNVs, for each observed pSNV in the at least some observed pSNVs, determining an average and/or maximum pathogenicity score from the pathogenicity scores produced by the trained classifiers, and generating the pathogenicity table that identifies the average and/or maximum pathogenicity score for each observed pSNV in the at least some observed pSNVs. In some implementations, the input promoter sequences are flanked by upstream and downstream reference bases. In some implementations, the reference bases in the input promoter sequences are one-hot encoded. 
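Generating the pathogenicity table of average and/or maximum ensemble scores described above can be sketched as follows; the variant identifiers and scores are made up for illustration:

```python
from statistics import mean

# Sketch of building the pathogenicity table: each trained classifier in the
# ensemble produces a score per variant, and the table records the average
# and/or maximum score across the ensemble.
def pathogenicity_table(scores_by_variant):
    table = {}
    for variant, scores in scores_by_variant.items():
        table[variant] = {"avg": mean(scores), "max": max(scores)}
    return table

ensemble_scores = {"pSNV_1": [0.91, 0.88, 0.95],   # hypothetical scores from
                   "pSNV_2": [0.02, 0.10, 0.05]}   # three trained classifiers
table = pathogenicity_table(ensemble_scores)
assert abs(table["pSNV_1"]["avg"] - 0.9133333) < 1e-6
assert table["pSNV_2"]["max"] == 0.10
```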
The classifiers are deep convolutional neural networks that contain groups of residual blocks arranged in a sequence from lowest to highest, each group of residual blocks is parameterized by a number of convolution filters in the residual blocks, a convolution window size of the residual blocks, and an atrous convolution rate of the residual blocks, the atrous convolution rate progresses non-exponentially from a lower residual block group to a higher residual block group, the size of convolution window varies between groups of residual blocks, and each residual block comprises at least one batch normalization layer, at least one rectified linear unit (abbreviated ReLU) layer, at least one atrous convolution layer, and at least one residual connection. The ensemble includes 4 to 10 deep convolutional neural networks, in one implementation. In another implementation, the ensemble includes 10 to 100 deep convolutional neural networks. In yet another implementation, the ensemble includes 100 to 200 deep convolutional neural networks. Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform functions of the system described above. Yet another implementation may include a method performing the functions of the system described above. A second neural network-based system implementation of the technology disclosed includes one or more processors coupled to memory. The memory is loaded with computer instructions that implement a trained pathogenicity classifier which predicts pathogenicity of promoter region single nucleotide variants (abbreviated pSNVs). A trained pathogenicity classifier comprises an input module (not shown) that accepts an input promoter sequence with reference bases at positions covering observed pSNVs and substitutionally generated unobserved pSNVs.
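The atrous convolution layers in the residual-block architecture described above apply a filter over dilated (spaced-out) positions, so the receptive field grows with the atrous rate. A minimal 1-D sketch; the "valid" output length and filter values are illustrative assumptions, not the disclosed architecture:

```python
# Minimal 1-D atrous (dilated) convolution: kernel taps are spaced 'rate'
# positions apart, so the receptive field spans (len(kernel)-1)*rate + 1.
def atrous_conv1d(signal, kernel, rate):
    span = (len(kernel) - 1) * rate          # receptive field grows with rate
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[k] * signal[i + k * rate]
                       for k in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6]
assert atrous_conv1d(x, [1, 1], rate=1) == [3, 5, 7, 9, 11]  # ordinary conv
assert atrous_conv1d(x, [1, 1], rate=2) == [4, 6, 8, 10]     # dilated taps
```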
The trained pathogenicity classifier also comprises a processing module (not shown) that processes the input promoter sequence through one or more layers of the pathogenicity classifier to generate an alternative representation of the input promoter sequence. In some implementations, when the trained pathogenicity classifier is a deep convolutional neural network, the layers are convolution layers with convolution filters and the alternative representation is a convolved representation. In other implementations, when the trained pathogenicity classifier is a recurrent neural network, the layers are recurrent units with gates and the alternative representation is a hidden representation. The trained pathogenicity classifier further comprises an output module (not shown) that processes the alternative representation to generate an output which, for each position in the input promoter sequence, classifies each of three base variations from a corresponding reference base as benign or pathogenic. In some implementations, the output includes pathogenicity likelihood scores for each of the three base variations. The trained pathogenicity classifier receives supplemental input from a protein binding affinity sub-classifier that encodes a protein binding affinity score to each position in the input promoter sequence. The trained pathogenicity classifier also receives supplemental input from a DNA accessibility sub-classifier that encodes a DNA accessibility inducing score to each position in the input promoter sequence. Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform functions of the system described above. Yet another implementation may include a method performing the functions of the system described above. 
Any data structures and code described or referenced above are stored according to many implementations on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. This includes, but is not limited to, volatile memory, non-volatile memory, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed. The preceding description is presented to enable the making and use of the technology disclosed. Various modifications to the disclosed implementations will be apparent, and the general principles defined herein may be applied to other implementations and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. The scope of the technology disclosed is defined by the appended claims.
11861492

DETAILED DESCRIPTION Various embodiments provide for quantization of a trained neural network with removal of normalization, which represents an improvement over conventional methodologies for removing normalization from a quantized neural network (e.g., fixed-point neural network). In particular, various embodiments provide for quantizing a trained neural network with removal of normalization with respect to at least one layer of the quantized neural network, such as a quantized multiple fan-in layer. Additionally, some embodiments provide for generating executable code from the resulting quantized trained neural network, where the executable code is executable by a target hardware processor, such as a specialized processor (e.g., a digital signal processor (DSP) or a Neural Processor Unit (NPU)), to operate the quantized trained neural network on the target hardware processor. Depending on the embodiment, the trained neural network (that can be quantized by an embodiment) can comprise a trained neural network generated by one of a variety of neural network frameworks, such as Caffe (developed by Berkeley AI Research at the University of California, Berkeley), TensorFlow (by GOOGLE), PyTorch (by Facebook), etc. A quantized multiple fan-in layer can include, without limitation, an element-wise (eltwise) add or sum layer. By use of various embodiments described herein, certain normalizations can be removed from a quantized neural network while avoiding issues raised by conventional methods of removing normalization, such as introducing large differences in layer value ranges or the introduction of extra quantization loss. Additionally, use of various embodiments described herein can result in improved precision for both projection and identity mappings within the quantized neural network.
According to some embodiments, with respect to a residual block of a quantized neural network, a normalization is removed with respect to a quantized multiple fan-in layer included by the residual block in different cases, and the method for removal of normalization can differ between cases. For instance, normalization can be removed with respect to a quantized multiple fan-in layer of a residual block where the residual block implements a projection mapping within the quantized neural network. An example projection mapping case is described herein with respect toFIG.3. In another instance, normalization can be removed with respect to a quantized multiple fan-in layer of a residual block where the residual block implements (within the quantized neural network) an identity mapping where an empirical value received by the first quantized multiple fan-in layer from a preceding quantized multiple fan-in layer (e.g., of another residual block) is equal to or greater than the empirical value of the second input of the first quantized multiple fan-in layer. In yet another instance, normalization can be removed with respect to a quantized multiple fan-in layer of a residual block where the residual block implements (within the quantized neural network) an identity mapping where an empirical value received by the first quantized multiple fan-in layer from a preceding quantized multiple fan-in layer (e.g., of another residual block) is smaller than the empirical value of the second input of the first quantized multiple fan-in layer. An example identity mapping case is described herein with respect toFIG.4. Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the appended drawings. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. 
FIG.1is a diagram illustrating an example flow100for compiling a neural network based on quantizing the neural network with removal of normalization, according to some embodiments. As shown, the neural network compilation flow100involves a neural network framework102, a trained neural network model104, a neural network compiler with removal of normalization from quantized neural network106(hereafter, neural network compiler106), and trained neural network model code executable by target hardware processor108(hereafter, trained neural network model code108). According to various embodiments, the neural network compilation flow100compiles the trained neural network model104, generated by the neural network framework102, into the trained neural network model code108using the neural network compiler106. For some embodiments, a flow for compiling neural network code into executable model code differs (e.g., in operation or phases) from what is illustrated byFIG.1. The neural network framework102, the neural network compiler106, or both can be implemented using one or more processors (e.g., by configuring such one or more computer processors to perform functions described for that component) and hence can include one or more of the processors. Furthermore, the neural network framework102and the neural network compiler106can be implemented together or separately within a single machine, database, or device or may be distributed across multiple machines, databases, or devices. For some embodiments, the neural network framework102comprises a framework configured to generate a neural network, such as a convolution neural network (CNN), a recurrent neural network (RNN), a deep learning neural network (DNN), or the like, and facilitate the training of the neural network, which generates a model that implements the trained neural network (i.e., trained neural network model).
The trained neural network model104represents an example of a trained neural network model generated by the neural network framework102, and comprises data describing the trained neural network. For some embodiments, a trained neural network model generated by the neural network framework102(e.g., the trained neural network model104) implements a trained neural network comprising one or more layers that process data in a floating-point domain (hereafter, referred to as floating-point layers). Within a neural network, a floating-point layer can receive (e.g., from a preceding layer) one or more floating-point values, or can output (e.g., to a succeeding layer) one or more floating-point values. As used herein, a floating-point neural network can refer to a neural network that comprises as least one floating-point layer. The neural network compiler106accesses the trained neural network model104(e.g., accesses model data for the trained neural network model104) and generates the trained neural network model code108. For some embodiments, the trained neural network model code108comprises code (e.g., instruction data) executable by a target hardware processor, such as a DSP or an NPU, to operate the quantized neural network on the target hardware processor. To do so, the neural network compiler106of various embodiments quantizes a floating-point neural network, implemented by the trained neural network model104, and uses the quantized neural network to generate the trained neural network model code108. The quantized neural network represents a quantized implementation of the trained neural network implemented by the trained neural network model104. To quantize a floating-point neural network, the neural network compiler106of some embodiments converts the floating-point neural network (which processes data in a floating-point domain) to a neural network that processes data in a fixed-point domain (hereafter, referred to as a fixed-point neural network). 
The resulting fixed-point neural network represents the quantized version of the floating-point neural network. As used herein, a fixed-point neural network refers to a neural network that comprises one or more layers each of which outputs one or more fixed-point values (hereafter, referred to as fixed-point layers). Within a neural network, a fixed-point layer outputs (e.g., to a succeeding layer) one or more fixed-point values, and can receive (e.g., from a preceding layer) one or more fixed-point values. A fixed-point neural network can comprise multiple layers (e.g., fixed-point layers), and each layer of a fixed-point neural network can comprise one or more parameters (e.g., quantized parameters) or attributes that determine operation of that layer. The one or more parameters/attributes of a given layer can include, for example, a range of input values (also referred to herein as input value range or input layer activation) and a range of output values (also referred to herein as output value range or output layer activation) of the given layer. The value range of a fixed-point layer can be defined by fixed-point values. For instance, with respect to an example quantized multiple fan-in layer, the input value range can be defined as a minimum input value of −23 for input 1 and −16 for input 2, and a maximum input value of 46 for input 1 and 22 for input 2, while the output value range can be defined as a minimum output value of 0 and a maximum output value of 50. Example fixed-point layers can include, without limitation, element-wise (eltwise) sum layers, in-place layers, rectified linear unit (ReLU) layers, or convolution layers. For various embodiments described herein, normalization can be added or removed with respect to a layer of a neural network (e.g., fixed-point neural network) by updating (e.g., adjusting) the layer's one or more parameters/attributes, such as those that control the input value range or the output value range of the layer.
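As a hedged illustration of how a floating-point activation might map into a fixed-point layer's value range like the one in the example above, the sketch below uses uniform affine quantization; that scheme is an assumption of the illustration, since the embodiments do not prescribe one:

```python
# Map a float in [vmin, vmax] to one of 2**bits fixed-point levels and back.
# Uniform affine quantization is assumed here for illustration only.
def quantize(value, vmin, vmax, bits=8):
    levels = (1 << bits) - 1                 # 255 steps for 8 bits
    scale = (vmax - vmin) / levels
    q = round((value - vmin) / scale)        # nearest fixed-point level
    return max(0, min(levels, q))            # clamp to the representable range

def dequantize(q, vmin, vmax, bits=8):
    levels = (1 << bits) - 1
    return vmin + q * (vmax - vmin) / levels

q = quantize(10.0, -23.0, 46.0)              # input-1 range from the example
assert 0 <= q <= 255
# Round-trip error is bounded by one quantization step.
assert abs(dequantize(q, -23.0, 46.0) - 10.0) < (46.0 + 23.0) / 255
```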
According to some embodiments, the neural network compiler106uses a methodology described herein for quantizing a trained neural network (e.g., implemented by the trained neural network model104) with removal of normalization with respect to a layer of the quantized neural network, such as a quantized multiple fan-in layer (e.g., element-wise add or sum layer). As shown, the neural network compiler106comprises a neural network analyzer120and an optimizer and code generator122. Depending on the embodiment, a methodology described herein can be implemented by way of the neural network analyzer120, the optimizer and code generator122, or some combination of both. For various embodiments, the neural network analyzer120implements conversion of a floating-point neural network (e.g., implemented by the trained neural network model104) to a fixed-point neural network, while the optimizer and code generator122optimizes the fixed-point neural network and generates code executable by a target hardware processor (e.g., the trained neural network model code108). During operation, the neural network analyzer120can analyze the floating-point neural network, which can include merging layers, fixed-point emulation, accuracy evaluation, or some combination thereof. The neural network analyzer120can generate an intermediate representation of a quantized neural network (e.g., fixed-point neural network) that represents a quantized version of the floating-point neural network. Based on the intermediate representation, the optimizer and code generator122can optimize the fixed-point neural network and generate the executable code. The executable code generated can be specifically targeted for execution by a processor of a certain make, model, or type. As described herein, examples of target hardware processors can include, without limitation, a DSP or an NPU.
In generating the code, the optimizer and code generator122can use a set of libraries (e.g., code libraries) compatible with the target hardware processor. To remove normalization with respect to a given layer of a quantized neural network (e.g., fixed-point neural network), the neural network compiler106of some embodiments can first determine whether the given layer meets a condition for normalization removal. As one example of a condition for normalization removal with respect to a quantized multiple fan-in layer of a quantized neural network, if each of the preceding layers (coupled as inputs to the quantized multiple fan-in layer) has a fan-out of one, the neural network compiler106can remove a normalization of the quantized multiple fan-in layer with respect to each of the preceding layers. Removing the normalization can comprise, for example, updating (e.g., adjusting) the output value range of one or all of the preceding layers, and can further comprise updating (e.g., adjusting) the input value range of the quantized multiple fan-in layer accordingly. For instance, where a first preceding layer and a second preceding layer are connected as inputs to a given quantized multiple fan-in layer, V1represents the maximum output value of the first preceding layer, and V2represents the maximum output value of the second preceding layer; each of the maximum output values V1and V2can be updated (e.g., adjusted or set) to be the maximum value of V1and V2, as represented below (* denotes the updated maximum output value) by Example 1:

*V1 = max(V1, V2)
*V2 = max(V1, V2)

The maximum input value of the quantized multiple fan-in layer can be updated (e.g., adjusted or set) according to (e.g., to match) the update to one or both of the preceding layers. This represents an example of updating every input layer activation of the quantized multiple fan-in layer to the output layer activation of the preceding layers, where the first normalization is being skipped.
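Example 1 above could be sketched as follows; the dict-based layer records and the concrete values are illustrative only:

```python
# Sketch of Example 1: when both preceding layers feed only the fan-in layer,
# their maximum output values are unified to max(V1, V2), and the fan-in
# layer's maximum input values are updated to match.
def remove_normalization_example1(layer1, layer2, fanin_layer):
    unified = max(layer1["max_out"], layer2["max_out"])
    layer1["max_out"] = unified                 # *V1 = max(V1, V2)
    layer2["max_out"] = unified                 # *V2 = max(V1, V2)
    fanin_layer["max_in"] = [unified, unified]  # match the updated outputs

l1 = {"max_out": 46.0}                          # hypothetical preceding layers
l2 = {"max_out": 22.0}
fanin = {"max_in": [46.0, 22.0]}
remove_normalization_example1(l1, l2, fanin)
assert l1["max_out"] == l2["max_out"] == 46.0
assert fanin["max_in"] == [46.0, 46.0]
```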
An example case is described herein with respect toFIG.3. As another example of a condition for normalization removal with respect to a quantized multiple fan-in layer of a quantized neural network, if at least one preceding layer (coupled as an input to the quantized multiple fan-in layer) has a fan-out that is greater than one, normalization can be removed with respect to those preceding layers (coupled as inputs to the quantized multiple fan-in layer) having a fan-out of one. In this case, removing the normalization from preceding layers having a fan-out of one can comprise updating (e.g., adjusting) the output value range of these preceding layers based on the output value range of the preceding layer that has a fan-out greater than one. For instance, assume a first preceding multiple fan-out layer and a second single fan-out preceding layer are connected as inputs to a given quantized multiple fan-in layer, V3represents the maximum output value of the first multiple fan-out preceding layer, V4represents the maximum output value of the second single fan-out preceding layer, and V5represents the maximum output value of the quantized multiple fan-in layer. 
Under a first example approach for normalization removal, V3can remain unchanged (since the first multiple fan-out preceding layer has a fan-out greater than one) and V4can be updated (e.g., adjusted or set) to match V3, as represented below (* denotes the updated maximum output value) by Example 2:

*V4=V3

Under a second example approach for normalization removal, V3can remain unchanged (since the first multiple fan-out preceding layer has a fan-out greater than one) and V4can be updated (e.g., adjusted or set) as follows by Example 3:

*V4=V5

For both example approaches, the maximum input value of the quantized multiple fan-in layer can be updated (e.g., adjusted or set) according to (e.g., to match) the update to the second single fan-out preceding layer. Both the first approach and the second approach described above represent different examples of updating the input layer activation from single fan-out layers to the quantized multiple fan-in layer based on input layer activation from a multiple fan-out layer to the quantized multiple fan-in layer. An example case is described herein with respect toFIG.4. In instances where, with respect to a quantized multiple fan-in layer of a quantized neural network, the number of fan-outs of all preceding layers and all succeeding layers is greater than one, the neural network compiler106can avoid removal of normalization. Though the methodologies of various embodiments are described herein with respect to a neural network compiler, it will be understood that for some embodiments, the methodologies are implemented with respect to other (non-compiler) software applications or platforms. FIG.2is a flowchart illustrating an example method200for quantizing a trained neural network with removal of normalization with respect to at least one layer of the quantized neural network, according to some embodiments.
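Examples 2 and 3 can be sketched in Python using the V3/V4/V5 naming above; the function names are hypothetical and the sketch is not a definitive implementation:

```python
# Illustrative sketch of Examples 2 and 3. V3 is the maximum output value of
# the multiple fan-out preceding layer (left unchanged in both approaches,
# since other consumers also depend on its output range), V4 is the maximum
# output value of the single fan-out preceding layer, and V5 is the maximum
# output value of the quantized multiple fan-in layer.

def approach_1_update(v3, v4):
    """Example 2: *V4 = V3, matching the multiple fan-out preceding layer."""
    return v3


def approach_2_update(v3, v4, v5):
    """Example 3: *V4 = V5, matching the fan-in layer's own output range."""
    return v5


v3, v4, v5 = 50.0, 23.0, 55.0
assert approach_1_update(v3, v4) == 50.0      # *V4 = V3
assert approach_2_update(v3, v4, v5) == 55.0  # *V4 = V5
# In both approaches, the fan-in layer's maximum input value is then set to
# the new *V4 so that its input range matches the updated preceding layer.
```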
It will be understood that example methods described herein may be performed by a device, such as a computing device executing instructions of a neural network compiler (e.g., the neural network compiler106), in accordance with some embodiments. Additionally, example methods described herein may be implemented in the form of executable instructions stored on a computer-readable medium or in the form of electronic circuitry. For instance, the operations of the method200ofFIG.2can be represented by executable instructions that, when executed by a processor of a computing device, cause the computing device to perform the method200. Depending on the embodiment, an operation of an example method described herein may be repeated in different ways or involve intervening operations not shown. Though the operations of example methods may be depicted and described in a certain order, the order in which the operations are performed may vary among embodiments, including performing certain operations in parallel. The method200as illustrated begins with operation202accessing source model data that describes a trained neural network that processes data in a floating-point domain (e.g., trained floating-point neural network), where the trained neural network comprises a set of floating-point layers. As described herein, the trained neural network can be one generated by an existing neural network framework, such as Caffe, TensorFlow, PyTorch, etc. Depending on the embodiment, the trained neural network can comprise a CNN, an RNN, or a DNN. The method200continues with operation204converting the trained neural network (described by the source model data accessed by operation202) to a quantized neural network that processes data in a fixed-point domain (e.g., fixed-point neural network), where the quantized neural network comprises a set of fixed-point layers that corresponds to the set of floating-point layers.
According to various embodiments, operation204facilitates the conversion by generating the quantized neural network based on the source model data accessed by operation202. Subsequently, operation206determines a set of quantized multiple fan-in layers in the quantized neural network. For some embodiments, the given quantized multiple fan-in layer comprises at least one element-wise sum layer. Additionally, for some embodiments, the given quantized multiple fan-in layer forms part of a residual block of the quantized neural network. As described herein, the quantized neural network can comprise multiple layers, and each layer can comprise one or more parameters (e.g., quantized parameters) or attributes that determine operation of the layer, such as an input value range (or activation) and an output value range of the layer or input scale and output scale. For various embodiments described herein, normalization can be added or removed with respect to a layer of a neural network (e.g., fixed-point neural network) by updating (e.g., adjusting) the layer's one or more parameters/attributes, such as those that control the input value range or the output value range of the layer. The method200continues by performing operations208through214with respect to a given quantized multiple fan-in layer in the set of quantized multiple fan-in layers. For some embodiments, operations208through214are performed for each quantized multiple fan-in layer in the set of quantized multiple fan-in layers. Operation208analyzes a set of preceding layers (e.g., all preceding layers) of the given quantized multiple fan-in layer, where each preceding layer in the set of preceding layers connects as input to the given quantized multiple fan-in layer. According to various embodiments, each preceding layer in the set of preceding layers is being analyzed to determine, for example, fan-in characteristics, fan-out characteristics, or both. 
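Operations206and208amount to a traversal over the quantized network's layer graph. A minimal sketch follows, assuming a simple adjacency-list representation of the graph; the representation and names are illustrative assumptions, not a format used by the neural network compiler106:

```python
# Illustrative sketch of operations 206/208: find quantized multiple fan-in
# layers in a network and collect fan-in/fan-out counts for its layers.
# The graph is given as {layer_name: [names of consumer layers]}.
from collections import defaultdict


def analyze(graph):
    fan_in = defaultdict(int)
    for src, consumers in graph.items():
        for dst in consumers:
            fan_in[dst] += 1                  # count inputs to each layer
    fan_out = {src: len(consumers) for src, consumers in graph.items()}
    multi_fan_in = [name for name, n in fan_in.items() if n > 1]
    return multi_fan_in, fan_in, fan_out


# Residual-style topology: "eltwise" receives two inputs (a projection
# branch and a mainline branch), so it is a quantized multiple fan-in layer,
# and both of its preceding layers have a fan-out of one.
graph = {
    "conv_a": ["eltwise"],
    "conv_b": ["eltwise"],
    "eltwise": [],
}
multi, fan_in, fan_out = analyze(graph)
assert multi == ["eltwise"]
assert fan_out["conv_a"] == 1 and fan_out["conv_b"] == 1
```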
Thereafter, operation210determines, based on the analysis performed by operation208, whether a condition is satisfied for updating (e.g., adjusting or setting) at least one preceding layer in the set of preceding layers, or the given quantized multiple fan-in layer, to remove normalization. As described herein, one example condition can comprise whether a number of fan-outs of each preceding layer (in the set of preceding layers) is one. Another example condition can comprise whether a number of fan-outs of any preceding layer (in the set of preceding layers) is greater than one. Based on operation210determining whether the condition is satisfied, operation212updates (e.g., adjusts or sets) the at least one preceding layer to remove normalization. Additionally, based on operation210determining whether the condition is satisfied, operation214updates (e.g., adjusts or sets) the given quantized multiple fan-in layer to remove normalization. For example, in response to operation210determining that a number of fan-outs of each preceding layer (in the set of preceding layers) is one, operation212can update an output value range of one or more of the preceding layers. For instance, operation212can update the output value range of one or more of the preceding layers (e.g., update the maximum value of the output value range of those layers) based on a maximum value from output value ranges of all preceding layers connected as input to the given quantized multiple fan-in layer. Additionally, operation214can update the given quantized multiple fan-in layer to respectively match the updates to the one or more preceding layers. An example of this case is described above with respect to Example 1 and below with respect toFIG.3.
As another example, in response to operation210determining that a number of fan-outs of any preceding layer in the set of preceding layers is greater than one, operation212can update one or more of the preceding layers (and operation214can update the given quantized multiple fan-in layer accordingly) based on one of several approaches. According to one approach, for a first preceding layer having a fan-out of one, the first output value range of the first preceding layer can be updated based on a second output value range of a second preceding layer that has a fan-out greater than one. An example of this case is described above with respect to Example 2 and below with respect toFIG.4. According to an alternative approach, for the first preceding layer having a fan-out of one, the first output value range of the first preceding layer can be updated based on determining whether a first maximum value, of a first output value range of the first preceding layer, is greater than a second maximum value of a second output value range of a second preceding layer that has a fan-out greater than one. An example of this case is described above with respect to Example 3 and below with respect toFIG.4. In response to the first maximum value being greater than the second maximum value, operation212can update the first output value range of the first preceding layer based on a maximum value of a third output value range of the given quantized multiple fan-in layer, and operation214can update the given quantized multiple fan-in layer to respectively match the updates to the one or more preceding layers. On the other hand, in response to the first maximum value not being greater than the second maximum value, operation212can update the first output value range of the first preceding layer based on the maximum value of the second output value range of the second preceding layer.
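The alternative, conditional approach for operation212can be expressed as a small comparison; the sketch below uses descriptive variable names as an assumption and is not a definitive implementation:

```python
# Illustrative sketch of the alternative approach for operation 212: when
# some preceding layer has a fan-out greater than one, the update for a
# single fan-out preceding layer depends on comparing the two maxima.
# v4: maximum output value of the preceding layer with a fan-out of one
# v3: maximum output value of the preceding layer with a fan-out > 1
# v5: maximum output value of the quantized multiple fan-in layer

def updated_v4(v4, v3, v5):
    if v4 > v3:
        return v5   # use the fan-in layer's own maximum output value
    return v3       # otherwise match the multiple fan-out preceding layer


assert updated_v4(23, 50, 55) == 50   # 23 <= 50, so *V4 = V3
assert updated_v4(60, 50, 55) == 55   # 60 > 50, so *V4 = V5
```

Operation214then updates the fan-in layer's maximum input value to match the returned value, as in the unconditional approaches.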
Eventually, the method200continues with operation216generating, based on the quantized neural network as updated by operations212and214, executable code for operating a trained neural network model on a target hardware processor, where the trained neural network model implements the quantized neural network. As described herein, for some embodiments, the quantized neural network (e.g., a trained fixed-point neural network) represents a quantized version of the trained neural network (e.g., a trained floating-point neural network). FIGS.3and4each illustrate a portion of an example of removing normalization from a portion of an example quantized neural network (e.g., fixed-point neural network), according to some example embodiments. Referring now toFIG.3, a quantized neural network300comprises layers302,304,308,310,312, and residual blocks306,314. One or more of the layers302,304,308,310,312can represent convolution layers. The residual block306comprises a quantized multiple fan-in layer (RES5A) and a rectified linear unit (RES5A_RELU) layer and, likewise, the residual block314comprises a quantized multiple fan-in layer (RES5B) and a rectified linear unit (RES5B_RELU) layer. As described herein, each of the quantized multiple fan-in layers can comprise an element-wise (eltwise) sum layer. According to some embodiments, the residual block306and preceding layers302and304represent a projection mapping, while the residual block314and preceding layers308,310and312represent an identity mapping. 
The input/output value ranges (activations) of layer302are illustrated in parameter window330, the input/output value ranges (activations) of layer304are illustrated in parameter window332, the input/output value ranges (activations) of the residual block306(input value ranges of the two inputs of layer RES5A and output value range of layer RES5A_RELU) are illustrated in parameter window334, the input/output value ranges (activations) of layer312are illustrated in parameter window336, and the input/output value ranges (activations) of the residual block314(input value ranges of the two inputs of layer RES5B and output value range of layer RES5B_RELU) are illustrated in parameter window338. With respect to the quantized multiple fan-in layer RES5A of the residual block306, some embodiments described herein can determine that each of the preceding layers302and304connected as inputs to the layer RES5A has a fan-out of one. In response, various embodiments can update the maximum value of the output value range of the layer302(MAX OUT: 46) and can update the maximum value of the output value range of the layer304(MAX OUT: 22) to be the maximum value of the two output value ranges (i.e., 46), as illustrated in parameter windows330and332. Additionally, the maximum value of the input value range of the layer RES5A is updated to reflect the updates to the preceding layers302and304, as illustrated in parameter window334. As shown, the maximum value with respect to the layer302effectively remains unchanged in this example. Referring now toFIG.4, a quantized neural network400comprises layers402,404,408,410,412, and residual blocks406,414. One or more of the layers402,404,408,410,412can represent convolution layers. The residual block406comprises a quantized multiple fan-in layer (RES5A) and a rectified linear unit (RES5A_RELU) layer and, likewise, the residual block414comprises a quantized multiple fan-in layer (RES5B) and a rectified linear unit (RES5B_RELU) layer. 
As described herein, each of the quantized multiple fan-in layers can comprise an element-wise (eltwise) sum layer. According to some embodiments, the residual block406and preceding layers402and404represent a projection mapping, while the residual block414and preceding layers408,410,412represent an identity mapping. The input/output value ranges (activations) of layer402are illustrated in parameter window430, the input/output value ranges (activations) of layer404are illustrated in parameter window432, the input/output value ranges (activations) of the residual block406(input value ranges of the two inputs of layer RES5A and output value range of layer RES5A_RELU) are illustrated in parameter window434, the input/output value ranges (activations) of layer412are illustrated in parameter window436, and the input/output value ranges (activations) of the residual block414(input value ranges of the two inputs of layer RES5B and output value range of layer RES5B_RELU) are illustrated in parameter window438. With respect to the quantized multiple fan-in layer RES5B of the residual block414, some embodiments described herein can determine that the preceding layer RES5A_RELU has a fan-out greater than one and that the preceding layer412has a fan-out of one. As described herein, various embodiments can respond by one of several approaches. According to a first example approach (Approach 1), some embodiments can update the maximum value of the output value range of the layer412(MAX OUT: 23), which has a fan-out of one, to match the maximum value of the output value range of the layer RES5A_RELU (MAX OUT: 50), which has a fan-out that is greater than one. Additionally, the maximum value of the input value range of the layer RES5B is updated to reflect the updates to the preceding layer412. These updates are illustrated in parameter windows436and438as Approach 1.
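Approach 1 on these values can be checked with plain arithmetic; the variable names below are descriptive assumptions, not identifiers from the compiler:

```python
# FIG. 4, Approach 1: the single fan-out preceding layer (layer 412,
# MAX OUT: 23) is raised to match the preceding layer whose fan-out is
# greater than one (MAX OUT: 50), and the fan-in layer RES5B's maximum
# input value follows.
v_multi_fan_out = 50        # maximum output of the fan-out > 1 preceding layer
v_single_fan_out = 23       # maximum output of layer 412 (fan-out of one)
v_single_fan_out = v_multi_fan_out   # *V4 = V3 (Example 2)
res5b_max_in = v_single_fan_out      # fan-in layer input range updated to match
assert v_single_fan_out == 50 and res5b_max_in == 50
```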
According to a second example approach (Approach 2), some embodiments can update the maximum value of the output value range of the layer412(MAX OUT: 23), which has a fan-out of one, to match the maximum value of the output value range of the layer RES5B_RELU (MAX OUT: 55). Additionally, the maximum value of the input value range of the layer RES5B is updated to reflect the updates to the preceding layer412. These updates are illustrated in parameter windows436and438as Approach 2. FIG.5is a block diagram500illustrating an example of a software architecture502that may be operating on a computer and may be used with methods for quantizing trained neural networks with removal of normalization with respect to at least one layer of the quantized neural network, according to some example embodiments. The software architecture502can be used as a computing device to implement any of the methods described above. Aspects of the software architecture502can, in various embodiments, quantize a trained neural network with removal of normalization with respect to at least one layer of the quantized neural network, such as a quantized multiple fan-in layer (e.g., element-wise add or sum layer). Additionally, aspects of the software architecture502can, in various embodiments, generate executable code from the resulting quantized trained neural network, where the executable code is executable by a target hardware processor (e.g., specialized processor, such as a DSP or an NPU) to operate the quantized trained neural network on the target hardware processor. FIG.5is merely a non-limiting example of a software architecture502, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture502is implemented by hardware such as a machine600ofFIG.6that includes processors610, memory630, and I/O components650. 
In this example, the software architecture502can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software architecture502includes layers such as an operating system504, libraries506, software frameworks508, and applications510. Operationally, the applications510invoke application programming interface (API) calls512through the software stack and receive messages514in response to the API calls512, consistent with some embodiments. In various embodiments, any client device, any server computer of a server system, or any other device described herein may operate using elements of the software architecture502. A computing device described herein may additionally be implemented using aspects of the software architecture502, with the software architecture502adapted for operating to quantize a trained neural network in any manner described herein. In one embodiment, an application of the applications510compiles (e.g., maps) a trained neural network to code executable by a target hardware processor (e.g., specialized processor, such as a DSP or an NPU) to operate the quantized trained neural network according to embodiments described herein using various modules within the software architecture502. For example, in one embodiment, a computing device similar to the machine600includes the memory630and the one or more processors610. The processors610also implement a neural network compiler with removal of normalization from quantized neural network542(hereafter, neural network compiler542) for quantizing a trained neural network with removal of normalization with respect to at least one layer of the quantized neural network (e.g., quantized multiple fan-in layer), and for generating executable code from the resulting quantized trained neural network in accordance with various embodiments described herein. 
In various other embodiments, rather than being implemented as modules of the one or more applications510, the neural network compiler542can be implemented using elements of the libraries506, the operating system504, or the software frameworks508. In various implementations, the operating system504manages hardware resources and provides common services. The operating system504includes, for example, a kernel520, services522, and drivers524. The kernel520acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel520provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services522can provide other common services for the other software layers. The drivers524are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers524can include display drivers, signal-processing drivers to optimize modeling computation, memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries506provide a low-level common infrastructure utilized by the applications510. The libraries506can include system libraries530such as neural network libraries used by the neural network compiler542or other libraries that can provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like.
In addition, the libraries506can include API libraries532such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic context on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries506may also include other libraries534. The software frameworks508provide a high-level common infrastructure that can be utilized by the applications510, according to some embodiments. For example, the software frameworks508provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The software frameworks508can provide a broad spectrum of other APIs that can be utilized by the applications510, some of which may be specific to a particular operating system504or platform. Certain embodiments are described herein as including logic or a number of components, modules, elements, or mechanisms. Such modules can constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A “hardware module” is a tangible unit capable of performing certain operations and can be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) are configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. 
In some embodiments, a hardware module is implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware module can include dedicated circuitry or logic that is permanently configured to perform certain operations. For example, a hardware module can be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware module can include software encompassed within a general-purpose processor or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) can be driven by cost and time considerations. Accordingly, the phrase “module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware modules) at different times. Software can accordingly configure a particular processor or processors, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. 
Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules can be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications can be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between or among such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module performs an operation and stores the output of that operation in a memory device to which it is communicatively coupled. A further hardware module can then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules can also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein can be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module implemented using one or more processors. Similarly, the methods described herein can be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method can be performed by one or more processors or processor-implemented modules.
Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines600including processors610), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). In certain embodiments, for example, a client device may relay or operate in communication with cloud computing systems, and may access circuit design information in a cloud environment. The performance of certain of the operations may be distributed among the processors, not only residing within a single machine600, but deployed across a number of machines600. In some example embodiments, the processors610or processor-implemented modules are located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented modules are distributed across a number of geographic locations. FIG.6is a diagrammatic representation of the machine600in the form of a computer system within which a set of instructions may be executed for causing the machine600to perform any one or more of the methodologies discussed herein, according to an example embodiment.FIG.6shows components of the machine600, which is, according to some embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.6shows a diagrammatic representation of the machine600in the example form of a computer system, within which instructions616(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine600to perform any one or more of the methodologies discussed herein can be executed.
In alternative embodiments, the machine600operates as a standalone device or can be coupled (e.g., networked) to other machines. In a networked deployment, the machine600may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine600can comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, or any machine capable of executing the instructions616, sequentially or otherwise, that specify actions to be taken by the machine600. Further, while only a single machine600is illustrated, the term “machine” shall also be taken to include a collection of machines600that individually or jointly execute the instructions616to perform any one or more of the methodologies discussed herein. In various embodiments, the machine600comprises processors610, memory630, and I/O components650, which can be configured to communicate with each other via a bus602. In an example embodiment, the processors610(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an ASIC, a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) include, for example, a processor612and a processor614that may execute the instructions616. The term “processor” is intended to include multi-core processors610that may comprise two or more independent processors612,614(also referred to as “cores”) that can execute the instructions616contemporaneously. 
AlthoughFIG.6shows multiple processors610, the machine600may include a single processor612with a single core, a single processor612with multiple cores (e.g., a multi-core processor612), multiple processors610with a single core, multiple processors610with multiple cores, or any combination thereof. The memory630comprises a main memory632, a static memory634, and a storage unit636accessible to the processors610via the bus602, according to some embodiments. The storage unit636can include a machine-readable medium638on which are stored the instructions616embodying any one or more of the methodologies or functions described herein. The instructions616can also reside, completely or at least partially, within the main memory632, within the static memory634, within at least one of the processors610(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine600. Accordingly, in various embodiments, the main memory632, the static memory634, and the processors610are considered machine-readable media638. As used herein, the term “memory” refers to a machine-readable medium638able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium638is shown, in an example embodiment, to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store the instructions616. 
The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., the instructions616) for execution by a machine (e.g., the machine600), such that the instructions, when executed by one or more processors of the machine (e.g., the processors610), cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory (e.g., flash memory), an optical medium, a magnetic medium, other non-volatile memory (e.g., erasable programmable read-only memory (EPROM)), or any suitable combination thereof. The term “machine-readable medium” specifically excludes non-statutory signals per se. The I/O components650include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. In general, it will be appreciated that the I/O components650can include many other components that are not shown inFIG.6. The I/O components650are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components650include output components652and input components654. The output components652include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor), other signal generators, and so forth. 
The input components654include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. Communication can be implemented using a wide variety of technologies. The I/O components650may include communication components664operable to couple the machine600to a network680or devices670via a coupling682and a coupling672, respectively. For example, the communication components664include a network interface component or another suitable device to interface with the network680. In further examples, the communication components664include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, BLUETOOTH® components (e.g., BLUETOOTH® Low Energy), WI-FI® components, and other communication components to provide communication via other modalities. The devices670may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). In various example embodiments, one or more portions of the network680can be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a WI-FI® network, another type of network, or a combination of two or more such networks.
For example, the network680or a portion of the network680may include a wireless or cellular network, and the coupling682may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. Furthermore, the machine-readable medium638is non-transitory (in other words, not having any transitory signals) in that it does not embody a propagating signal. However, labeling the machine-readable medium638“non-transitory” should not be construed to mean that the machine-readable medium638is incapable of movement; the machine-readable medium638should be considered as being transportable from one physical location to another. Additionally, since the machine-readable medium638is tangible, the machine-readable medium638may be considered to be a machine-readable device. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The detailed description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. The terms “a” or “an” should be read as meaning “at least one,” “one or more,” or the like. The use of words and phrases such as “one or more,” “at least,” “but not limited to,” or other like phrases shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. Boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. 
The description above includes systems, methods, devices, instructions, and computer media (e.g., computing machine program products) that embody illustrative embodiments of the disclosure. In the description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. | 54,485 |
11861493 | DETAILED DESCRIPTION Sensitive data may be included in training data provided to train a model of a machine learning application and/or in other data provided to a dynamically trained model. While the models are trained to classify or otherwise infer meaning from data based on the training set and/or other data, in some cases, due to the method of training and/or the contents of the training set, the model may “memorize” data from the training set. In these cases, the model may output the memorized data responsive to a determined classification or inference based on data input to the model. In some instances, the memorized data may be the sensitive data that should not be disclosed. In some cases, the memorized data may be personal information of a user from which training data was obtained and/or the memorized data may provide clues to a competitor trying to reverse engineer the model, how the model was trained, and/or contents of the training data set. Accordingly, machine learning models and/or applications with more privacy-aware capabilities are desired. FIG.1Aillustrates examples of a machine learning model102that has been trained to provide descriptive text captions for images. The machine learning model102is trained to provide a text output for an image input. The training may have included providing a training data set including hundreds or thousands of images (e.g., inputs) with text captions (e.g., desired result) that described the elements included in the images. Based on the training data set, the machine learning model may learn to recognize various elements in images and provide text associated with those elements. In some examples, the machine learning model may include a neural network. In a first example shown inFIG.1A, an image100is provided to the trained machine learning model102. The machine learning model102analyzes the image100and provides a caption104“roads leading to mountains” as a result. 
In a second example, an image110is provided to the machine learning model102. The model analyzes the image110and provides a caption114“a busy desk with laptop, glasses, cup, and a sticky note with password 1p5x3c9r.” In the first example, the machine learning model102provides a caption104that reflects the contents of the image100, but likely also describes many other images. In the second example, the machine learning model102provides a caption114that reflects the contents of the image110, but the caption114describes the image110in more detail and is less likely to describe other images. It may be inferred from caption114that the training data set likely included few images of desks with other elements described in caption114. In this instance, the machine learning model102may have memorized a training image and/or its associated captions. In some cases, it may be inferred that image110was included in the training data set. This analysis of the results of the machine learning model102may allow a user to determine characteristics and/or contents of the training data set and/or determine how the machine learning model102was trained. Furthermore, the caption114includes a password. The password may have been in an image in the training data set in some examples. In some instances, the password may be sensitive information that is not desirable to be provided as a result of the machine learning model102. FIG.1Billustrates examples of a machine learning model122that has been trained to provide text outputs132responsive to speech inputs120. As shown inFIG.1B, a user may provide a speech input120such as “Please provide the best route to Margaret's house” to a computing device. 
The soundwaves from the speech input120may be received by a microphone included with the computing device, and the computing device may provide signals responsive to the soundwaves (e.g., digital signals) to the machine learning model122, which may be included on the computing device and/or may be on a cloud computing system in communication with the computing device. As shown in block124, the machine learning model122may make inferences from the signals to determine what words were spoken. Once the words are determined as shown by block126, the machine learning model122may infer an intent of the user from the words as shown in block128. In layman's terms, the machine learning model122may determine what the user wants the computing system to do. Based on the inferred intent, the machine learning model122may formulate a response (e.g., output) as shown by block130. The output may then be provided to the user by the computing device, such as by displaying on a screen. In this example, as shown in block132, the output is directions to Margaret's house. The output of machine learning model122may be desirable if Margaret is a name of a business open to the public or Margaret is an individual known to the user. For example, the user may have provided her personal address book to the computing device for analysis by the machine learning model122, and Margaret may be a contact in the address book. However, providing directions to Margaret's house to the user may be undesirable if Margaret is an individual and is not known to the user. In these instances, it may be desirable to prevent the computing device from providing the result to the user and/or providing an alternate result such as directions to a business open to the public with a similar sounding name (e.g., “Margarita Hut” in this example).
Although the examples provided inFIG.1Aprovide image inputs and text outputs and the example provided inFIG.1Bprovides speech inputs and text outputs, machine learning memorization may occur with other data types such as text for both inputs and results, speech data for inputs and results, speech data for inputs and text for results, etc. For example, memorization may occur when text is both the input and the result, such as when a machine learning model suggests words or phrases to a user typing a document (e.g., an email) based, at least in part, on the letters or words the user has already typed. In this example, the user may have typed, “Let's meet at Jane's house at” and if the machine learning model memorized a result based on the input, the machine learning model may provide a suggestion of a specific address for the house. In this case, the privacy of a resident of the specific address may be compromised. In accordance with examples of the present disclosure, data may be abstracted and/or masked prior to being provided to a machine learning model for training. This may increase “privacy awareness” of the machine learning model and reduce or prevent the machine learning model from “memorizing” sensitive information in some applications. In accordance with examples of the present disclosure, a machine learning model may provide a confidence level associated with a result. If the confidence level is too high, the machine learning model or an application including the machine learning model may refrain from providing the result as an output. In some examples, no result may be provided when the confidence level of a particular output is too high. In other examples, the machine learning model may provide a “second best” result that has an acceptable confidence level. This “second best” result may be more privacy-aware in that it is less likely to disclose sensitive information. In still other examples, an error signal may be provided as the output.
In accordance with examples of the present disclosure, data may be abstracted and/or masked prior to being provided to a machine learning model for training and confidence levels of results of the trained machine learning model may be used to determine when a result should be withheld. Processing data used for training machine learning models and/or not providing a result from the machine learning model under certain conditions may reduce or prevent exposure of sensitive data and/or reverse engineering of the machine learning model, training methods, and/or training data. FIG.2is a schematic illustration of a computing device arranged in accordance with examples of the present disclosure. The computing device200may include processor(s)202, a computer readable medium (or media)204, a memory controller210, a memory212, and interface(s)214. In some examples, the computing device200may include a display216. The computer readable medium204may be accessible to the processor(s)202. The computer readable medium204may be encoded with executable instructions208. The executable instructions208may be executed by the processor202. In some examples, the executable instructions208may cause the processor202to implement a machine learning application that includes one or more machine learning models. The machine learning application may implement various functions such as generating training data sets, training a machine learning model, and/or applying a trained machine learning model to received data to generate a result. Alternatively or additionally, in some examples, the machine learning application, or a portion thereof, may be implemented in hardware included with the computer readable medium204and/or processor(s)202, for example, application-specific integrated circuits (ASICs) and/or field programmable gate arrays (FPGA). The computer readable medium204may store data206. In some examples, the data206may include one or more training data sets, such as training data set218. 
In some examples, training data set218may be received from another computing device (e.g., an edge device222, a cloud computing device). In other examples, the training data set218may be generated by the computing device200. In some examples, the training data sets may be used to train one or more machine learning models. In some examples, the data206may include data used in a machine learning model (e.g., weights, connections between nodes). In some examples, the data206may include other data, such as new data220. In some examples, the other data may be analyzed by a trained machine learning model to make an inference (e.g., provide a result/output based on the data). In some examples, the data206may include outputs generated by one or more machine learning models implemented by the computing device200. The computer readable medium204may be implemented using any medium, including non-transitory computer readable media. Examples include memory, random access memory (RAM), read only memory (ROM), volatile or non-volatile memory, hard drive, solid state drives, or other storage. While a single medium is shown inFIG.2, multiple media may be used to implement computer readable medium204. In some examples, the processor(s)202may be implemented using one or more central processing units (CPUs), graphical processing units (GPUs), ASICs, FPGAs, or other processor circuitry. In some examples, the processor(s)202may execute some or all of the executable instructions208. In some examples, the processor(s)202may be in communication with a memory212via a memory controller210. In some examples, the memory212may be volatile memory, such as dynamic random access memory (DRAM). The memory212may provide information to and/or receive information from the processor(s)202and/or computer readable medium204via the memory controller210in some examples. While a single memory212and a single memory controller210are shown, any number may be used. 
In some examples, the memory controller210may be integrated with the processor(s)202. In some examples, the interface(s)214may provide a communication interface to another device (e.g., edge device222), a user, and/or a network (e.g., LAN, WAN, Internet). The interface(s)214may be implemented using a wired and/or wireless interface (e.g., Wi-Fi, BlueTooth, HDMI, USB, etc.). In some examples, the interface(s)214may include user interface components which may receive inputs from a user. Examples of user interface components include a keyboard, a mouse, a touch pad, a touch screen, and a microphone. In some examples, the interface(s)214may communicate information, which may include user inputs, data206, training data set218, and/or new data220, between external devices (e.g., edge device222) and one or more components of the computing device200(e.g., processor202and computer readable medium204). In some examples, the computing device200may be in communication with a display216that is a separate component (e.g., using a wired and/or wireless connection) or the display216may be integrated with the computing device. In some examples, the display216may display data206such as outputs generated by one or more machine learning models implemented by the computing device200. Any number or variety of displays may be present, including one or more LED, LCD, plasma, or other display devices. In some examples, the training data set218and/or new data220may be provided to the computing device200via the interface214. Optionally, in some examples, some or all of the training data sets218and/or new data220may be provided to the computing device200by an edge device222. In some examples, computing device200may provide results, such as inferences made by a machine learning application, to the edge device222. In some examples, the edge device222may also be a computing device that includes similar components to the components shown in computing device200.
In some examples, the edge device222may be a mobile device such as a smart phone or tablet. In some examples, the edge device222may be a desktop computer or other stationary device. In some examples, edge device222and computing device200may be included in a computing system, such as a cloud computing system. In this example, the computing device200may be a cloud computing device. In some examples, the computing device200may be included in a server. In some examples, computing device200may process data (e.g., data206, training data set218, and/or new data220) to mask and/or abstract sensitive information. The processed data may be used to generate a training set for training a machine learning model (e.g., neural network, support vector machine, decision tree). In some examples, the machine learning model may be trained by the computing device200. In some examples, the trained machine learning model may be implemented by the computing device200and/or the computing device200may implement one or more other trained machine learning models. In some examples, the computing device200may implement a machine learning model that provides a result (also referred to as an inference) based on an input (e.g., data such as new data220) as well as a confidence level associated with the result. The machine learning model and/or other components of the computing device200may provide an output based on the confidence level associated with the result. For example, if the confidence level is equal to or above a threshold, that may suggest that the machine learning model “memorized” a result from the training data set. In this case, the output may not contain the result. In some examples, the computing device200may output a different result (such as a result having a second-highest confidence level) from the machine learning model that has a confidence level with an acceptable value (e.g., equal to or below a threshold value) and provide this result as the output.
In some examples, the output may include an error signal. FIG.3is a functional block diagram of a machine learning application300for abstracting and/or masking data in accordance with examples of the present disclosure. In some examples, machine learning application300may be implemented by computer readable instructions. In some examples, machine learning application300may be implemented by hardware, such as FPGAs and/or ASICs. In some examples, machine learning application300may be implemented by a combination of computer readable instructions and hardware. In some examples, machine learning application300may be implemented by computing device200shown inFIG.2. The machine learning application300may include a training data set302. The training data set302may include one or more inputs (X)304, each associated with a corresponding result (Y)306. In some examples, the training data set302may be pre-existing. In other examples, the machine learning application300may generate the training data set302from received data322. In some examples, the machine learning application300may generate the training data by tokenizing received data322, which is described in more detail with reference toFIG.4. In some examples, data322may be received from a computer readable medium included with a computing device that implements the machine learning application. In some examples, the data322may be received from an application320implemented by another computing device, such as an edge device222. The machine learning application300may process the training data set302to abstract and/or mask sensitive data and generate a modified training data set310. As used herein, abstracting data means to replace specific values of a data type with a generic value. For example, a data type may be proper names (e.g., John Smith, Sarah Jones). All proper names in the original data may be replaced with a generic value that indicates a proper name was present in the original data (e.g., NAME, PROPER NAME). 
In another example, a data type may be a specific date (e.g., Dec. 25, 1978). All specific dates may be replaced with a generic value that indicates a date was present or a relative date was present (e.g., DATE, TODAY'S DATE). As used herein, masking data means to remove specific values of a data type. When the specific value is removed, it may or may not be replaced with an indication that a value has been removed (e.g., XXX). The abstracting and/or masking308of the training data set302may include classifying and/or ranking the data of the training data set302. Classifying the data refers to analyzing the data and determining one or more data types included in the data. For example, the data may be tokenized and each token of data may be analyzed to determine the data type included in that token. Data type refers to the kind of information included in the data (e.g., date, account number, quantity, pixel intensity, diagnosis). Certain data types may be sensitive data (e.g., proper name, address, account number). Ranking the data refers to analyzing the data and determining how often particular data types and/or values are present in the data. For example, the ranking may determine a number of times the value “benign” appears in the data and/or in data classified as having a “diagnosis” data type. In some examples, whether a value of the data in the training data set302is abstracted or masked may be based, at least in part, on the classification and/or rank of the value. In some examples, if the value is classified as non-sensitive data, the value may not be abstracted or masked regardless of rank. In some examples, if the value is classified as sensitive data, but the rank indicates that the value appears many times in the data (e.g., appears a threshold number of times or represents a percentage of values of a data type above a threshold), the value may be abstracted.
In some examples, if the value is classified as sensitive data and the rank indicates that the value is rare (e.g., appears below a threshold number of times or represents a percentage of values of a data type below a threshold), the value may be masked. In some examples, the abstracting and/or masking308may be performed by a rules-based system (e.g., all strings of numbers of a certain length are account numbers). In some examples, the abstracting and/or masking308may be performed by a machine learning model trained to identify data types, including sensitive data, in training data sets. The data of training data set302processed by the abstracting and/or masking308may be used to generate a modified training data set310. The modified training data set310may include one or more modified inputs (X′)314and corresponding modified results (Y′)312. The modified training data set310may have some or all of the sensitive data from training data set302abstracted or removed. The modified training data set310may be used to train a machine learning model316. In some examples, using the modified training data set310may reduce or eliminate the risk of the machine learning model316“memorizing” sensitive data that could then be provided as a result. As shown inFIG.3, once trained, the machine learning model (f( ))316may receive new input data (Xnew)324and provide a result (ŷ)326based on the new input data324such that ŷ=f(Xnew). In some examples, such as the one shown inFIG.3, the new input data324may be provided from an application320, which may be implemented on a separate device, and the machine learning model316may provide the result to the application320. In some embodiments, where machine learning model316is dynamically trained, the new input data324and results326may be included in another training data set302that is abstracted and/or masked prior to being used to train the machine learning model316. 
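The classify-rank-then-transform logic described above can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the sensitive data types, the abstraction labels, the rank threshold, and the `classify` callback are all assumptions introduced for illustration.

```python
from collections import Counter

# Illustrative assumptions: which data types count as sensitive, and the
# generic label substituted when a sensitive value is abstracted.
SENSITIVE_TYPES = {"name", "date", "account_number", "password"}
ABSTRACTIONS = {"name": "NAME", "date": "DATE",
                "account_number": "ACCOUNT", "password": "PASSWORD"}
RANK_THRESHOLD = 5  # values appearing fewer times than this are "rare"


def transform(tokens, classify):
    """Abstract or mask sensitive values in a list of string tokens.

    classify is a caller-supplied function returning a data type per token
    (a rules-based or learned classifier, per the description above).
    """
    counts = Counter(tokens)  # rank: how often each value appears
    out = []
    for tok in tokens:
        dtype = classify(tok)
        if dtype not in SENSITIVE_TYPES:
            out.append(tok)                  # non-sensitive: keep regardless of rank
        elif counts[tok] >= RANK_THRESHOLD:
            out.append(ABSTRACTIONS[dtype])  # common sensitive value: abstract
        else:
            out.append("XXX")                # rare sensitive value: mask
    return out
```

Under these assumptions, a proper name that appears frequently in the data is replaced with the generic label NAME, while a rarely occurring name is masked as XXX, consistent with the rules described above.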
FIG.4shows an example of tokenizing data in accordance with examples of the present disclosure. In some examples, the tokenizing may be performed by a machine learning application, such as machine learning application300. The original data400is a text string “Let's organize a meeting on Sep. 15, 2020.” The original data400is parsed into segments, referred to as tokens404, which may be analyzed individually by a machine learning model in some examples. In some examples, such as the one shown inFIG.4, the original data400may be tokenized such that elements of the original data400are repeated across different tokens404. For example, the word “meeting” appears in three different tokens404inFIG.4. The tokens404are organized such that tokens404of inputs406are associated with tokens404of desired results408. All of the sets of inputs406and results408pairs may be used as a training data set402to train a machine learning model. The example provided inFIG.4illustrates tokenizing with text data. In some examples, tokens of text data may be generated using k-grams, but other methods may also be used. Furthermore, the example provided inFIG.4is merely illustrative and the disclosure is not limited to text data or the particular method of tokenization shown. FIG.5is a flow chart of a method500in accordance with examples of the present disclosure. In some examples, all or a portion of method500may be performed by a computing device, for example, computing device200shown inFIG.2. In some examples, all or a portion of the method500may be performed by a machine learning application, such as machine learning application300shown inFIG.3, which in some examples may be implemented by a computing device such as computing device200. At block502, “receiving data” may be performed. In some examples, the data may be received by an interface, such as interface214. In some examples, the data may include text, images, and/or sound data. 
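The sliding-window tokenization illustrated inFIG.4can be sketched as follows, pairing each window of consecutive words (input) with the word that follows it (desired result), so that elements such as “meeting” repeat across different tokens. The window size k and whitespace-based splitting are illustrative assumptions; as noted above, other tokenization methods may also be used.

```python
def make_pairs(text, k=3):
    """Parse text into overlapping (input, result) pairs for a training set.

    Each input is a window of k consecutive words; the associated result is
    the next word. k and whitespace splitting are illustrative choices.
    """
    words = text.split()
    pairs = []
    for i in range(len(words) - k):
        pairs.append((" ".join(words[i:i + k]), words[i + k]))
    return pairs
```

For the sentence fromFIG.4, “Let's organize a meeting on Sep. 15, 2020.”, this yields pairs such as (“Let's organize a”, “meeting”), with later windows reusing “meeting” as part of their inputs.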
In some examples, the data may be received from an edge device, such as edge device222. At block504, “ranking the data” may be performed. In some examples, a rank may indicate a number of times one or more values is included in the data. At block506, “classifying the data,” may be performed. In some examples, a classification may indicate one or more data types included in the data. In some examples, the classifying may be a rules-based classification. In some examples, the classifying may be performed by a machine learning model, such as a neural network. In some examples, block506may be performed before block504. In some examples, block504and506may be performed simultaneously. At block508, “changing a value” may be performed. In some examples, a value of one or more values included in the data may be changed. In some examples, the value may be abstracted or masked. How the value is changed and/or whether the value is changed may be based, at least in part, on the rank and classification of the value in some examples. In some examples, changing the value may include masking the value when the classification indicates the data type of the value is sensitive data and the rank indicates the number of times the value is included in the data is equal to or below a threshold value. In some examples, changing the value may include abstracting the value when the classification indicates the data type of the value is sensitive data and the rank indicates the number of times the value is included in the data is equal to or above a threshold value. In some examples, sensitive data may include proper names, dates, addresses, passwords, birth dates, account numbers, and/or user names. At block510, “providing the data to a machine learning model” may be performed. The data provided to the machine learning model may include the changed values in some examples. That is, the data provided to the machine learning model may be modified from the data originally received at block502. 
In some examples, the data may be used as a training data set to train the machine learning model. Optionally, at block512, “training the machine learning model” may be performed. The machine learning model may be trained with the training data set. Optionally, in some examples, “parsing the data into one or more tokens” may be performed at block514. In some examples, individual ones of the tokens may include at least a portion of the data received at block502. In some examples, such as the one shown inFIG.5, the parsing may be performed prior to ranking and/or classifying the data. FIG.6is a functional block diagram of a machine learning application600for providing outputs in accordance with examples of the present disclosure. In some examples, machine learning application600may be implemented by computer readable instructions. In some examples, machine learning application600may be implemented by hardware, such as FPGAs and/or ASICs. In some examples, machine learning application600may be implemented by a combination of computer readable instructions and hardware. In some examples, machine learning application600may be implemented by computing device200shown inFIG.2. In some examples, machine learning application600may be used in combination with and/or be included with machine learning application300shown inFIG.3. For example, machine learning model602may be included in machine learning model316or machine learning model316may be included in machine learning model602. In some examples, the machine learning application600may include a machine learning model602that may be trained to generate a result (Y)604(e.g., an inference) based on data (X)622provided to the machine learning model602as an input. The machine learning model602may generate a confidence level (C)606associated with the result604. The confidence level606may represent a degree of certainty (e.g., probability) that the machine learning application600has provided a correct or desired result604based on the data622.
Determining the confidence level606is described in more detail with reference toFIGS.7and8. Typically, providing results with low confidence levels is undesirable. However, in machine learning models, absolute or near absolute certainty is rare. Thus, confidence levels corresponding to such certainty may indicate that the machine learning model602memorized a result from a training data set (not shown inFIG.6) used to train the machine learning model602. In some applications, results with high confidence levels may be more likely to include sensitive data and/or may expose information regarding the machine learning model and/or training data set. Accordingly, it may be desirable to refrain from providing result604if the confidence level606is high. In some examples, the confidence level606may be analyzed as shown at block608. In some examples, block608may include a comparator which may compare the confidence level606to one or more threshold values. In some examples, the confidence level606may be compared to a threshold value that may confirm that the result604does not include a memorized result from a training data set. In some examples, the threshold value may represent a high certainty or probability that the result604is the correct or desired result based on the data622. For example, the threshold value may be 0.99 or 1.00 in some examples. Optionally, in some examples, another threshold value may confirm that the confidence level606is high enough to provide a correct result604with an acceptable level of reliability. What threshold value corresponds to an acceptable level of reliability may vary depending on the application. For example, in some applications, a threshold value of 0.51 may be an acceptable confidence level. In other applications, a threshold value of 0.60 may be an acceptable confidence level. In other applications, a threshold value of 0.80, 0.90, or 0.95 may be an acceptable confidence level. 
In some applications, a threshold level may not be used and a classification having a highest probability (and/or highest probability after removing any classifications with probabilities greater than an upper threshold value) may be returned as result604. Based on the analysis of the confidence level606, the machine learning application600may provide an output624. In some examples, if the analysis of the confidence level606determines that the result604is not a memorized result (e.g., the confidence level606is equal to or below a threshold value), the output624may include the result604as indicated by block610. In some examples, if the analysis of the confidence level606determines that the result604is a memorized result (e.g., the confidence level606is equal to or above a threshold value), the output624may not include the result604. In some examples, as indicated by block612, the output624may include an error signal. The error signal may indicate that no result can be provided for the input data622. Optionally, in some examples, the error signal may be provided when the confidence level606is equal to or below a threshold value that indicates that the result604is not reliable (e.g., has a low probability of being the correct and/or desired output for the data622). Optionally, in some examples, if the confidence level606indicates the result604is a memorized result, the machine learning application600may generate another result (Y′) from the machine learning model602that has a confidence level that indicates the result is not memorized. That is, the confidence level for the new result Y′ may be lower than the confidence level606associated with the original result604. In some instances, the result Y′ may represent a “second best” result. The result Y′ may then be included in the output624as indicated by block614. 
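The gating logic of blocks608-614 can be sketched as follows. The threshold values, the function name, and the output dictionary shape are illustrative assumptions, not part of the disclosure; the result is withheld when the confidence suggests memorization, and an error signal is returned when the confidence is too low to be reliable.

```python
# Hypothetical sketch of the block608 comparator and the output selection of
# blocks610-614; thresholds are illustrative (0.99 memorization, 0.60 reliability).
def gate_output(result, confidence, second_best=None,
                memorized_threshold=0.99, reliability_threshold=0.60):
    """Return an output dict based on the confidence analysis."""
    if confidence >= memorized_threshold:
        # Confidence at or above the upper threshold: likely a memorized
        # training example, so withhold the result (block612 or block614).
        if second_best is not None:
            return {"result": second_best, "error": None}  # the Y' path
        return {"result": None, "error": "no result available"}
    if confidence <= reliability_threshold:
        # Too uncertain to be a reliable result: return an error signal.
        return {"result": None, "error": "no result available"}
    return {"result": result, "error": None}  # block610: pass the result through
```

A serving wrapper could first query the model for its best and second-best results, then call a function like this to decide what reaches the user.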
In some examples, the data622may be provided by a separate application620, which may be included on a computing device separate from the computing device which implements machine learning application600. For example, application620may be on an edge device, such as edge device222. In some examples, the output624may be provided to the application620. In some applications, concern over including memorized results in the output624may vary depending on the source of the data622, the source of the training data set used to train machine learning model602, what the output624is provided to, and/or a user of the machine learning application600. For example, if an administrator is using the machine learning application600, the threshold value may be set high (e.g.,1.0) for determining whether a result is memorized or not. An example of an administrator may be a software engineer at a company that owns the machine learning application600who is testing the machine learning application600. In another example, if a user accessing the machine learning application600(e.g., a user of application620) is also the source of the training data set, the threshold value may also be set high. For example, a smart compose machine learning model602may have been trained only on the user's own emails. In a further example, if a user is not an administrator and the machine learning model602was not trained solely on data from the user, the threshold value may be set lower (e.g., 0.97, 0.98, 0.99). FIG.7is a diagram of a neural network700in accordance with examples of the present disclosure. In some examples, the neural network700may be included in a machine learning model, such as machine learning model316and/or machine learning model602. In some examples, neural network700may be deterministic. The neural network700may include input nodes702. In some examples, the input nodes702may be organized in a layer. The input nodes702may be coupled to one or more layers of hidden units706by weights704. 
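The threshold policy examples above (administrators and users who supplied the training data receive a higher memorization threshold) might be sketched as the function below; the function name and the exact values are hypothetical, chosen to match the figures cited in the text.

```python
# Illustrative threshold policy: administrators and users who are the source
# of the training data get a permissive (high) memorization threshold, while
# third-party users get a stricter one. Values mirror the examples above.
def memorization_threshold(user_type, user_is_training_source):
    if user_type == "administrator":
        return 1.0  # e.g., an engineer testing the application
    if user_is_training_source:
        return 1.0  # e.g., smart compose trained only on the user's own emails
    return 0.98     # lower threshold for everyone else (0.97-0.99 range)
```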
In some examples, the hidden units706may perform operations on one or more inputs from the input nodes702based, at least in part, on the associated weights704. The outputs of the hidden units706may be provided to an output layer708that can return confidence values, that is, values associated with a level of confidence in a result inferred by the neural network700. The output layer708may calculate confidence values (e.g., confidence levels) associated with a result Y provided to a result node710. In some examples, the output layer708may use a softmax function to calculate the confidence value of a classification or regression output. The softmax function may be represented as:

$$\mathrm{softmax}(\hat{y})=\frac{e^{\hat{y}^{(n)}}}{\sum_{n} e^{\hat{y}^{(n)}}}$$

Where softmax(ŷ) is used as the confidence values of the output, ŷ is the output and n is the number of outputs. However, variations of the softmax equation (e.g., argmax) or other equations or specialized additional layers may be used to calculate the confidence level in other examples. FIG.8is a diagram of a neural network800in accordance with examples of the present disclosure. In some examples, the neural network800may be included in a machine learning model, such as machine learning model316and/or machine learning model602. In some examples, neural network800may be stochastic (e.g., a Bayesian representation). Similar to neural network700, the neural network800may include input nodes802. In some examples, the input nodes802may be organized in a layer. The input nodes802may be coupled to one or more layers of hidden units806by weights804. In some examples, the hidden units806may perform operations on one or more inputs from the input nodes802based, at least in part, on the associated weights804. The outputs of the hidden units806may be provided to a result node810. However, unlike neural network700, the result at result node810is not a single value but a distribution of outputs Y808. 
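The softmax-based confidence computation described for output layer708can be sketched in plain Python as follows; subtracting the maximum logit before exponentiating is a standard numerical-stability step and the function name is illustrative.

```python
import math

def softmax_confidence(logits):
    """Apply softmax to raw outputs; the maximum probability serves as the
    confidence level of the predicted class, as described for output layer708."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return probs[best], best  # (confidence level, predicted class index)
```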
In some examples, the distribution of outputs Y may be used to estimate the confidence level from the probability distribution represented as:

$$p(y^{(n)} \mid x^{(n)}, \theta)$$

Where θ are the weights of the neural network800and p is the conditional probability distribution on the output layer, from which the confidence level is derived. Other distributions or analysis of the distribution of outputs808may be used in other examples to determine the confidence level. The techniques for determining the confidence level shown inFIGS.7and8are provided for exemplary purposes only and the disclosure is not limited to the examples provided. FIG.9is a flow chart of a method900in accordance with examples of the present disclosure. In some examples, all or a portion of method900may be performed by a computing device, for example, computing device200shown inFIG.2. In some examples, all or a portion of the method900may be performed by a machine learning application, such as machine learning application600shown inFIG.6and/or machine learning application300shown inFIG.3, which in some examples may be implemented by a computing device such as computing device200. At block902, "receiving a data input" may be performed. In some examples, the data input may be received by an interface, such as interface214. In some examples, the data input may include text, images, and/or sound data. In some examples, the data input may be received from an edge device, such as edge device222. At block904, "analyzing the data input with a machine learning model to generate a result and a confidence level" may be performed. In some examples, the machine learning model may be a neural network. In some examples, the neural network may be deterministic. In some examples, the confidence level may be generated based, at least in part, on a softmax algorithm, such as the one referred to inFIG.7. In some examples, the neural network may be stochastic. 
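For a stochastic network such as neural network800, one common way to approximate the output distribution p(y|x, θ) is repeated forward passes. The sketch below is an assumption about how such an estimate might be implemented, not the disclosed method: it takes a toy model callable and uses the empirical frequency of the modal output as the confidence level.

```python
import random
from collections import Counter

def stochastic_confidence(model, x, n_samples=100, seed=0):
    """Approximate the confidence level of a stochastic model by running
    repeated forward passes and measuring the frequency of the modal output.
    `model` is any callable taking (input, rng); purely illustrative."""
    rng = random.Random(seed)
    outputs = Counter(model(x, rng) for _ in range(n_samples))
    label, count = outputs.most_common(1)[0]
    return label, count / n_samples  # (modal result, empirical confidence)
```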
In some examples, the confidence level may be generated based, at least in part, on a distribution of the results. At block906, "comparing the confidence level to a threshold value" may be performed. In some examples, the comparing may be performed by a comparator. At block908, "providing an output based on the comparing" may be performed. In some examples, the output may be provided from a computing device, such as computing device200, to an edge device, such as edge device222. In some examples, the output includes an error signal when the confidence level is equal to or above the threshold value. In some examples, the output includes the result when the confidence level is equal to or below the threshold value. In some examples, the threshold value is 0.99. In some examples, the threshold value is based, at least in part, on a type of user of the machine learning model. Types of users may include regular users and administrators, for example. In some examples, the threshold value is based, at least in part, on a source of a training data set used to train the machine learning model and a user of the machine learning model. In some examples, the threshold value is higher when the source of the training data set is the user than when the source of the training data set is not the user. Optionally, when the confidence level is equal to or above the threshold value, in some examples, blocks910and912may be performed to provide a more privacy-aware result. At block910, "analyzing the data input with the machine learning model to generate a second result" may be performed. The second result may have a second confidence level below the threshold value in some examples. At block912, "providing the second result as the output" may be performed. In some examples, method900may be performed during and/or after method500. FIG.10is a computing system1000in accordance with examples of the present disclosure. 
The computing system1000may include one or more edge devices1012, such as a wearable (e.g., a smart watch)1002and/or a mobile device (e.g., smart phone, tablet)1004. The wearable1002and/or mobile device1004may be operated by a user1001. The computing system1000may further include a cloud computing system1006, which may include one or more computing devices (e.g., computing device200). In some examples, the edge devices1012may implement one or more machine learning applications, such as applications300,320,600, and/or620, or portions thereof. For example, the edge devices1012may implement a machine learning application that abstracts and/or masks data collected by the edge device1012. For example, the wearable1002may collect fitness data (e.g., user location, heart rate, miles per hour, workout duration) and the machine learning application implemented by the wearable1002may abstract and/or mask certain values in the fitness data (e.g., exact locations). In some examples, the cloud computing system1006may implement one or more machine learning applications, such as applications300,320,600, and/or620, or portions thereof. For example, the cloud computing system1006may include a training application1008that generates training data sets and/or trains a machine learning model. In some examples, the abstracted and/or masked data may then be provided from the edge device1012to the training application1008. The training application1008may use the abstracted and/or masked data from the edge device1012to train a machine learning model. In this example, since the abstracting and/or masking is performed on the edge device1012, little or no sensitive data may be transmitted by the edge device1012and/or received by the cloud computing system1006. This may provide additional security for sensitive information in some applications. 
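Edge-side abstraction and masking of fitness data, as described for wearable1002, might be sketched as below. The record fields, grid size, and function name are illustrative assumptions; the idea is that exact locations are coarsened and exact timestamps dropped before anything leaves the edge device1012.

```python
def mask_fitness_record(record, grid=0.01):
    """Hypothetical edge-side masking: coarsen GPS coordinates to a grid
    (~1 km at grid=0.01 degrees) and drop the exact timestamp before the
    record is uploaded to the cloud training application."""
    masked = dict(record)  # leave the original record untouched
    masked["lat"] = round(record["lat"] / grid) * grid
    masked["lon"] = round(record["lon"] / grid) * grid
    masked.pop("timestamp", None)  # remove exact time of activity
    return masked
```

Non-sensitive fields (e.g., heart rate, workout duration) pass through unchanged, so the cloud training application1008still receives usable training data.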
In some examples, the cloud computing system1006may include a machine learning application1010that generates results based on inputs provided from the edge devices1012. In some examples, the machine learning application1010may implement a machine learning application, such as machine learning application600, which suppresses memorized results. In some examples, only non-memorized results (e.g., results having a confidence level equal to or below a threshold value) are provided from the cloud computing system1006to the edge devices1012. In some applications, this may reduce the risk of sensitive data being released by the cloud computing system1006and/or other information that may allow reverse engineering of the machine learning application1010. The apparatuses, systems, and methods of the present disclosure may enable more privacy-aware operations of machine learning models, applications, and/or systems. The apparatuses, systems, and methods described herein may abstract and/or mask values in data prior to providing the data to a machine learning model for training. This may reduce or prevent the machine learning model from memorizing sensitive information in some applications. Furthermore, the apparatuses, systems, and methods of the present disclosure may analyze a confidence level associated with a result from a machine learning model. If the confidence level is too high, the result may not be provided as an output. Abstracting and/or masking data used for training machine learning models and/or not providing a result from the machine learning model under certain conditions may reduce or prevent exposure of sensitive data and/or reverse engineering of the machine learning model, training methods, and/or training data. In some applications, this may improve privacy protection of individuals and/or entities. 
The foregoing description of certain embodiments is merely exemplary in nature and is in no way intended to limit the scope of the disclosure or its applications or uses. In this detailed description of embodiments of the present apparatuses, systems and methods, reference is made to the accompanying drawings which form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the described apparatuses, systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the presently disclosed apparatuses, systems and methods, and it is to be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the disclosure. Moreover, for the purpose of clarity, detailed descriptions of certain features will not be discussed when they would be apparent to those with skill in the art so as not to obscure the description of embodiments of the disclosure. The discussion herein is therefore not to be taken in a limiting sense, and the scope of the disclosure is defined only by the appended claims. As used herein, the term "apparatus" may refer to a circuit, device, system, component, or combinations thereof. For example, an apparatus may be a computing device, a processor, a memory, a memory device, a mobile device, an edge device, a server, and/or a cloud computing system. Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods. Finally, the above discussion is intended to be merely illustrative and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. 
Thus, while various embodiments of the disclosure have been described in particular detail, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present disclosure as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims. | 44,891 |
11861494 | DESCRIPTION OF EMBODIMENTS Turning now toFIG.1, a computing architecture100is shown in which a cognitive space encoder104, trajectory generator106, decoder110and evaluator112may map a neural space of the neural network102(e.g., implemented with artificial intelligence and/or machine learning) into a latent space (e.g., a cognitive space), determine trajectories (e.g., cognitive process) through the latent space, and map the trajectories to an input space to evaluate the trajectories for validity. For example, the cognitive space encoder104, trajectory generator106, decoder110and evaluator112may be a neural network evaluation system that learns a compressed representation of how the neural network102transforms inputs into outputs. The cognitive space encoder104and decoder110may be trained based on activations of the neural network102and a training set during a training process to identify points in the input space (e.g., a human interpretable space such as images, labels, etc.) that correspond to the cognitive process. In doing so, the neural network evaluation system may be able to interpret a cognitive process of the neural network102in a human readable format (e.g., images, labels or facial features in the input data space) to determine whether the neural network102is operating with efficiency. The neural network102may be retrained based on whether the neural network102is operating with efficiency, resiliency and security. Thus, the computing architecture100may implement functions (e.g., decompose neural functions and trajectories into a human understandable format) that would be difficult if not impossible to manually implement. Moreover, the computing architecture100may identify trajectories through the cognitive space to fully understand and comprehend a reasoning process of the neural network102. 
Furthermore, the neural network102may be enhanced at least to the extent that the neural network102may be retrained with specific focuses to strengthen identified inefficiencies or inadequacies. In more detail, in neural network102, activations of all the layers that transform inputs X0, Xtinto outputs Y0, Ytmay be considered representations of the reasoning process of the neural network102. The neural network102may be parameterized by its weights and biases θ as fθ(x) where x is the input of the neural network102and f is the neural network102. The cognitive space encoder104may learn a latent space that represents the cognitive process of the neural network102and use the learned latent space (also referred to as a cognitive space) to evaluate how the neural network102relates the two different inputs X0and Xt. The cognitive space encoder104may receive activations A0, Atfrom the neural network102, and translate the activations A0, Atinto a low dimensional trajectory map106a(which may be a latent space and/or cognitive space). For example, the cognitive space encoder104may modify the activations A0, Atfrom a first dimensionality into a second dimensionality that is smaller than the first dimensionality to match the compressed dimensionality of the trajectory map106a. For example, an activation may be represented as having three dimensions (e.g., (x, y, z)), a function of the cognitive space encoder104may project the three dimensions to two dimensions (e.g., (x, y) where the x and y may be modified from original values based on the z value). Thus, in the creation of the trajectory map106athe cognitive space encoder104may include a function to map any point in the first dimension (e.g., (x,y,z) space) to the second dimension (e.g., 2D (x,y) space). 
The neural activations A0, Atmay be αfθ(x) of the neural network102(may also be referred to as fθ) as the concatenation of all the outputs of every layer in the neural network102, or a subset of them, depending on the application. The first and second dimensions of the activations A0, Atand/or modified neural activations C0, Ct, may each include inputs, parameters and/or outputs. The cognitive space encoder104may output modified activations (e.g., energies) C0, Ct, mapped to the trajectory map106aat a lower dimension relative to a dimension of the activations A0, At. In some embodiments, the modified activations C0, Ct, may carry information about the activations A0, Atbut are not interpretable as such in the space of the trajectory map106a. For example, the trajectory map106amay correspond to the neural space of the neural network102. The trajectory map106amay have a lower dimensional space than the neural space. The cognitive space encoder104thus compresses input data (e.g., activations A0, At) that are in the form of the analyzed neural network activations (e.g., αfθ(x)∈RN) into a much lower dimensional space C∈RM(e.g., M is much smaller than N) of the trajectory map106ato generate C0, Ct. The compressed representation embodied in trajectory map106amay facilitate path/trajectory planning methods to navigate from one point to another. The cognitive space encoder104may output activations C0, Ct. The activations C0, Ctmay be compressed versions of activations A0, Atthat are mapped to the trajectory map106a(e.g., the compressed representation of the neural space). In some embodiments, the activations A0, Atmay be a start and end point of a neural analysis (e.g., an initial point and an output point), and X0, Xtmay respectively be considered an initial data point (e.g., a facial image) and destination data point (e.g., a user associated with the image) from an input space (e.g., a human interpretable dataset). 
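The compression performed by the cognitive space encoder104(αfθ(x)∈RN mapped to C∈RM with M much smaller than N) can be illustrated with the sketch below. A fixed random linear projection stands in for the learned encoder, and the layer callables are hypothetical; a real encoder would be trained on the network's activations over a training set.

```python
import random

def collect_activations(layers, x):
    """Concatenate the output of every layer into one activation vector
    a_f(x) in R^N. `layers` is a list of callables standing in for the
    layers of the analyzed network; purely illustrative."""
    acts, h = [], x
    for layer in layers:
        h = layer(h)
        acts.extend(h)  # concatenation of all layer outputs
    return acts

def make_encoder(n_dims, m_dims, seed=0):
    """Stand-in for the trained cognitive space encoder: a fixed random
    linear map R^N -> R^M (M << N). A learned encoder would replace this."""
    rng = random.Random(seed)
    W = [[rng.gauss(0, 1) / n_dims ** 0.5 for _ in range(n_dims)]
         for _ in range(m_dims)]
    return lambda a: [sum(w * v for w, v in zip(row, a)) for row in W]
```

Feeding the concatenated activations for two inputs X0 and Xt through such an encoder yields the low-dimensional points C0 and Ct mapped onto the trajectory map.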
In some embodiments, intermediate activations (e.g., A1, A2, A3, etc.) between A0and Atmay be provided to the cognitive space encoder104which correspond to activations of the neural network102between activation A0and At. The cognitive space encoder104may similarly map the further activations to the trajectory map106a. The trajectory generator106may produce trajectories that traverse the trajectory map106a(e.g., the cognitive space). For example, the trajectory generator106may generate trajectory108(e.g., a path) from the initial point (e.g., start point corresponding to C0and based on activation A0) to the end point (e.g., goal point corresponding to Ctbased on activation At). Trajectory108in the trajectory map106amay not be generated in a straight line but may follow a path that connects the initial point C0to the end point while avoiding obstacles. In some embodiments, obstacles include unobserved or uncertain regions of the trajectory map106aor cognitive space. An unobserved or uncertain region of the trajectory map106amay be a portion that was not properly represented or underrepresented in samples (e.g., under sampled) of a training set to train the cognitive space encoder104and the decoder110. Thus, the trajectory generator106may generate trajectories that traverse regions of the trajectory map106athat are highly sampled (e.g., highly represented in a training data set). The trajectory map106amay be intentionally more accurate in highly "travelled" regions by construction and through learning. Thus, the output samples, or discrete points C0-Cn(explained further below), may be of high quality and directly related to the behavior of the neural network102for accuracy. For example, the trajectory generator106may receive the initial point C0(e.g., an activation energy) and the end point Ct(e.g., an activation energy). The initial point C0and the end point Ctmay be mapped to the trajectory map106a. 
The trajectory generator106may then generate a likely path between the initial point C0and the end point Ctbased on path planning algorithms and survival functions based on estimates of densities of points (e.g., activations) in the trajectory map106a. As noted above, the cognitive space encoder104may also map intermediate points (e.g., C2, C5, etc.) to the trajectory map106ain some embodiments. For example, a non-parametric density estimation may estimate the distribution of the compressed activations in the trajectory map106a. High-density regions may be favored during trajectory generation, while low-density regions may be avoided. The trajectory generator106may then generate the likeliest path through all of the intermediate points and to connect the initial point C0and the end point Ct. The likeliest path will be stored as the trajectory108. The trajectory108described in the trajectory map106ato navigate from the initial point C0to the end point Ct(e.g., the target) may provide an interpretable insight into the validity of the reasoning process of the neural network102. In order to generate such insights, the trajectory sampler106bmay sample the trajectory108. For example, the trajectory sampler106bmay sample the discrete points C0-Cnalong trajectory108. For example, the trajectory sampler106bmay sample a set of discrete points along trajectory108that correspond to a sequence of points in the input space (e.g. images, facial features, human interpretable data labels). The points may be decoded by the decoder110and evaluated by an evaluator112so that the coherence of the trajectory108may be evaluated. 
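Density-aware path planning of the kind described above (favoring high-density regions, treating near-empty regions as obstacles) might be sketched as a weighted shortest-path search over a discretized cognitive space. The grid representation, the `min_density` cutoff, and the 1/density step cost are all assumptions for illustration.

```python
import heapq

def plan_trajectory(start, goal, density, grid_size, min_density=1e-3):
    """Dijkstra search over a 2D grid of the cognitive space. Step cost grows
    as density falls, so the path favors well-sampled regions; cells with
    density below `min_density` are treated as obstacles (unobserved regions).
    `density` maps (i, j) -> estimated sample density; names are illustrative."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        i, j = node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < grid_size and 0 <= nj < grid_size):
                continue
            d = density.get((ni, nj), 0.0)
            if d < min_density:
                continue  # unobserved/uncertain region: obstacle
            heapq.heappush(frontier,
                           (cost + 1.0 / d, (ni, nj), path + [(ni, nj)]))
    return None  # no valid trajectory between the two points
```

Returning `None` models the case discussed later, where two random points fall in obstacles and no trajectory can be generated between them.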
The trajectory108may represent a "thought-process" of the neural network102, and thus the decoded points represent a human-interpretable form of the "thought-process." The trajectory sampler106bmay sample the trajectory108through various processes such as linear processes, log processes, exponential processes, based on curvature processes (e.g., increase samples in regions with high curvature) and so on. As an example, a linear sampling may be used where the trajectory108is sampled at N equidistant points in a curve space. In some embodiments, the trajectory sampler106bmay receive each point along the trajectory108but provide a subset of discrete points C0-Cnto the decoder110for decoding. The decoder110may decode the discrete points C0-Cninto an input space (e.g., the same space as the inputs X0and Xt). For example, the decoder110may first decode the points C0-Cnfrom the cognitive space into the neural space of the neural network102(e.g., as a series of activation energies with high dimensionality). The decoded points may be activations of the neural network102. Such decoded activations may be converted back into the input space by running another process, such as an energy based interpretive decoder. In some embodiments, the decoder110may include a cognitive space decoder to decode the discrete points C0-Cninto the input space. In some embodiments, the decoder110may include an energy based generative model (EBM). The EBM may be trained in parallel with the cognitive space encoder104during training to build associations with energy levels and inputs in the input space. The EBM may learn to encode input points into low energy scalar values and vice-versa (e.g., energy levels during processing may be similar to the energy levels during training). This mapping from the input space to energy levels may be used to generate points in the input space from energy levels in the neural space. 
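The linear sampling strategy mentioned above (N equidistant points along the trajectory) can be sketched as follows for a piecewise-linear 2D path; the function name and the (x, y) point representation are illustrative assumptions.

```python
import math

def sample_equidistant(path, n):
    """Linear sampling: return n points equally spaced by arc length along a
    piecewise-linear trajectory given as a list of (x, y) points (n >= 2)."""
    seg = [math.dist(a, b) for a, b in zip(path, path[1:])]  # segment lengths
    total = sum(seg)
    targets = [i * total / (n - 1) for i in range(n)]  # arc-length positions
    out, acc, j = [], 0.0, 0
    for t in targets:
        # advance to the segment containing arc length t
        while j < len(seg) - 1 and acc + seg[j] < t:
            acc += seg[j]
            j += 1
        frac = 0.0 if seg[j] == 0 else (t - acc) / seg[j]
        frac = min(max(frac, 0.0), 1.0)
        (x0, y0), (x1, y1) = path[j], path[j + 1]
        out.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return out
```

Log, exponential, or curvature-based sampling would only change how the `targets` positions are chosen.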
For example, the EBM may correlate energy levels of the decoded points into the input space. The EBM may be used as a generative model by finding values in the input space that have low energy values. Thus, the trajectory108may be decoded into the input space. In order to decode the activations into an input point, a random point in the input space may be sampled. This point may be fed forward through the learned EBM and the gradient with respect to the input value is computed. By performing iterated gradient steps over different input values, the random input points may converge to a low energy point that is a point similar to a sample from the training set. In doing so, the EBM may decode a point in the trajectory map106a(e.g., a latent cognitive space) into the input space. The above process may repeat for each of the sampled points in the trajectory108. In some embodiments, in addition to an EBM or alternatively, a statistical regression system (e.g., a neural network, a neural network implemented with artificial intelligence and/or machine learning), may be trained to reconstruct inputs given cognitive space representations of the inputs. In some embodiments, in addition to or instead of the above, a Generative Adversarial Network (GAN) generator may be employed. In such embodiments, a non-parametric density estimator will be used as the distribution of the cognitive space, which may be sampled by the GAN generator to generate new samples. The decoder110may provide the decoded samples X0:t, that are mapped into the input space, to an evaluator112. The evaluator112determines a measure of rationality of the trajectory108. For example, if the number of decoded samples X0:tis not above a threshold, the trajectory108may be deemed to be excessively long or inefficient. In some embodiments, if the decoded samples X0:tare dissimilar from each other, the trajectory108may be deemed to be illogical. 
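The iterated-gradient decoding described above (starting from a random input point and descending the learned energy until it resembles a training sample) can be illustrated as follows. A finite-difference gradient and a toy quadratic energy stand in for a trained EBM; a real implementation would backpropagate through the learned model.

```python
def decode_from_energy(energy_fn, x0, steps=200, lr=0.1, eps=1e-4):
    """Iterated gradient steps over the input: start from point x0 and
    descend the energy landscape toward a low-energy point. The gradient is
    approximated by central finite differences, for illustration only."""
    x = list(x0)
    for _ in range(steps):
        for k in range(len(x)):
            x_hi = list(x); x_hi[k] += eps
            x_lo = list(x); x_lo[k] -= eps
            g = (energy_fn(x_hi) - energy_fn(x_lo)) / (2 * eps)
            x[k] -= lr * g  # gradient step toward lower energy
    return x
```

Repeating this for each sampled point C0-Cn yields the decoded input-space sequence X0:t handed to the evaluator.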
For example, in facial recognition, if a first of the decoded samples X0:tcorresponds to a face with a beard and eyeglasses, and a second of the decoded samples X0:tcorresponds to a face with no beard and eyeglasses, then the trajectory108may be deemed illogical and the neural network102may be considered prone to error or vulnerable to attack. Another example may include generating multiple trajectories based on different inputs and/or start and destination points. For example, the evaluator112may analyze cognitive trajectories among different views of the same face (e.g. with beard, scarf, glasses, hat, different lighting conditions, etc.) to detect vulnerabilities and correct the vulnerabilities during the validation phase by controlling training of the neural network102. Thus, in some embodiments, the neural network102may implement facial recognition (e.g., to unlock computing features if an authorized user is identified from the facial recognition). The neural network evaluation system described herein may evaluate the neural network102for security and retrain the neural network102if the neural network102does not meet security requirements. The evaluator112may further control the inputs (e.g., X0, Xt) into the neural network102based on various parameters to test for weaknesses or deficiencies in the neural network102. For example, the evaluator112may provide two random inputs from a training dataset. As another example, the evaluator112may generate two random points in the trajectory map106a. Depending on the sparsity of the trajectory map106a(e.g., the cognitive space) the two random points may be in obstacles and therefore fail to generate a trajectory between the random points. The random selection by the evaluator112may provide insights about how the neural network102traverses non-densely populated parts of the trajectory map106a. In some embodiments, the evaluator112may select two points based on user input in the input space. 
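The rationality checks described above (a sample-count check and a pairwise-similarity check on consecutive decoded samples) might be sketched as below. The `min_samples` threshold, the similarity function, and the cutoff value are illustrative assumptions; any domain-appropriate similarity (e.g., a face-embedding distance) could be substituted.

```python
def trajectory_is_rational(decoded_samples, min_samples, similarity_fn,
                           min_sim=0.5):
    """Two illustrative evaluator checks: enough decoded samples, and every
    pair of consecutive samples sufficiently alike. `similarity_fn` maps a
    pair of samples to [0, 1]; all thresholds are assumptions."""
    if len(decoded_samples) < min_samples:
        return False  # too few samples: trajectory deemed deficient
    return all(similarity_fn(a, b) >= min_sim
               for a, b in zip(decoded_samples, decoded_samples[1:]))
```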
The evaluator112may then provide inputs to the neural network102based on the user input. The evaluator112may further generate an adversarial input to quantify a robustness of the neural network102. For example, the initial point or end point may include known adversarial examples. The different combinations of adversarial-to-known, known-to-adversarial or adversarial-to-adversarial inputs into the neural network102may provide different insights about how the trajectory map106ais formed in corresponding regions (e.g., regions that enhance security by eliminating adversarial inputs). Evaluating the neural network102in adversarial regions may provide insight into how the neural network102deals with the different types of adversarial attacks and aid in resiliency evaluation and enhancement. For example, the evaluator112may test whether adversarial inputs are within the trajectory map106a. If adversarial inputs are placed in low-density regions (based on a non-parametrically estimated density function), the evaluator112may provide an indication that training based on samples in adversarial regions of the trajectory map106ais necessary to protect the network's responses. For example, the evaluator112may test whether all samples along the trajectory between a sample and an adversarially modified counterpart (e.g., the sample itself with adversarial noise added to it) are located in high density regions. If so, a failure may not be due to a lack of samples along the data paths such as the trajectories. Rather, the evaluator112may conclude that the neural network102and/or a training procedure of the neural network102are intrinsically vulnerable to adversarial attacks. In some embodiments, the evaluator112may evaluate a sequence of points along the trajectory108that are decoded into the input space as images. In the example of images, a sequence of images may be returned.
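By way of illustration only, one common way to obtain such an adversarially modified counterpart is a fast-gradient-sign (FGSM-style) perturbation. The sketch below uses a toy logistic model in place of the neural network102; the weights w, b and the helper name fgsm_counterpart are illustrative assumptions:

```python
import numpy as np

def fgsm_counterpart(x, w, b, y, eps=1.0):
    """FGSM-style adversarial counterpart for a toy logistic model
    p = sigmoid(w.x + b): step along the sign of the loss gradient."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w               # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# A clean input and its adversarial counterpart form an
# adversarial-to-known endpoint pair for trajectory generation.
w, b = np.array([2.0, -1.0]), 0.0
x = np.array([1.0, 1.0])               # classified positive: w.x + b = 1
x_adv = fgsm_counterpart(x, w, b, y=1.0)
# x_adv now scores negative under the same model (prediction flipped).
```

The pair (x, x_adv) could then serve as the initial and end points whose connecting trajectory is checked against the density estimate.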
The sequence may be ranked in a certain range (e.g., a 0-1 range that may also be referred to as a validation score) for coherence. For example, a sequence from a car image to a plane that travels through car-truck-bus-plane may be ranked as 0.9. If the trajectory is car-horse-centaur-Pegasus-plane, the trajectory may be ranked as 0.2, since the reasoning is not entirely logical. Finally, if the sequence is car-person-burger-cat-plane, the trajectory coherence score may take a value of 0 since the sequence is completely illogical. The evaluator112may repeat the evaluation process on the neural network102a number of times to obtain an aggregated coherency score that may be related to logic and adversarial attack resiliency. In some embodiments, the evaluator112may generate several scores (e.g., validation scores) for different initial and end point generation methods (e.g., adversarial, random, etc.). In some embodiments, the evaluator112may evaluate the neural network102several times. A ratio of coherent trajectories versus incoherent trajectories may yield an indicator of coherence of the neural network102(as ranked above). In some embodiments, the evaluator112may take actionable measures (e.g., retrain) for network correction to mitigate undesirable results. Following the example provided earlier, a user might query the neural network102with images of a car and a plane. If the returned trajectory is car-carriage-horse-centaur-Pegasus-plane, this might provide an indication that there is a lack of samples along more expected trajectories108, such as car-truck-bus-plane. The evaluator112may subsequently add more samples in these less-observed categories to a training set of the neural network102and iterate until the evaluator112is satisfied with the results. Thus, some embodiments may generate cognitive trajectories to evaluate a reasoning process of the neural network102. Further, some embodiments may validate the neural network102based on the trajectories.
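By way of illustration only, the per-trajectory coherence ranking and the aggregated coherency score may be sketched as follows. The step-size criterion is an illustrative stand-in for the image-level coherence judgment described above; the thresholds and names are assumptions:

```python
import numpy as np

def trajectory_coherence(decoded, max_step=1.0):
    """Score a decoded trajectory in [0, 1]: the fraction of consecutive
    sample pairs whose jump stays within a plausible step size."""
    steps = np.linalg.norm(np.diff(decoded, axis=0), axis=1)
    return float(np.mean(steps <= max_step))

def aggregate_coherency(trajectories, coherent_threshold=0.8):
    """Ratio of coherent trajectories to all evaluated trajectories."""
    flags = [trajectory_coherence(t) >= coherent_threshold
             for t in trajectories]
    return sum(flags) / len(flags)

smooth = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])  # car-truck-bus-like
jumpy = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])   # car-burger-cat-like
score = aggregate_coherency([smooth, smooth, jumpy])      # 2 of 3 coherent
```

Repeating the evaluation over many endpoint pairs and aggregating the flags in this way yields the ratio-based indicator of coherence described above.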
Some embodiments may further translate machine-readable spaces into interpretable human spaces. Some embodiments may be applied to "mission-critical tasks" in autonomous systems such as industrial robots, autonomous vehicles, service robotics, surveillance systems, etc. Reliable deployment of such systems may be achieved through the validation process described above. FIG.2shows a method320of generating a cognitive space and identifying resiliency of a neural network. The method320may generally be implemented in a neural network evaluation architecture such as, for example, the cognitive space encoder104, trajectory generator106, decoder110and evaluator112(FIG.1), already discussed. In an embodiment, the method320is implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code to carry out operations shown in the method320may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
Additionally, logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc.). Illustrated processing block322identifies a cognitive space that is to be a compressed representation of activations of a neural network. For example, the neural network is associated with a first number of dimensions and the cognitive space is associated with a second number of dimensions, where the second number of dimensions is less than the first number of dimensions. For example, the activations of the neural network may correspond to the first number of dimensions and the cognitive space may correspond to the second number of dimensions. Illustrated processing block324maps a plurality of activations of the neural network to a cognitive initial point and a cognitive destination point in the cognitive space. Illustrated processing block326generates a first cognitive trajectory through the cognitive space, where the first cognitive trajectory is to traverse the cognitive space from the cognitive initial point to the cognitive destination point. In some embodiments, the method320may further include sampling the first cognitive trajectory to identify one or more intermediate points in the cognitive space, and decoding the one or more intermediate points into an input space to generate input points in the input space. At least one of the plurality of activations is associated with an initial data point from the input space and at least one of the plurality of activations is associated with a destination data point from the input space.
The cognitive initial point corresponds to the initial data point and the cognitive destination point corresponds to the destination data point. In some embodiments, the method320includes determining whether to retrain the neural network based on whether a validity score associated with the first cognitive trajectory meets a threshold (e.g., determine whether the reasoning is valid). For example, the method320may identify a plurality of trajectories (including the first cognitive trajectory) through the cognitive space and generate a validity score based on a ratio of coherent trajectories from the plurality of trajectories and incoherent trajectories from the plurality of trajectories. The method320may thus generate a cognitive space and generate cognitive trajectories through the cognitive space. The method320may generate a validation score (e.g., resiliency score) based on the cognitive trajectories to identify whether to retrain the neural network to enhance functioning of the neural network. For example, the neural network may be retrained with a specific focus to strengthen an underperforming portion and/or process of the neural network. Thus, the technology may provide security-enhanced and resiliency-enhanced neural networks. Furthermore, the method320may implement a new and enhanced neural network analysis to identify a cognitive process (which may otherwise be opaque and unknown to most systems and/or developers) of the neural network through cognitive trajectory maps and trajectories. FIGS.3A and3Bshow a two-pass training process300,320to train a cognitive space encoder306. Additionally, a cognitive space decoder310may be trained in process300. Process300shows pre-processing based on activations from the neural network304. The cognitive space encoder g_ψ(α)306may be trained to learn a low dimensional representation of the neural activations α(f_θ)(x) of the neural network304.
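By way of illustration only, the validity-score computation from the coherent/incoherent ratio and the threshold-based retrain decision may be sketched as follows (the threshold value is an illustrative assumption):

```python
def validity_score(coherent, incoherent):
    """Validity score: ratio of coherent trajectories to all trajectories."""
    total = coherent + incoherent
    return coherent / total if total else 0.0

def should_retrain(score, threshold=0.7):
    """Retrain when the validity score fails to meet the threshold."""
    return score < threshold

# 9 coherent vs 1 incoherent trajectory: the network passes validation.
assert not should_retrain(validity_score(9, 1))
# 3 coherent vs 7 incoherent: retraining is triggered.
assert should_retrain(validity_score(3, 7))
```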
For example, the cognitive space encoder306may be trained based on dataset302(e.g., input data in an input space) that causes activations in the neural network304. For example, the neural network304may analyze inputs X0, Xtfrom dataset302to generate the activations of the neural network304. The cognitive space encoder306may reduce a dimension of a neural network304space (e.g., α(f_θ)(x)∈R^N) of the activations into a lower dimensional space C∈R^M(e.g., M may be significantly smaller than N). The cognitive space encoder306may be trained with the activations that are a result of performing a forward pass of the dataset302through the neural network304. It is worthwhile to note that depending on the application, different dimensionality reduction techniques can be selectively applied. For example, Principal Component Analysis (PCA), Random Forests and the different types of auto-encoders (e.g., convolutional, multilayer, regularized, etc.) may be employed. In some embodiments, the dimensions may be selected in a way to satisfy one or more constraints (e.g., metric properties). For example, the one or more constraints may be based on distances. For example, two points close in the input space may similarly need to be close in the encoded space (e.g., the distance between the two points in the input and encoded spaces is similar). Process300may concurrently (e.g., in parallel) train cognitive space decoder310based on the activations. The cognitive space decoder310may be trained to decode input points into low energy scalar values. The mapping from the input space to an energy level may be used to generate points in the input space that have low energy, and are similar to the input values shown to the cognitive space decoder310during training. FIG.3Bshows process320to build a set of observed points C1-C10.
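By way of illustration only, one concrete choice among the listed dimensionality reduction techniques, a PCA-based linear encoder, may be sketched as follows. The toy activations span only 3 of 16 dimensions, so the distance constraint is satisfied exactly; all names and sizes are illustrative assumptions:

```python
import numpy as np

def fit_pca_encoder(activations, m):
    """Fit a linear encoder: project N-dim activations onto the top-M
    principal components to obtain the M-dim cognitive space."""
    mean = activations.mean(axis=0)
    _, _, vt = np.linalg.svd(activations - mean, full_matrices=False)
    components = vt[:m]                          # top-M principal directions
    return lambda a: (a - mean) @ components.T

# Toy activations: 100 samples that actually span only 3 of 16 dimensions.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 16))
encode = fit_pca_encoder(acts, m=3)
codes = encode(acts)                             # (100, 3) compressed space
# Pairwise distances survive the projection, honoring the metric constraint.
```

Because the projection directions are orthonormal and the centered activations lie in their span, distances between encoded points match distances between the original activations, illustrating the metric-preservation constraint mentioned above.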
For example, once the cognitive space encoder306learns the mapping from activations to the low dimensional space, a second pass320on the dataset302may build a set of observed points C1-C10in the cognitive space308. The low dimensional representation of network activations induced by the dataset302may be used together as a set of observed points C1-C10in the cognitive space308. The set of observed points C1-C10may correspond to anchors in the cognitive space308that may be decoded into corresponding real-world identifications (e.g., images, facial recognition, etc.). These observed points C1-C10may guide the trajectory generation through regions of the cognitive space308that have been observed in the dataset302and avoid traversing regions of the cognitive space308that are uncertain or unexplored. For example, a non-parametric density estimation technique may be used to estimate the distribution of the compressed activations in the cognitive space308. High-density regions may be favored during trajectory generation, while low-density regions may be avoided. Once the cognitive space encoder306is trained, the cognitive space encoder306may map activations of the neural network304into the cognitive space308. The cognitive space encoder306may populate the cognitive space308with the activations. FIG.4shows a cognitive path generation process340to generate a trajectory348that traverses a cognitive space342from an initial point C0to a destination point Cn. The trajectory348in the cognitive space342may not be a straight line but may follow a path that connects the initial point C0to the destination point Cnwhile avoiding obstacles (e.g., unseen, uncertain or unobserved regions). As noted above, a non-parametric density estimation technique may be used to estimate a distribution of the compressed activations in the cognitive space. The trajectory generator344may favor high-density regions during trajectory generation, while low-density regions may be avoided.
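By way of illustration only, the non-parametric density estimate over the observed points and the resulting obstacle regions may be sketched as follows. The Gaussian kernel, the bandwidth, and the quantile cutoff are illustrative assumptions:

```python
import numpy as np

def density_map(observed, grid, bandwidth=0.5):
    """Gaussian kernel density estimate of the observed cognitive points,
    evaluated at each query point of the grid."""
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        sq = np.sum((observed - g) ** 2, axis=1)
        out[i] = np.mean(np.exp(-sq / (2.0 * bandwidth ** 2)))
    return out

def obstacle_mask(density, quantile=0.7):
    """Mark the low-density tail as obstacles to be avoided."""
    return density < np.quantile(density, quantile)

# Observed points (C1-C10-like) cluster near the origin of the cognitive space.
rng = np.random.default_rng(0)
observed = 0.3 * rng.normal(size=(50, 2))
grid = np.array([[0.0, 0.0], [0.1, 0.0],                # in the observed region
                 [5.0, 5.0], [6.0, 6.0], [-5.0, 5.0]])  # far from any data
mask = obstacle_mask(density_map(observed, grid))
# The three far-away query points are flagged as obstacles.
```

Query points inside the observed cluster receive high density and stay traversable, while points far from any anchor fall into the low-density tail and are treated as obstacles during trajectory generation.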
For example, the trajectory generator344may use a survival function of the estimated density (e.g., its tail) as the likelihood of each point in the cognitive space342to contain an obstacle. The trajectory generator344may implement path planning algorithms (e.g., RRT, PRM, A*) to trace a path from the initial point C0to the destination point Cnwhile avoiding regions of the space that may not be represented in the training data. Thus, the trajectory348is a continuous path from the initial point C0to the destination point Cn. The trajectory generator344generates trajectories that traverse regions of the cognitive space342that are highly sampled during training of the cognitive space encoder350. The cognitive space342will be, by construction and learning, more accurate in these regions. Thus, the samples will be of high quality and directly relatable to a neural network's behavior. FIG.5shows a method400of generating a trajectory through a cognitive space. The method400may be readily implemented with the evaluation system (FIG.1), the method320(FIG.2), the cognitive space encoder350and trajectory generator344(FIG.4), already discussed. More particularly, the method400may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof. Illustrated processing block402identifies a start point and a destination point in cognitive space (e.g., based on activations from a neural network). Illustrated processing block404identifies observed areas adjacent to a current position. The current position presently corresponds to the start point. Illustrated processing block406selects a highest probability area as a next position in the trajectory.
The highest probability area may be an area that has a greatest probability of leading to the destination point. For example, a position may be selected based on whether a path towards the destination node is available from the position (e.g., the position is not a “dead end”), and avoid positions that do not have available paths to the destination point. As noted above, density of samples may also be considered when determining the next position. In some embodiments, processing block406may modify operation based on a type of path planning analysis. Some path planning analysis may execute iteratively but may not be “anytime” (meaning the output is generated at once, when the algorithm finishes its processing). Some planning algorithms (e.g., A* search algorithm) may find an optimal (e.g., shortest) path if such a path exists. In some embodiments, Probabilistic Roadmap (PRM) methods may be used instead to execute more effectively in higher dimensionality spaces. Illustrated processing block408updates the current position to the next position and updates the trajectory to include the next position. Illustrated processing block410identifies whether the destination point is reached. For example, if the current position is the same as the destination position then the destination position may have been reached, and illustrated processing block412outputs the trajectory. Otherwise, illustrated processing block404may execute. FIG.6shows a method440of determining a resiliency score (e.g., a validation score) of a neural network and retraining based on the resiliency score. The method440may be readily implemented with the evaluation system (FIG.1), the method320(FIG.2), the two-pass training process300,320(FIGS.3A-3B), the cognitive space encoder350and trajectory generator344(FIG.4), the method400(FIG.5) already discussed. 
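By way of illustration only, the stepwise traversal of blocks404-412, using the A* search algorithm mentioned above over a small obstacle grid, may be sketched as follows. The grid layout and cell coordinates are illustrative assumptions:

```python
import heapq

def astar(free, start, goal):
    """A* over a 4-connected grid: free[r][c] is True for observed cells
    and False for obstacles; Manhattan distance is the heuristic."""
    rows, cols = len(free), len(free[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, position, path)
    seen = set()
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path                          # trajectory found
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and free[nxt[0]][nxt[1]] and nxt not in seen):
                heapq.heappush(frontier,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None                                  # endpoints are disconnected

# Low-density cells (1, 1) and (1, 2) act as obstacles in the cognitive map.
free = [[True, True, True, True],
        [True, False, False, True],
        [True, True, True, True],
        [True, True, True, True]]
path = astar(free, (0, 0), (3, 3))
```

Because the Manhattan heuristic is admissible, the returned trajectory is a shortest obstacle-avoiding path, matching the observation above that A* finds an optimal path when one exists; returning None corresponds to the case where the random endpoints lie in regions separated by obstacles.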
More particularly, the method440may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof. Illustrated processing block444generates trajectories through a cognitive space associated with the neural network. Illustrated processing block446identifies characteristics of the trajectories. For example, the characteristics may include identifying whether the trajectory includes similar or dissimilar intermediate points (e.g., whether images are similar to each other). Some embodiments may include identifying whether adversarial inputs for facial recognition are properly identified and whether the intermediate points indicate that the neural network properly processed the adversarial input. Illustrated processing block448determines a resiliency score based on the characteristics. Illustrated processing block450determines whether the resiliency score indicates that retraining is needed (e.g., retraining may be needed if the resiliency score is below a threshold). If not, the method440may end. Otherwise, illustrated processing block452retrains the neural network based on the resiliency score. In some embodiments, the characteristics may indicate that a particular portion and/or process of the neural network underperforms (e.g., adversarial inputs are not properly identified and “fool” the system, unobserved portions of the neural network lead to poor trajectories and should be remedied by retraining to include more samples from the unobserved portions). In such embodiments, the retraining may execute with a specific focus to the underperforming portions of the neural network. 
For example, samples from unobserved portions may be provided to the neural network to mitigate adversarial attacks. Turning now toFIG.7, a resiliency-enhanced computing system158(e.g., a computing device) is shown. The computing system158may generally be part of an electronic device/platform having computing functionality (e.g., personal digital assistant/PDA, notebook computer, tablet computer, convertible tablet, server), communications functionality (e.g., smart phone), imaging functionality (e.g., camera, camcorder), media playing functionality (e.g., smart television/TV), wearable functionality (e.g., watch, eyewear, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), etc., or any combination thereof. In the illustrated example, the system158includes a host processor160(e.g., CPU with one or more processor cores) having an integrated memory controller (IMC)162that is coupled to a system memory164. The host processor160further includes accelerators A1-A3(although any number of accelerators may be provided) to implement a neural network. In some embodiments, the system158may further communicate with other electronic devices that also implement the neural network. For example, the system158may synchronize with the other electronic devices by exchanging weights, biases and data with the other electronic devices. The illustrated system158also includes a graphics processor168(e.g., graphics processing unit/GPU) and an input output (IO) module166implemented together with the processor160(e.g., as microcontrollers) on a semiconductor die170as a system on chip (SOC), where the IO module166may communicate with, for example, a display172(e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display), a network controller174(e.g., wired and/or wireless), and mass storage176(e.g., HDD, optical disc, SSD, flash memory or other NVM).
The illustrated SOC170includes a ROM178with logic instructions, which when executed by the accelerators A1-A3, host processor160or graphics processor168, cause the computing system158to implement and/or perform one or more aspects of the evaluation system (FIG.1), the method320(FIG.2), the two-pass training process300,320(FIGS.3A-3B), the cognitive space encoder350and trajectory generator344(FIG.4), the method400(FIG.5), and/or method440(FIG.6), already discussed. In some embodiments, the system158may further include processors (not shown) and/or an AI accelerator148that is dedicated to artificial intelligence (AI) and/or neural network (NN) processing. For example, the system SoC170may include vision processing units (VPUs, not shown) and/or other AI/NN-specific processors such as the AI accelerator148, etc. In some embodiments, any aspect of the embodiments described herein may be implemented in the processors and/or accelerators dedicated to AI and/or NN processing such as the AI accelerator148, the graphics processor168and/or the host processor160. Thus, the illustrated system158may identify a cognitive space that is to be a compressed representation of activations of a neural network, map a plurality of activations of the neural network to a cognitive initial point and a cognitive destination point in the cognitive space and generate a first cognitive trajectory through the cognitive space, wherein the first cognitive trajectory maps the cognitive initial point to the cognitive destination point. The system158may generate a validation score (e.g., resiliency score) based on the first cognitive trajectory to identify whether to retrain the neural network, and whether the neural network should be retrained with a specific focus to strengthen an underperforming portion and/or process of the neural network. Thus, the system158may provide security-enhanced and resiliency-enhanced neural networks.
Furthermore, the system158may implement a new and enhanced neural network analysis to identify a "thought-process" of the neural network through cognitive trajectory maps and trajectories. In some embodiments, the validation score may be presented on the display172so a user may view the validation score. In some embodiments, the system158may cause the electronic devices to also retrain based on the analysis conducted by the system158. For example, the system158may transmit a message to the electronic devices through the network controller174to instruct the electronic devices to retrain. FIG.8shows a semiconductor package apparatus180. The illustrated apparatus180includes one or more substrates184(e.g., silicon, sapphire, gallium arsenide) and logic182(e.g., transistor array and other integrated circuit/IC components) coupled to the substrate(s)184. In one example, the logic182is implemented at least partly in configurable logic or fixed-functionality logic hardware. The logic182may implement and/or perform one or more aspects of the evaluation system (FIG.1), the method320(FIG.2), the two-pass training process300,320(FIGS.3A-3B), the cognitive space encoder350and trajectory generator344(FIG.4), the method400(FIG.5), and/or method440(FIG.6), already discussed. In one example, the logic182includes transistor channel regions that are positioned (e.g., embedded) within the substrate(s)184. Thus, the interface between the logic182and the substrate(s)184may not be an abrupt junction. The logic182may also be considered to include an epitaxial layer that is grown on an initial wafer of the substrate(s)184. In some embodiments, the logic182may further include processors (not shown) and/or accelerators (not shown) dedicated to AI and/or NN processing. For example, the logic182may include VPUs, and/or other AI/NN-specific processors, etc.
In some embodiments, any aspect of the embodiments described herein may be implemented in the processors and/or accelerators dedicated to AI and/or NN processing. FIG.9illustrates a processor core200according to one embodiment. The processor core200may be the core for any type of processor, such as a micro-processor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core200is illustrated inFIG.9, a processing element may alternatively include more than one of the processor core200illustrated inFIG.9. The processor core200may be a single-threaded core or, for at least one embodiment, the processor core200may be multithreaded in that it may include more than one hardware thread context (or “logical processor”) per core. FIG.9also illustrates a memory270coupled to the processor core200. The memory270may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. The memory270may include one or more code213instruction(s) to be executed by the processor core200, wherein the code213may implement and/or perform one or more aspects of the evaluation system (FIG.1), the method320(FIG.2), the two-pass training process300,320(FIGS.3A-3B), the cognitive space encoder350and trajectory generator344(FIG.4), the method400(FIG.5), and/or method440(FIG.6), already discussed. The processor core200follows a program sequence of instructions indicated by the code213. Each instruction may enter a front end portion210and be processed by one or more decoders220. The decoder220may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction. 
The illustrated front end portion210also includes register renaming logic225and scheduling logic230, which generally allocate resources and queue the operation corresponding to the convert instruction for execution. The processor core200is shown including execution logic250having a set of execution units255-1through255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic250performs the operations specified by code instructions. After completion of execution of the operations specified by the code instructions, back end logic260retires the instructions of the code213. In one embodiment, the processor core200allows out of order execution but requires in order retirement of instructions. Retirement logic265may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core200is transformed during execution of the code213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic225, and any registers (not shown) modified by the execution logic250. Although not illustrated inFIG.9, a processing element may include other elements on chip with the processor core200. For example, a processing element may include memory control logic along with the processor core200. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches. Referring now toFIG.10, shown is a block diagram of a computing system1000embodiment in accordance with an embodiment. Shown inFIG.10is a multiprocessor system1000that includes a first processing element1070and a second processing element1080. 
While two processing elements1070and1080are shown, it is to be understood that an embodiment of the system1000may also include only one such processing element. The system1000is illustrated as a point-to-point interconnect system, wherein the first processing element1070and the second processing element1080are coupled via a point-to-point interconnect1050. It should be understood that any or all of the interconnects illustrated inFIG.10may be implemented as a multi-drop bus rather than point-to-point interconnect. As shown inFIG.10, each of processing elements1070and1080may be multicore processors, including first and second processor cores (i.e., processor cores1074aand1074band processor cores1084aand1084b). Such cores1074a,1074b,1084a,1084bmay be configured to execute instruction code in a manner similar to that discussed above in connection withFIG.9. Each processing element1070,1080may include at least one shared cache1896a,1896b. The shared cache1896a,1896bmay store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores1074a,1074band1084a,1084b, respectively. For example, the shared cache1896a,1896bmay locally cache data stored in a memory1032,1034for faster access by components of the processor. In one or more embodiments, the shared cache1896a,1896bmay include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While shown with only two processing elements1070,1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of processing elements1070,1080may be an element other than a processor, such as an accelerator or a field programmable gate array. 
For example, additional processing element(s) may include additional processor(s) that are the same as a first processor1070, additional processor(s) that are heterogeneous or asymmetric to the first processor1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements1070,1080in terms of a spectrum of metrics of merit including architectural, micro architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements1070,1080. For at least one embodiment, the various processing elements1070,1080may reside in the same die package. The first processing element1070may further include memory controller logic (MC)1072and point-to-point (P-P) interfaces1076and1078. Similarly, the second processing element1080may include a MC1082and P-P interfaces1086and1088. As shown inFIG.10, MC's1072and1082couple the processors to respective memories, namely a memory1032and a memory1034, which may be portions of main memory locally attached to the respective processors. While the MCs1072and1082are illustrated as integrated into the processing elements1070,1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements1070,1080rather than integrated therein. The first processing element1070and the second processing element1080may be coupled to an I/O subsystem1090via P-P interconnects1076and1086, respectively. As shown inFIG.10, the I/O subsystem1090includes P-P interfaces1094and1098. Furthermore, I/O subsystem1090includes an interface1092to couple I/O subsystem1090with a high performance graphics engine1038. In one embodiment, bus1049may be used to couple the graphics engine1038to the I/O subsystem1090.
Alternately, a point-to-point interconnect may couple these components. In turn, I/O subsystem1090may be coupled to a first bus1016via an interface1096. In one embodiment, the first bus1016may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited. As shown inFIG.10, various I/O devices1014(e.g., biometric scanners, speakers, cameras, sensors) may be coupled to the first bus1016, along with a bus bridge1018which may couple the first bus1016to a second bus1020. In one embodiment, the second bus1020may be a low pin count (LPC) bus. Various devices may be coupled to the second bus1020including, for example, a keyboard/mouse1012, communication device(s)1026, and a data storage unit1019such as a disk drive or other mass storage device which may include code1030, in one embodiment. The illustrated code1030may implement and/or perform one or more aspects of the evaluation system (FIG.1), the method320(FIG.2), the two-pass training process300,320(FIGS.3A-3B), the cognitive space encoder350and trajectory generator344(FIG.4), the method400(FIG.5), and/or method440(FIG.6), already discussed. Further, an audio I/O1024may be coupled to second bus1020and a battery1010may supply power to the computing system1000. Note that other embodiments are contemplated. For example, instead of the point-to-point architecture ofFIG.10, a system may implement a multi-drop bus or another such communication topology. Also, the elements ofFIG.10may alternatively be partitioned using more or fewer integrated chips than shown inFIG.10.
ADDITIONAL NOTES AND EXAMPLES Example 1 includes a computing device comprising a network controller to communicate with one or more electronic devices that are to implement a neural network, a graphics processor, a central processing unit, and a memory including a set of instructions, which when executed by one or more of the graphics processor or the central processing unit, cause the computing device to identify a cognitive space that is to be a compressed representation of activations of the neural network, map a plurality of activations of the neural network to a cognitive initial point and a cognitive destination point in the cognitive space and generate a first cognitive trajectory through the cognitive space, wherein the first cognitive trajectory is to traverse the cognitive space from the cognitive initial point to the cognitive destination point. Example 2 includes the computing device of example 1, wherein the instructions, when executed, cause the computing device to determine whether to retrain the neural network based on whether a validity score associated with the first cognitive trajectory meets a threshold. Example 3 includes the computing device of example 1, wherein the instructions, when executed, cause the computing device to sample the first cognitive trajectory to identify one or more intermediate points in the cognitive space, and decode the one or more intermediate points into an input space to generate input points in the input space. Example 4 includes the computing device of example 3, wherein at least one of the plurality of activations is to be associated with an initial data point from the input space, at least one of the plurality of activations is to be associated with a destination data point from the input space, and the cognitive initial point is to correspond to the initial data point and the cognitive destination point is to correspond to the destination data point. 
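As an illustration only, the operations recited in Examples 1-4 — compressing activations into a lower-dimensional cognitive space, mapping them to a cognitive initial point and destination point, and generating a trajectory between the two — might be sketched as follows. The linear projection encoder and the linear interpolation between endpoints are assumptions for the sketch; the examples leave the encoder architecture and trajectory generator unspecified.

```python
import numpy as np

def encode_to_cognitive_space(activations, projection):
    """Compress high-dimensional activations into the lower-dimensional
    cognitive space (here: a simple learned linear projection, which is
    an assumption; any encoder producing fewer dimensions would do)."""
    return activations @ projection

def cognitive_trajectory(start_point, dest_point, num_steps=10):
    """Generate a trajectory from the cognitive initial point to the
    cognitive destination point by linear interpolation (one simple
    choice; other path generators are possible)."""
    ts = np.linspace(0.0, 1.0, num_steps)
    return np.array([(1 - t) * start_point + t * dest_point for t in ts])

# Toy usage: 128-dim activations compressed to a 3-dim cognitive space,
# per Examples 6/12/19 (second number of dimensions less than the first).
rng = np.random.default_rng(0)
projection = rng.standard_normal((128, 3))
a_init = rng.standard_normal(128)   # activations for the initial data point
a_dest = rng.standard_normal(128)   # activations for the destination data point
p0 = encode_to_cognitive_space(a_init, projection)
p1 = encode_to_cognitive_space(a_dest, projection)
traj = cognitive_trajectory(p0, p1, num_steps=5)
```

Sampling intermediate rows of `traj` and decoding them back into the input space would correspond to Examples 3 and 4.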
Example 5 includes the computing device of example 1, wherein the instructions, when executed, cause the computing device to identify a plurality of trajectories through the cognitive space, and generate a validity score based on a ratio of coherent trajectories from the plurality of trajectories and incoherent trajectories from the plurality of trajectories. Example 6 includes the computing device of example 1, wherein the neural network is to be associated with a first number of dimensions and the cognitive space is to be associated with a second number of dimensions, wherein the second number of dimensions is to be less than the first number of dimensions. Example 7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented in one or more of configurable logic or fixed-functionality logic hardware, the logic coupled to the one or more substrates to identify a cognitive space that is to be a compressed representation of activations of a neural network, map a plurality of activations of the neural network to a cognitive initial point and a cognitive destination point in the cognitive space, and generate a first cognitive trajectory through the cognitive space, wherein the first cognitive trajectory is to traverse the cognitive space from the cognitive initial point to the cognitive destination point. Example 8 includes the apparatus of example 7, wherein the logic coupled to the one or more substrates is to determine whether to retrain the neural network based on whether a validity score associated with the first cognitive trajectory meets a threshold. Example 9 includes the apparatus of example 7, wherein the logic coupled to the one or more substrates is to sample the first cognitive trajectory to identify one or more intermediate points in the cognitive space, and decode the one or more intermediate points into an input space to generate input points in the input space. 
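A minimal sketch of the validity score and retrain decision of Examples 2 and 5, assuming the score is the fraction of sampled trajectories judged coherent and that the coherence test and threshold are supplied externally (both are assumptions; the examples only require the score to be "based on a ratio" of coherent and incoherent trajectories):

```python
def validity_score(trajectories, is_coherent):
    """Fraction of sampled trajectories judged coherent — one plausible
    reading of a score based on the ratio of coherent to incoherent
    trajectories."""
    labels = [is_coherent(t) for t in trajectories]
    return sum(labels) / len(labels)

def should_retrain(score, threshold=0.8):
    """Flag the network for retraining when the validity score fails to
    meet the threshold (the 0.8 default is an illustrative assumption)."""
    return score < threshold
```

In use, `is_coherent` would encapsulate whatever coherence check the system applies to a decoded trajectory.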
Example 10 includes the apparatus of example 9, wherein at least one of the plurality of activations is to be associated with an initial data point from the input space, at least one of the plurality of activations is to be associated with a destination data point from the input space, and the cognitive initial point is to correspond to the initial data point and the cognitive destination point is to correspond to the destination data point. Example 11 includes the apparatus of example 7, wherein the logic is to identify a plurality of trajectories through the cognitive space, and generate a validity score based on a ratio of coherent trajectories from the plurality of trajectories and incoherent trajectories from the plurality of trajectories. Example 12 includes the apparatus of example 7, wherein the neural network is to be associated with a first number of dimensions and the cognitive space is to be associated with a second number of dimensions, wherein the second number of dimensions is to be less than the first number of dimensions. Example 13 includes the apparatus of example 7, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates. Example 14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to identify a cognitive space that is to be a compressed representation of activations of a neural network, map a plurality of activations of the neural network to a cognitive initial point and a cognitive destination point in the cognitive space, and generate a first cognitive trajectory through the cognitive space, wherein the first cognitive trajectory is to traverse the cognitive space from the cognitive initial point to the cognitive destination point. 
Example 15 includes the at least one computer readable storage medium of example 14, wherein the instructions, when executed, cause the computing device to determine whether to retrain the neural network based on whether a validity score associated with the first cognitive trajectory meets a threshold. Example 16 includes the at least one computer readable storage medium of example 14, wherein the instructions, when executed, cause the computing device to sample the first cognitive trajectory to identify one or more intermediate points in the cognitive space, and decode the one or more intermediate points into an input space to generate input points in the input space. Example 17 includes the at least one computer readable storage medium of example 16, wherein at least one of the plurality of activations is to be associated with an initial data point from the input space, at least one of the plurality of activations is to be associated with a destination data point from the input space, and the cognitive initial point is to correspond to the initial data point and the cognitive destination point is to correspond to the destination data point. Example 18 includes the at least one computer readable storage medium of example 14, wherein the instructions, when executed, cause the computing device to identify a plurality of trajectories through the cognitive space, and generate a validity score based on a ratio of coherent trajectories from the plurality of trajectories and incoherent trajectories from the plurality of trajectories. Example 19 includes the at least one computer readable storage medium of example 14, wherein the neural network is to be associated with a first number of dimensions and the cognitive space is to be associated with a second number of dimensions, wherein the second number of dimensions is to be less than the first number of dimensions.
Example 20 includes a method comprising identifying a cognitive space that is to be a compressed representation of activations of a neural network, mapping a plurality of activations of the neural network to a cognitive initial point and a cognitive destination point in the cognitive space, and generating a first cognitive trajectory through the cognitive space, wherein the first cognitive trajectory traverses the cognitive space from the cognitive initial point to the cognitive destination point. Example 21 includes the method of example 20, further including determining whether to retrain the neural network based on whether a validity score associated with the first cognitive trajectory meets a threshold. Example 22 includes the method of example 20, further including sampling the first cognitive trajectory to identify one or more intermediate points in the cognitive space, and decoding the one or more intermediate points into an input space to generate input points in the input space. Example 23 includes the method of example 22, wherein at least one of the plurality of activations is to be associated with an initial data point from the input space, at least one of the plurality of activations is to be associated with a destination data point from the input space, and the cognitive initial point is to correspond to the initial data point and the cognitive destination point is to correspond to the destination data point. Example 24 includes the method of example 20, further including identifying a plurality of trajectories through the cognitive space, and generating a validity score based on a ratio of coherent trajectories from the plurality of trajectories and incoherent trajectories from the plurality of trajectories. 
Example 25 includes the method of example 20, wherein the neural network is to be associated with a first number of dimensions and the cognitive space is to be associated with a second number of dimensions, wherein the second number of dimensions is less than the first number of dimensions. Example 26 includes a semiconductor apparatus comprising means for identifying a cognitive space that is to be a compressed representation of activations of a neural network, means for mapping a plurality of activations of the neural network to a cognitive initial point and a cognitive destination point in the cognitive space, and means for generating a first cognitive trajectory through the cognitive space, wherein the first cognitive trajectory traverses the cognitive space from the cognitive initial point to the cognitive destination point. Example 27 includes the apparatus of example 26, further including means for determining whether to retrain the neural network based on whether a validity score associated with the first cognitive trajectory meets a threshold. Example 28 includes the apparatus of example 26, further including means for sampling the first cognitive trajectory to identify one or more intermediate points in the cognitive space, and means for decoding the one or more intermediate points into an input space to generate input points in the input space. Example 29 includes the apparatus of example 28, wherein at least one of the plurality of activations is to be associated with an initial data point from the input space, at least one of the plurality of activations is to be associated with a destination data point from the input space, and the cognitive initial point is to correspond to the initial data point and the cognitive destination point is to correspond to the destination data point.
Example 30 includes the apparatus of example 26, further including means for identifying a plurality of trajectories through the cognitive space, and means for generating a validity score based on a ratio of coherent trajectories from the plurality of trajectories and incoherent trajectories from the plurality of trajectories. Example 31 includes the apparatus of any of examples 26-30, wherein the neural network is to be associated with a first number of dimensions and the cognitive space is to be associated with a second number of dimensions, wherein the second number of dimensions is less than the first number of dimensions. Thus, technology described herein may generate a cognitive space and generate cognitive trajectories through the cognitive space. The system may generate a validation score (e.g., resiliency score) based on the cognitive trajectories to identify whether to retrain the neural network to enhance functioning of the neural network. For example, the neural network may be retrained with a specific focus to strengthen an underperforming portion and/or process of the neural network. Thus, the technology may provide security-enhanced and resiliency-enhanced neural networks. Furthermore, the technology may implement a new and enhanced neural network analysis to identify a cognitive process (which may otherwise be opaque and unknown to most systems and/or developers) of the neural network through cognitive trajectory maps and trajectories. Embodiments are applicable for use with all types of semiconductor integrated circuit (“IC”) chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SOCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines.
Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines. Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting. 
The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated. As used in this application and in the claims, a list of items joined by the term “one or more of” may mean any combination of the listed terms. For example, the phrases “one or more of A, B or C” may mean A; B; C; A and B; A and C; B and C; or A, B and C. Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
The same numbers are used throughout the disclosure and the figures to reference like components and features. Numbers in the 100 series refer to features originally found inFIG.1; numbers in the 200 series refer to features originally found inFIG.2; and so on. DESCRIPTION OF THE EMBODIMENTS Video summarization is a computer vision technique that automatically finds representative and salient moments in a video. It enables users to quickly browse a large collection of videos by seeing highlights and summaries. It also helps save storage and communication bandwidth by keeping only informative sections. The summary will consist of a set of key frames or sub-scenes that succinctly convey an overall storyline of the whole video. Previously, many existing video summarization solutions were limited to low-level features such as color histograms or optical flow. This content-agnostic approach could not capture semantically meaningful moments when building a summary and worked only for limited cases where professional editing is assumed, like movies and TV news. Also, their applications were confined to videos of a single event or situation. Embodiments described herein determine high-level semantic contexts (e.g. activities, objects, locations, and people) residing in a video and produce a content-aware summary with an importance scoring mechanism from the semantic contexts. Instead of relying on hand-crafted features, convolutional neural networks are used that extract data-driven features from millions of images. This deep feature is more invariant to the erratic camera motion, illumination change and scene clutter that occur severely in long or unedited videos captured by normal users. FIG.1is a block diagram of an electronic device that enables video summarization using semantic information. The electronic device100may be, for example, a laptop computer, tablet computer, mobile phone, smart phone, or a wearable device, among others.
The electronic device100may include a central processing unit (CPU)102that is configured to execute stored instructions, as well as a memory device104that stores instructions that are executable by the CPU102. The CPU may be coupled to the memory device104by a bus106. Additionally, the CPU102can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. Furthermore, the electronic device100may include more than one CPU102. The memory device104can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems. For example, the memory device104may include dynamic random access memory (DRAM). The electronic device100also includes a graphics processing unit (GPU)108. As shown, the CPU102can be coupled through the bus106to the GPU108. The GPU108can be configured to perform any number of graphics operations within the electronic device100. For example, the GPU108can be configured to render or manipulate graphics images, graphics frames, videos, or the like, to be displayed to a user of the electronic device100. In some embodiments, the GPU108includes a number of graphics engines, wherein each graphics engine is configured to perform specific graphics tasks, or to execute specific types of workloads. The CPU102can be linked through the bus106to a display interface110configured to connect the electronic device100to a display device122. The display device122can include a display screen that is a built-in component of the electronic device100. The display device122can also include a computer monitor, television, or projector, among others, that is externally connected to the electronic device100. The CPU102can also be connected through the bus106to an input/output (I/O) device interface114configured to connect the electronic device100to one or more I/O devices116. 
The I/O devices116can include, for example, a keyboard and a pointing device, wherein the pointing device can include a touchpad or a touchscreen, among others. The I/O devices116can be built-in components of the electronic device100, or can be devices that are externally connected to the electronic device100. Accordingly, the electronic device100also includes a microphone array118for capturing audio. The microphone array118can include any number of microphones, including two, three, four, five microphones or more. In some embodiments, the microphone array118can be used together with an image capture mechanism120to capture synchronized audio/video data, which may be stored to a storage device122as audio/video files. The storage device122is a physical memory such as a hard drive, an optical drive, a flash drive, an array of drives, or any combinations thereof. The storage device122can store user data, such as audio files, video files, audio/video files, and picture files, among others. The storage device122can also store programming code such as device drivers, software applications, operating systems, and the like. The programming code stored to the storage device122may be executed by the CPU102, GPU108, or any other processors that may be included in the electronic device100. High-level contents in a video and their correlation can lead to a more semantically meaningful summary. Video from the GPU or storage122can be summarized in a context aware fashion. A context-aware video summary can be generated from unedited videos captured by wearable and mobile devices. High-level semantic entities may be extracted, such as activities, objects and places from a video by using deep network like CNN (Convolutional Neural Network). A scoring mechanism may be implemented that evaluates an importance level of each scene based on the correlation between semantic entities (e.g. co-occurrence between activities and objects). 
The CPU102may be linked through the bus106to cellular hardware124. The cellular hardware124may be any cellular technology, for example, the 4G standard (International Mobile Telecommunications-Advanced (IMT-Advanced) Standard promulgated by the International Telecommunications Union-Radio communication Sector (ITU-R)). In this manner, the PC100may access any network130without being tethered or paired to another device, where the network130is a cellular network. The CPU102may also be linked through the bus106to WiFi hardware126. The WiFi hardware is hardware according to WiFi standards (standards promulgated as Institute of Electrical and Electronics Engineers' (IEEE) 802.11 standards). The WiFi hardware126enables the wearable electronic device100to connect to the Internet using the Transmission Control Protocol and the Internet Protocol (TCP/IP), where the network130is the Internet. Accordingly, the wearable electronic device100can enable end-to-end connectivity with the Internet by addressing, routing, transmitting, and receiving data according to the TCP/IP protocol without the use of another device. Additionally, a Bluetooth Interface128may be coupled to the CPU102through the bus106. The Bluetooth Interface128is an interface according to Bluetooth networks (based on the Bluetooth standard promulgated by the Bluetooth Special Interest Group). The Bluetooth Interface128enables the wearable electronic device100to be paired with other Bluetooth enabled devices through a personal area network (PAN). Accordingly, the network130may be a PAN. Examples of Bluetooth enabled devices include a laptop computer, desktop computer, ultrabook, tablet computer, mobile device, or server, among others. The block diagram ofFIG.1is not intended to indicate that the electronic device100is to include all of the components shown inFIG.1.
Rather, the computing system100can include fewer or additional components not illustrated inFIG.1(e.g., sensors, power management integrated circuits, additional network interfaces, etc.). The electronic device100may include any number of additional components not shown inFIG.1, depending on the details of the specific implementation. Furthermore, any of the functionalities of the CPU102may be partially, or entirely, implemented in hardware and/or in a processor. For example, the functionality may be implemented with an application specific integrated circuit, in logic implemented in a processor, in logic implemented in a specialized graphics processing unit, or in any other device. FIG.2is an overview of video summarization using semantic information. At block202, an input video is obtained. The input video can be video being currently captured by an image capture mechanism or stored or saved videos. Additionally, the input video can be a movie or a television show. At block204, the content of the input video can be extracted using a deep network. A deep network, as used herein, may be the result of a deep learning architecture that includes algorithms with many layers that can be used for feature extraction and transformation. Deep networks include a convolutional neural network (CNN), which is a feed-forward artificial neural network with individual neurons tiled in a manner that responds to overlapping regions in the visual field. For example, a convolutional neural network can consist of multiple layers of small neuron collections which can analyze each frame of the input video. The results of these collections are then tiled so that they overlap to obtain a better representation of the activity or objects present in the video data. This tiling can be repeated for every such layer of the convolutional neural network.
At block206, the importance of each frame, according to an analysis of the semantic contents of each frame, is evaluated in connection with a scoring mechanism. Semantic contents, as used herein, refers to the meaning associated with contents of each frame. The importance associated with the semantic contents of each frame may be scored differently based on a relationship between the various semantic contents found in each frame. At block208, a final summary is generated. The final summary can answer the following questions when key frames or key sub-scenes are presented to a user, such as: what series of activities does the original video contain; how many different locations or places the video was taken at; and what objects are important and who appears in the video? The final summary may be based on an importance score of each video clip above a particular threshold. In this manner, the resulting video summary is a context aware video summary through the use of semantic information associated with each video clip. Typical video summarization solutions are based on low-level features like color or motion cues and they have difficulty in answering these questions about the original video correctly because by nature they are given no clue on what part of videos are semantically representative moments. The present techniques are based on a high-level semantic context and can provide more meaningful answers close to what a user anticipates when watching the whole original video. Furthermore, the correlation between semantic contents enables important aspects across the video to be noted. For example, in a video where “brushing teeth” has been identified as an activity, the resulting summary should consider more frames containing key objects like tooth brushes/pastes as important frames and include them as part of the summary. As used herein, important frames are frames with a high score, as described below. 
A high score may be high compared to other scores, or it may be a score above a particular threshold. In the present techniques, the correlation between two semantic contents, such as an activity and an object, is leveraged to improve the quality of a video summary. The semantic contents are extracted from the video using deep machine learning techniques like CNN. Deep features trained from millions of images have a more discriminative power and in general show higher classification and detection accuracy than those from previously used hand-crafted features. Adding new semantic information into the summarization would be done more conveniently under the same deep learning architecture. Additionally, the length and content complexity of an input video according to the present techniques can handle a broader scope of use cases. Existing methods are usually restricted to video inputs containing a single, short event. From the benefit of deep features, which are more insensitive to illumination change, camera motion and scene clutter, a consistent summarization performance is achievable even for long hour videos with multiple events taken by wearable or mobile devices. FIG.3is an overview of a video summarization pipeline300. The pipeline300represents a content-aware video summarization technique that generates a set of key frames or abstracted clips from a given video through the analysis of semantic visual contents. It involves deep machine learning techniques as discussed above that provide activity classification and object detection. Types of input videos are not only movies or TV films which are professionally edited, but also unedited or unstructured ones taken by wearable or mobile devices. FIG.3illustrates three main modules comprising a summarization method: temporal segmentation302, importance scoring304, and summary selection306.
The temporal segmentation302module detects shot boundaries from activity transitions and divides the input video into a set of sub-scenes, where each sub-scene includes one or more activity segments. In embodiments, the shot boundary is an obvious transition between various settings of the video. For example, a shot boundary may be a transition from one room to another in a video. In each activity segment the importance scoring module304detects visual objects and evaluates a score for each image frame weighted by an object-to-activity co-occurrence. Finally, the summary selection module306chooses a high score region that would be the most important and salient moment within that activity segment. A region is a specific area in the image. Then a set of the summarized key clips from every sub-scene are collected and displayed for users. In particular, temporal segmentation302divides an input video308into sub-scenes that include semantically different activities. A convolutional neural network (CNN)310can be used for activity classification. For example, the CNN may be trained for 20 daily indoor/outdoor activities, such as brushing teeth, watching TV, using a computer, eating food, making food, laundry, etc. To find temporal segments, each frame is first classified into one of the activity classes312by an activity classifier via the CNN310. Once all the frames in the input video are classified, activity labels are temporally smoothed using a mode filter in order to prune misclassified labels. Each activity segment consists of continuous frames with the same activity spanning at least a minimum time. Segments shorter than a certain threshold are removed and adjacent segments belonging to the same activity are joined. Thus, the segments can be clustered by activity as indicated by arrow314. Once the video is divided into activity segments, best moments from each segment are selected according to a scoring function.
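The temporal segmentation steps just described — per-frame classification, mode filtering of labels, pruning of short segments, and merging of adjacent same-activity segments — can be sketched as follows. The window size and minimum segment length are illustrative assumptions; the patent only specifies a mode filter and "a certain threshold".

```python
from collections import Counter

def mode_filter(labels, window=5):
    """Smooth per-frame activity labels with a sliding mode filter
    to prune isolated misclassified labels."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return smoothed

def segment_by_activity(labels, min_len=3):
    """Group consecutive frames with the same label into
    (activity, start, end) segments, drop segments shorter than
    min_len frames, and merge adjacent segments sharing an activity."""
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start, i))
            start = i
    segments = [s for s in segments if s[2] - s[1] >= min_len]
    merged = []
    for seg in segments:
        if merged and merged[-1][0] == seg[0]:
            merged[-1] = (seg[0], merged[-1][1], seg[2])
        else:
            merged.append(seg)
    return merged
```

For example, a run of "watching TV" frames with a single misclassified frame in the middle is smoothed back into one TV segment before segmentation.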
Thus, the classified sub-scenes 316 are sent to the importance scoring module 304. For a given activity segment A, the following two scoring mechanisms may be applied. An activity-based score is calculated as follows:

S1 = ProbAct(f_i, A)

where ProbAct( ) is a classification probability returned by the activity classifier. The score S1, the classification probability, is a degree of belief that an image frame f_i at time i belongs to activity class A. The second score is an activity-to-object co-occurrence based score:

S2 = Σ_i ProbObj(O_i) · Concurrence(O_i, A)

where ProbObj(O_i) is a probability returned by the object detector that an object O_i belongs to its labelled class. The term Concurrence(O_i, A) represents how important an object is for a particular activity. The score S2 gives a higher score to a frame containing more important objects which are highly correlated with the labelled activity of the segment. The co-occurrence is computed as the fraction of frames of activity A that have an object O_i in them. The importance of an object O_i for an activity A is directly proportional to the value of Concurrence(O_i, A), as follows:

Concurrence(O_i, A) = F(O_i, A) / F(A)

where F(O_i, A) is the number of frames in activity A that contain an object O_i and F(A) is the total number of frames for activity A. The co-occurrence is learned from the object and activity labels of training data. Labels can be obtained by running both the activity classifier and the object detector on the training data or by human annotation. For example, Table 1 shows the first two most co-occurring objects with given activities, which is used in the present experiment.

TABLE 1

Activity           Object #1      Object #2
Wash face          Tap            Soap
Laundry            Washer/dry     Container
Watching TV        TV             Door
Reading/writing    Book           Laptop
Eating food        Dish           Mug/cup

Accordingly, a region proposal is made at reference number 318. A CNN is then used to classify the objects in each region of the frame. A fast, regional CNN 320 may be used to learn the objects in each region of the frame.
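The scoring mechanism above can be restated in code. This is a sketch with illustrative names: it directly implements Concurrence(O_i, A) = F(O_i, A) / F(A) and the S2 sum as defined in this passage, assuming per-frame object sets and activity labels from the training data.

```python
def concurrence(frames_objects, frame_activities, obj, activity):
    """Concurrence(O, A) = F(O, A) / F(A): the fraction of frames
    labelled with activity A that contain object O."""
    f_a = sum(1 for a in frame_activities if a == activity)
    f_oa = sum(1 for objs, a in zip(frames_objects, frame_activities)
               if a == activity and obj in objs)
    return f_oa / f_a if f_a else 0.0

def s2_score(detections, activity, cooccurrence):
    """S2 = sum_i ProbObj(O_i) * Concurrence(O_i, A) over the objects
    detected in one frame; `detections` is a list of
    (object_label, detector_probability) pairs and `cooccurrence`
    maps (object, activity) pairs to learned Concurrence values."""
    return sum(p * cooccurrence.get((obj, activity), 0.0)
               for obj, p in detections)
```

A frame of a "wash face" segment containing a confidently detected tap and soap thus scores higher than one containing only weakly detected, uncorrelated objects.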
The regional CNN 320 results in object classes 322. The object classes 322 and the activity classes 312 may be used in the equations above, and an object-to-activity correlation may be found, as indicated at arrow 324. The scores 326 resulting from the equations above and the object-to-activity correlation indicated at arrow 324 are sent to the summary selection module 306. The summary selection module 306 generates a final summary of the most important frames or highlights of the video clips 332. The final summary can be generated in various ways by inspecting the importance score distribution or a score graph 328. For example, a set of key image frames, each of which corresponds to the highest score for each activity segment, may be selected, as indicated by arrow 330. In another example, a set of key clips, each of which corresponds to the N seconds of each action segment showing the highest score sum, can be selected for the final summary. In embodiments, N can be chosen arbitrarily, such as five or ten seconds, depending on a user's preference or storage constraints. In embodiments, the activity classifier and object detector used are each CNN-based deep networks. The CNN-based deep networks are pre-trained on millions of images from a database for hundreds of different labels, and then fine-tuned for use with video summarization, including a modified set of labels and additional training datasets. A dataset may be, for example, millions of frames of dozens of people performing unscripted, everyday activities. Consider, for example, a dataset consisting of 20 videos collected by different individuals capturing their everyday activities at their own homes. Along with the videos, human annotations for activities and object classes are provided for ground truth. In evaluation, 15 videos are used for training and the remaining 5 are used for testing. FIG. 4 is an illustration of the qualitative result 400 of the CNN activity classifier. Six frames are illustrated with the resulting activity class.
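Selecting the N-second key clip with the highest score sum can be sketched as a sliding-window maximization over one segment's per-frame scores. The text does not prescribe an implementation, so this is an assumed approach with illustrative names.

```python
def best_window(scores, fps, n_seconds):
    """Return (start, end) frame indices of the contiguous N-second
    window with the highest sum of importance scores within one
    activity segment, using a running sliding-window sum."""
    w = min(len(scores), int(fps * n_seconds))
    best_start, best_sum = 0, sum(scores[:w])
    cur = best_sum
    for i in range(1, len(scores) - w + 1):
        # slide the window one frame: add the entering score,
        # drop the leaving score
        cur += scores[i + w - 1] - scores[i - 1]
        if cur > best_sum:
            best_sum, best_start = cur, i
    return best_start, best_start + w
```

Running this per activity segment and concatenating the resulting clips yields the kind of final summary described above; choosing only the single highest-scoring frame per segment instead gives the key-frame variant.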
Although six frames are illustrated, any number of frames may be used. For each frame, the activity classes are washing dishes 402, laundry 404, dry face/hand 406, makeup/comb 408, reading/writing 410, and walking outside 412. Activity classification as described above may be performed using a CNN-based deep network that is trained for classifying 20 activities using the human annotations. The activity classification yields a 55.1% accuracy in frame-by-frame testing. The weights of the network can be initialized by a pre-trained network like the MIT Places Hybrid-CNN (which is trained on 2.3 million images for 1183 object/scene classes). The weights may be defined as the coefficient numbers of each node in the CNN network. The temporal segmentation module uses the following 20 activity labels for segmenting the input video by activity shown in FIG. 4: Make-up/comb, Brush teeth, Wash face/hand, Dry face/hand, Laundry, Adjust thermostat, Wash dishes, Make tea/coffee, Drink water-bottle, Make food, Eat food, Mop kitchen, Vacuum, Take pills, Watch TV, Use computer, Use cell, Reading/writing, Drink coffee, Walk outside. The accuracy number of 55.1% is good considering that most of these activity classes are very similar to each other. Additionally, twenty activity labels are used in FIG. 4 as an example, and any number of activity labels can be used according to the present techniques. After the activities have been classified, object detection is performed. As discussed above, a fast-RCNN may be used for object detection. The fast-RCNN may take a CNN pre-trained with an image database organized according to the WordNet hierarchy, in which each node of the hierarchy is depicted by hundreds and thousands of images. The fast-RCNN then fine-tunes the network on visual object class detection data. The fast-RCNN may be additionally trained for more object classes using the annotation as provided.
FIG. 5 is an illustration of bounding boxes with detection probabilities for object classification. FIG. 5 includes frame 502, frame 504, frame 506, and frame 508. Frame 502 includes a person 510, a door 512, a towel 514, and a tap 516. Frame 504 includes a window 520, a tap 522, soap/liquid 524, and a cup 526. Frame 506 includes a television 530 and a cell phone 532. Frame 508 includes a door 540 and a book 542. Each detected object within a frame includes a bounding box with a detection probability in test images by Fast-RCNN. A video summary may be created by selecting the best five continuous seconds based on the S2 score (activity-to-object co-occurrence based scoring) from six activity segments. The same activity is prevented from being repeated in the summary, and longer activities are preferably included in the summary. To visualize the summary, the best moments of each activity segment are displayed in a grid layout in FIGS. 6A and 6B. In these figures, a bar 602 represents the entire duration of the video input and the cross-hatched portion 604 represents the time period of the activity segment. The solid portion 606 indicates the moment of the highest importance score shown in the grid. The location of the solid portion 606 is decided according to the scoring mechanism S2. The summarized outputs show the capability of the present algorithm to capture meaningful moments of important activities. When compared with naïve uniform sampling, many important activities in the video are missed by the latter. In content-unaware methods like uniform sampling, one drawback is that certain activities taking a very long duration may dominate the summary. FIG. 7 is a process flow diagram of a method for video summarization with semantic information. At block 702, each frame of a plurality of frames may be labeled according to an activity class. In embodiments, the plurality of frames is segmented into sub-scenes. At block 704, an object-to-activity correlation for each frame is determined.
In embodiments, the object-to-activity correlation results in one or more scores that indicate the likelihood that an object is related to the particular activity class for the frame. At block 706, a video summary is rendered. The video summary includes the frames with the highest object-to-activity correlation for each frame in a sub-scene or a shot boundary. FIG. 8 is a block diagram showing a medium 800 that contains logic for video summarization. The medium 800 may be a computer-readable medium, including a non-transitory medium that stores code that can be accessed by a processor 802 over a computer bus 804. For example, the computer-readable medium 800 can be a volatile or non-volatile data storage device. The medium 800 can also be a logic unit, such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or an arrangement of logic gates implemented in one or more integrated circuits, for example. The medium 800 may include modules 806-810 configured to perform the techniques described herein. For example, a segmentation module 806 may be configured to segment the video data and apply an activity class to the segments. A scoring module 808 may be configured to generate one or more scores based on an object-to-activity correlation. A summary module 810 may be configured to render frames with the highest object-to-activity correlation in a video summary. The block diagram of FIG. 8 is not intended to indicate that the medium 800 is to include all of the components shown in FIG. 8. Further, the medium 800 may include any number of additional components not shown in FIG. 8, depending on the details of the specific implementation. Example 1 is an apparatus.
The apparatus includes a controller to segment an incoming video stream into a plurality of activity segments, wherein each frame is associated with an activity; a scoring mechanism to calculate a score for each frame of each activity, wherein the score is based, at least partially, on a classification probability of each frame; and a summarizer to summarize the activity segments based on the score for each frame. Example 2 includes the apparatus of example 1, including or excluding optional features. In this example, the score is based on, at least partially, an activity-to-object co-occurrence. Example 3 includes the apparatus of any one of examples 1 to 2, including or excluding optional features. In this example, the activities are segmented by one or more shot boundaries and each frame is labeled according to an activity class. Example 4 includes the apparatus of any one of examples 1 to 3, including or excluding optional features. In this example, a convolutional neural network is used to classify each segment into an activity. Optionally, frames with mislabeled activities are relabeled according to the activities of surrounding frames. Example 5 includes the apparatus of any one of examples 1 to 4, including or excluding optional features. In this example, a fast, regional convolutional neural network is used to classify a plurality of objects of each frame. Example 6 includes the apparatus of any one of examples 1 to 5, including or excluding optional features. In this example, segments lower than a predefined threshold in length are discarded. Example 7 includes the apparatus of any one of examples 1 to 6, including or excluding optional features. In this example, the scoring mechanism determines a score that is the probability that a frame belongs to an activity based on objects in the frame. Example 8 includes the apparatus of any one of examples 1 to 7, including or excluding optional features. 
In this example, the scoring mechanism determines a score that is the probability that an object of a frame belongs to a class of objects combined with the importance of the object for the activity assigned to the frame. Example 9 includes the apparatus of any one of examples 1 to 8, including or excluding optional features. In this example, the summarizer is to create a summary by adding frames to the summary with scores above a predefined threshold. Example 10 includes the apparatus of any one of examples 1 to 9, including or excluding optional features. In this example, the summary is generated by selecting key image frames that correspond to the highest score for each segment, or key clips of N seconds are selected for each segment. Example 11 is a method for video summarization. The method includes labeling each frame of a plurality of frames according to an activity class; determining an object-to-activity correlation for an object within each frame; and rendering a video summary that comprises the frames with object-to-activity correlations above a predetermined threshold for each frame in a shot boundary. Example 12 includes the method of example 11, including or excluding optional features. In this example, the object to activity correlation is obtained by executing both an activity classifier and an object detector on a set of training data, by human annotation, or by any combination thereof. Example 13 includes the method of examples 11 or 12, including or excluding optional features. In this example, the importance of an object for an activity is directly proportional to the number of frames in an activity that contain the object divided by the total number of frames for that activity. Example 14 includes the method of any one of examples 11 to 13, including or excluding optional features. In this example, a convolutional neural network is used to label each frame according to an activity class.
Example 15 includes the method of any one of examples 11 to 14, including or excluding optional features. In this example, a fast, regional convolutional neural network is used to classify a plurality of objects of each frame. Example 16 includes the method of any one of examples 11 to 15, including or excluding optional features. In this example, a probability that the object belongs to a particular activity is used to determine, at least partially, the object to activity correlation. Example 17 includes the method of any one of examples 11 to 16, including or excluding optional features. In this example, a scoring mechanism determines a score that is the probability that a frame belongs to an activity based on objects in the frame, which is used to determine, at least partially, the object to activity correlation. Example 18 includes the method of any one of examples 11 to 17, including or excluding optional features. In this example, a scoring mechanism determines a score that is the probability that an object of a frame belongs to a class of objects combined with the importance of the object for the activity assigned to the frame, which is used to determine, at least partially, the object to activity correlation. Example 19 includes the method of any one of examples 11 to 18, including or excluding optional features. In this example, the video summary is rendered by creating a summary by adding frames to the summary with an object-to-activity correlation above a predefined threshold. Example 20 includes the method of any one of examples 11 to 19, including or excluding optional features. In this example, the video summary is generated by selecting key image frames that correspond to the highest object-to-activity correlation. Example 21 is a system.
The system includes a display; an image capture mechanism; a memory that is to store instructions and that is communicatively coupled to the image capture mechanism and the display; and a processor communicatively coupled to the image capture mechanism, the display, and the memory, wherein when the processor is to execute the instructions, the processor is to: label each frame of a plurality of frames according to an activity class; determine a score corresponding to each frame; and render a video summary that comprises the frames with scores above a predetermined threshold for each frame in a shot boundary. Example 22 includes the system of example 21, including or excluding optional features. In this example, the score is based on, at least partially, an activity-to-object co-occurrence of an object within each frame. Example 23 includes the system of any one of examples 21 to 22, including or excluding optional features. In this example, the score is based on, at least partially, a classification probability of each frame. Example 24 includes the system of any one of examples 21 to 23, including or excluding optional features. In this example, a convolutional neural network is used to label each frame of the plurality of frames according to an activity class. Example 25 includes the system of any one of examples 21 to 24, including or excluding optional features. In this example, frames with mislabeled activity classes are relabeled according to the activities of surrounding frames. Example 26 includes the system of any one of examples 21 to 25, including or excluding optional features. In this example, a fast, regional convolutional neural network is used to classify a plurality of objects of each frame. Example 27 includes the system of any one of examples 21 to 26, including or excluding optional features. In this example, frames with scores lower than the predetermined threshold are discarded. 
Example 28 includes the system of any one of examples 21 to 27, including or excluding optional features. In this example, the score is a probability that an object of a frame belongs to a class of objects combined with the importance of the object for the activity assigned to the frame. Example 29 is a tangible, non-transitory, computer-readable medium. The computer-readable medium includes instructions that direct the processor to label each frame of a plurality of frames according to an activity class; determine an object-to-activity correlation for an object within each frame; and render a video summary that comprises the frames with object-to-activity correlations above a predetermined threshold for each frame in a shot boundary. Example 30 includes the computer-readable medium of example 29, including or excluding optional features. In this example, the object to activity correlation is obtained by executing both an activity classifier and an object detector on a set of training data, by human annotation, or by any combination thereof. Example 31 includes the computer-readable medium of any one of examples 29 to 30, including or excluding optional features. In this example, the importance of an object for an activity is directly proportional to the number of frames in an activity that contain the object divided by the total number of frames for that activity. Example 32 includes the computer-readable medium of any one of examples 29 to 31, including or excluding optional features. In this example, a convolutional neural network is used to label each frame according to an activity class. Example 33 includes the computer-readable medium of any one of examples 29 to 32, including or excluding optional features. In this example, a fast, regional convolutional neural network is used to classify a plurality of objects of each frame. Example 34 includes the computer-readable medium of any one of examples 29 to 33, including or excluding optional features.
In this example, a probability that the object belongs to a particular activity is used to determine, at least partially, the object to activity correlation. Example 35 includes the computer-readable medium of any one of examples 29 to 34, including or excluding optional features. In this example, a scoring mechanism determines a score that is the probability that a frame belongs to an activity based on objects in the frame, which is used to determine, at least partially, the object to activity correlation. Example 36 includes the computer-readable medium of any one of examples 29 to 35, including or excluding optional features. In this example, a scoring mechanism determines a score that is the probability that an object of a frame belongs to a class of objects combined with the importance of the object for the activity assigned to the frame, which is used to determine, at least partially, the object to activity correlation. Example 37 includes the computer-readable medium of any one of examples 29 to 36, including or excluding optional features. In this example, the video summary is rendered by creating a summary by adding frames to the summary with an object-to-activity correlation above a predefined threshold. Example 38 includes the computer-readable medium of any one of examples 29 to 37, including or excluding optional features. In this example, the video summary is generated by selecting key image frames that correspond to the highest object-to-activity correlation. Example 39 is an apparatus. The apparatus includes a controller to segment an incoming video stream into a plurality of activity segments, wherein each frame is associated with an activity; a means to calculate a score for each frame; and a summarizer to summarize the activity segments based on the score for each frame. Example 40 includes the apparatus of example 39, including or excluding optional features.
In this example, the score is based on, at least partially, an activity-to-object co-occurrence. Example 41 includes the apparatus of any one of examples 39 to 40, including or excluding optional features. In this example, the score is based on, at least partially, a classification probability of each frame. Example 42 includes the apparatus of any one of examples 39 to 41, including or excluding optional features. In this example, the activities are segmented by one or more shot boundaries and each frame is labeled according to an activity class. Example 43 includes the apparatus of any one of examples 39 to 42, including or excluding optional features. In this example, a convolutional neural network is used to classify each segment into an activity. Optionally, frames with mislabeled activities are relabeled according to the activities of surrounding frames. Example 44 includes the apparatus of any one of examples 39 to 43, including or excluding optional features. In this example, a fast, regional convolutional neural network is used to classify a plurality of objects of each frame. Example 45 includes the apparatus of any one of examples 39 to 44, including or excluding optional features. In this example, segments lower than a predefined threshold in length are discarded. Example 46 includes the apparatus of any one of examples 39 to 45, including or excluding optional features. In this example, the means to calculate a score for each frame determines a score that is the probability that a frame belongs to an activity based on objects in the frame. Example 47 includes the apparatus of any one of examples 39 to 46, including or excluding optional features. In this example, the means to calculate a score for each frame determines a score that is the probability that an object of a frame belongs to a class of objects combined with the importance of the object for the activity assigned to the frame. 
Example 48 includes the apparatus of any one of examples 39 to 47, including or excluding optional features. In this example, the summarizer is to create a summary by adding frames to the summary with scores above a predefined threshold. Example 49 includes the apparatus of any one of examples 39 to 48, including or excluding optional features. In this example, the summary is generated by selecting key image frames that correspond to the highest score for each segment, or key clips of N-seconds are selected for each segment. Some embodiments may be implemented in one or a combination of hardware, firmware, and software. Some embodiments may also be implemented as instructions stored on the tangible, non-transitory, machine-readable medium, which may be read and executed by a computing platform to perform the operations described. In addition, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine, e.g., a computer. For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; or electrical, optical, acoustical or other form of propagated signals, e.g., carrier waves, infrared signals, digital signals, or the interfaces that transmit and/or receive signals, among others. An embodiment is an implementation or example. Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” “various embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the present techniques. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. Not all components, features, structures, characteristics, etc. 
described and illustrated herein need be included in a particular embodiment or embodiments. If the specification states a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element. It is to be noted that, although some embodiments have been described in reference to particular implementations, other implementations are possible according to some embodiments. Additionally, the arrangement and/or order of circuit elements or other features illustrated in the drawings and/or described herein need not be arranged in the particular way illustrated and described. Many other arrangements are possible according to some embodiments. In each system shown in a figure, the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary. It is to be understood that specifics in the aforementioned examples may be used anywhere in one or more embodiments. For instance, all optional features of the computing device described above may also be implemented with respect to either of the methods or the computer-readable medium described herein. 
Furthermore, although flow diagrams and/or state diagrams may have been used herein to describe embodiments, the techniques are not limited to those diagrams or to corresponding descriptions herein. For example, flow need not move through each illustrated box or state or in exactly the same order as illustrated and described herein. The present techniques are not restricted to the particular details listed herein. Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present techniques. Accordingly, it is the following claims including any amendments thereto that define the scope of the present techniques.
11861496

DETAILED DESCRIPTION Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Unless specified or limited otherwise, the terms “mounted,” “connected,” “supported,” and “coupled” and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings. The following discussion is presented to enable a person skilled in the art to make and use embodiments of the invention. Various modifications to the illustrated embodiments will be readily apparent to those skilled in the art, and the generic principles herein can be applied to other embodiments and applications without departing from embodiments of the invention. Thus, embodiments of the invention are not intended to be limited to embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein. The following detailed description is to be read with reference to the figures, in which like elements in different figures have like reference numerals. The figures, which are not necessarily to scale, depict selected embodiments and are not intended to limit the scope of embodiments of the invention.
Skilled artisans will recognize the examples provided herein have many useful alternatives and fall within the scope of embodiments of the invention. In some embodiments, the system is configured to provide an AI engine to effectively “understand” engineering diagrams and increasingly “intelligently” and autonomously interact with users, such as engineers and business managers, based on the information contained in the diagrams. In some embodiments, the system is configured to provide broad ingestion capabilities, which can include, without limitation, integrating ingested data with existing solid models. In some embodiments, the system is configured to integrate functionality with AVEVA Net (which is commercially available from AVEVA Group plc and its affiliates). Gateways are part of AVEVA Net functionality. Some embodiments provide an upgrade to the current capabilities of AVEVA Net, offering, without limitation, enhanced ingestion of data which may or may not be contained in CAD files, spreadsheets, and the like. In some embodiments, the system is configured to provide enhanced capabilities for automatically onboarding data to help optimize processes. In some embodiments, the system is configured to enable users to search, fetch, and effectively display engineering diagrams based on the user's content and/or preferences. Some embodiments can comprise, without limitation: piping and instrumentation diagrams (P&ID), electrical plans, power plant diagrams, electronic circuit diagrams, block diagrams, logic diagrams, HVAC diagrams, process flow diagrams, welding and wiring diagrams, flow charts, class diagrams, and state transition and timing diagrams. In some embodiments, the system is configured to recognize, without limitation, business diagrams such as audit and work flow diagrams and Gantt charts. In some embodiments, the system is configured to recognize, without limitation, construction diagrams, such as floor plans, site plans, and structural and plumbing drawings.
In some embodiments, the system is configured to recognize, without limitation, oil and gas diagrams, such as anticline, channel sands, fault trap, frac imaging, and the like. FIG. 1 illustrates a piping and instrumentation diagram according to some embodiments. In some embodiments, a piping and instrumentation diagram comprises several types of information. Some embodiments comprise information for training an AI engine to recognize and understand a wide variety of content types and formats. In some embodiments, the information for training an AI engine comprises, without limitation: text, instrumentation symbols, the locations of both, relationships and associations between the text and the symbols, and the like. In some embodiments, the system is configured to ingest information regarding the particular assets of interest. Many prior art systems can store information identifying an asset as, for example, a pump. While these systems are useful, some embodiments disclosed herein offer enhanced functionality which can include, without limitation, identification and ingestion of actual asset characteristics, which include a pump's operating characteristics. In some embodiments, this enhanced functionality can lead to significantly better optimization strategies because high efficiency assets (or conversely, low efficiency assets) are taken into account. In some embodiments, such assets can include tags (e.g., VBF-302) which can comprise the operating characteristics or enable tracking thereof through association with the asset. In some embodiments, the system is configured to build a neural network including these characteristics to lead to better optimization outcomes. In some embodiments, the system is configured to anonymize the data so that they can be used for other customers or environments.
As one non-limiting example, the pump operating characteristics can be stored from one ingestion and automatically or manually populated into another neural network when the same pump is used according to some embodiments. Further, in some embodiments, such enhanced knowledge by the neural network can be used to enhance setup and operating performance based on location or operating environment conditions. Additionally, such enhanced knowledge can improve predictive analyses including maintenance scheduling and the like. FIGS.2A-2Billustrate text processing with Optical Character Recognition (“OCR”) according to some embodiments. In some embodiments, the system is configured to use conventional software for printed text recognition such as Microsoft® Azure® OCR. In some embodiments, the system is configured to integrate Azure® Cognitive Services OCR to recognize and extract printed text on the piping and instrumentation diagrams. Azure® is a registered trademark of Microsoft Corporation of Redmond, Wash. In some embodiments, the system is configured to enable a user to convert one or more industrial images, such as paper sheets, to high resolution (200 dpi) images from SVG and PDF formats. In some embodiments, the system is configured to perform image pre-processing to remove noise and enhance both text and drawings on the sheets. In some embodiments, the Azure® service can have a 4200×4200 pixel size limit. In some embodiments, the system is configured to tile each paper sheet, process the tiles one at a time, and add the results back together. In some embodiments, the system is configured to capture sideways text. In some embodiments, the system is configured to capture sideways text by rotating each tile 90° clockwise and repeating the OCR process. In some embodiments, the system is configured to enable a user to correct the OCR model deficiencies as a series of special cases.
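The tiling workaround for the OCR pixel-size limit can be sketched as coordinate arithmetic: split the sheet into tiles that fit under the limit, OCR each tile, and translate each tile-local text location back to sheet coordinates so the results can be added back together. The 4200-pixel limit comes from the passage above; the helper names are illustrative.

```python
MAX_TILE = 4200  # per-request pixel limit described above

def tile_boxes(width, height, max_side=MAX_TILE):
    """Split a sheet into (left, top, right, bottom) tiles that each fit
    within the OCR size limit; the offsets let results be merged back."""
    boxes = []
    for top in range(0, height, max_side):
        for left in range(0, width, max_side):
            boxes.append((left, top,
                          min(left + max_side, width),
                          min(top + max_side, height)))
    return boxes

def merge_result(tile_box, x, y):
    """Translate a text location found inside a tile back to sheet coordinates."""
    left, top, _, _ = tile_box
    return left + x, top + y
```

A 9000×5000 sheet, for instance, splits into six tiles, and a word found at (10, 20) inside the second tile of the top row maps back to (4210, 20) on the sheet.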
In some embodiments, the OCR model can produce two or more results. In some embodiments, the first result comprises separating the recognized text and saving the text locations in CSV format for use in text NLP based search and display of piping and instrumentation diagrams. In some embodiments, a second result comprises erasing the recognized text. In some embodiments, the system is configured to take the text-removed piping and instrumentation diagram sheets forward for symbol recognition. It should be noted that Azure® OCR may not capture a small fraction of the text, which is left behind. In some embodiments, and as just one non-limiting example, the system is configured to execute a ball valve analysis. In some embodiments, ball valve recognition is divided into two separate classes. In some embodiments, the classes can comprise class horizontal ball valve (“HBV”) and class vertical ball valve (“VBV”). FIG.9shows ball valve classes according to some embodiments. In some embodiments, the system is configured to execute ball valve recognition—dataset generation. In some embodiments, having found no public or private dataset of P&ID symbols, the system is configured to generate a dataset of samples. In some embodiments, the system is configured to generate the dataset of samples by clipping a plurality of samples of a ball valve from 200 dpi or other resolution text-removed shell piping and instrumentation diagram sheets by hand. In some embodiments, the system is configured to turn each clipping by 90°, 180°, and 270°. In some embodiments, the system is configured to flip each clipping along horizontal and vertical axes. In some embodiments, the rotated and flipped clippings can produce a sample dataset of 216 horizontal and 216 vertical ball valves, for example. In some embodiments, the sample dataset is a seeding dataset to train the AI. FIG.10shows example rotated ball valve clippings according to some embodiments.
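The rotate-and-flip dataset generation can be sketched in pure Python on row-major images. The helper names are assumptions, and the exact sample count produced (e.g. 216 per class) depends on how many hand clippings are used and which variants are kept.

```python
def rot90(img):
    """Rotate a row-major 2-D image 90° clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def flip_h(img):
    """Flip along the vertical axis (mirror left-right)."""
    return [row[::-1] for row in img]

def flip_v(img):
    """Flip along the horizontal axis (mirror top-bottom)."""
    return img[::-1]

def augment(clip):
    """One hand clipping -> the original plus its 90°/180°/270°
    rotations and its two axis flips."""
    variants = [clip]
    r = clip
    for _ in range(3):
        r = rot90(r)
        variants.append(r)
    variants += [flip_h(clip), flip_v(clip)]
    return variants
```

Note that a 180° rotation is the same as flipping along both axes, so duplicate variants can be deduplicated before they enter the seeding dataset.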
In some embodiments, the system is configured to extract and transform data while keeping formats as close to native as possible or as desired. In some embodiments, the system is configured to build one or more neural networks for recognition of a wide variety of data types and formats. FIGS.3A-3Cillustrate a deep convolutional neural network for ball valve recognition according to some embodiments. In some embodiments, the system comprises a deep convolutional neural network for ball valve recognition. In some embodiments, various types of models can be trained for the task. In some embodiments, the first model can be a deep convolutional neural network (“DCNN”). In some embodiments, the DCNN can comprise a deep residual network architecture. In some embodiments, the system comprises a ResNet50 DCNN model. In some embodiments, the ResNet50 DCNN model is trained to recognize 1000 everyday objects, including faces, cars, footballs, and the like. In some embodiments, the ResNet50 DCNN is used as a Prior. In some embodiments, it can take large datasets to train a DCNN. In some embodiments, a dataset of 216 samples generated from the clippings is a small dataset, so a Prior comprising a database of everyday recognized images is used to increase the effective training set size. In some embodiments, the system is configured to integrate Python Keras to train the DCNN ResNet model. FIG.4illustrates boosted HAAR cascades for ball valve recognition according to some embodiments. In some embodiments, the system comprises boosted HAAR cascades for ball valve recognition (any step that references a specific industrial reference image is purely an example and not limiting, as any step described herein applies to any industrial reference image). In some embodiments, a HAAR cascade is known to learn simple features such as horizontal, vertical, and angled lines, which are core features of many problem spaces.
In some embodiments, a HAAR cascade is known to learn features from small datasets. In some embodiments, an algorithm can train the models and run recognition. In some embodiments, training and recognition can operate in the pixel domain. FIG.5illustrates a hit rate according to some embodiments. In some embodiments, there can be two important performance metrics for training a model. In some embodiments, the hit rate can comprise the converse of the false negative rate. In some embodiments, false negatives occur when a model fails to identify ball valves. In some embodiments, the false negative rate can comprise scores between 0.0 and 1.0. In some embodiments, 1.0 can represent a perfect hit rate that captures all ball valves. In some embodiments, a second metric can comprise a false positive rate. In some embodiments, false alarms occur when a model identifies areas that are not ball valves. In some embodiments, false positive rate scores can be between 0.0 and 1.0. In some embodiments, a 0.0 rate can represent no additional areas marked as ball valves. In some embodiments, the system is configured to accept 99.9% (e.g., -minHitRate 0.999) as the minimum hit rate accepted in the final model. In some embodiments, after the model training session, an algorithm can provide a recognition run on the positive training samples. In some embodiments, the recognition run must recognize 999 out of 1000 ball valves (i.e. the recognition run may miss only 1 in 1000). In some embodiments, the system is configured to define a maximum false positive rate for training. In some embodiments, the system is configured to enable the false positive rate to be set to 1% (-maxFalseAlarmRate 0.01). In some embodiments, the system is configured to provide a recognition run on the negative training samples. In some embodiments, the system is configured to misrecognize no more than 1 in 100 images in an industrial reference as ball valves.
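The two training metrics and the acceptance criteria above (-minHitRate 0.999, -maxFalseAlarmRate 0.01) can be expressed directly; the function names are illustrative.

```python
MIN_HIT_RATE = 0.999     # corresponds to -minHitRate 0.999
MAX_FALSE_ALARM = 0.01   # corresponds to -maxFalseAlarmRate 0.01

def hit_rate(true_positives, false_negatives):
    """Fraction of real ball valves the model finds — the converse of
    the false negative rate."""
    return true_positives / (true_positives + false_negatives)

def false_alarm_rate(false_positives, true_negatives):
    """Fraction of negative samples wrongly marked as ball valves."""
    return false_positives / (false_positives + true_negatives)

def training_done(tp, fn, fp, tn):
    """Both criteria met -> the cascade training session can end."""
    return (hit_rate(tp, fn) >= MIN_HIT_RATE and
            false_alarm_rate(fp, tn) <= MAX_FALSE_ALARM)
```

With 999 of 1000 valves found and 1 false alarm in 100 negatives, both thresholds are exactly satisfied and the session can end.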
In some embodiments, the system is configured to end the AI training session and consider the AI trained when these criteria are met. In some embodiments, the system is configured to enable a user to relax the false positive rate to generate a large sample set. In some embodiments, the industrial references training set comprises 5-10 piping and instrumentation diagram sheets. In some embodiments, a training set for a piping and instrumentation diagram sheet, for instance, is pre-processed by the system for text-removal from an OCR stage. In some embodiments, the system is configured to include a set of coordinates for each ball valve recognized. FIG.6illustrates recognition results according to some embodiments. In some embodiments, the set of coordinates returned by the system for each ball valve can include x and y coordinates. In some embodiments, the x and y coordinates are located on the top left corner of each recognized ball valve. In some embodiments, a set of coordinates comprises a width and height in pixels. In some embodiments, the full set of values can include: x, y, w, h. In some embodiments, the system is configured to draw out the recognition results on each respective original piping and instrumentation diagram sheet. In some embodiments, output statistics on recognition results can comprise a CSV file. In some embodiments, the system is configured to clip all recognized samples. In some embodiments, the recognized samples from the 5-10 training sheets are clipped into a database folder. Some embodiments include relaxing the accepted false positive rate to 0.6 and above. In some embodiments, clippings can be separated into positive and negative samples. In some embodiments, the system is configured to rotate and flip the positive samples. In some embodiments, the system is configured to add the positive samples to a training dataset.
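Writing the per-valve (x, y, w, h) recognition results to CSV, as described above, can be sketched as follows; the function name and column order are assumptions.

```python
import csv
import io

def detections_to_csv(detections):
    """Serialize (x, y, w, h) boxes — top-left corner plus width and
    height in pixels — to CSV, one row per recognized ball valve."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["x", "y", "w", "h"])
    for x, y, w, h in detections:
        writer.writerow([x, y, w, h])
    return buf.getvalue()

report = detections_to_csv([(10, 20, 32, 32), (100, 40, 30, 28)])
```

The same rows can later be used both to draw boxes on the original sheet and to clip the recognized samples into a database folder.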
In some embodiments, the system is configured to add negative samples as background samples in the dataset. In some embodiments, the system is configured to generate large sample sets. In some embodiments, a large sample set can comprise approximately 2000 positive and 14000 negative samples. In some embodiments, the system is configured to use the large sample set to train models as a seed dataset. In some embodiments, the large sample datasets are needed to provide excellent results on hundreds of piping and instrumentation diagram sheets previously unseen by the model. FIG.7illustrates OCR and symbol (ball valve) recognition workflow according to some embodiments. In some embodiments, the system includes a TF-IDF NLP kernel for keyword extraction. In some embodiments, text extracted from piping and instrumentation diagrams with OCR can be a mix of significant words with high information content and insignificant words with low information content. Some embodiments include a TF-IDF natural language processing (“NLP”) kernel. In some embodiments, the TF-IDF NLP kernel can separate significant words and can discard words with low information content. FIG.8illustrates digital twin assists according to some embodiments. Some embodiments include functionality based on various facets of life. In some embodiments, facets of life can comprise private, professional, and a user's role. Digital twin technologies can be very helpful for providing all data needed at any time, but sometimes the data lakes become extremely cumbersome and inefficient. Some embodiments provide needed context and data applicability to enhance use and analyses of the data lake. In some embodiments, a user's role can comprise the beginning of a career. In some embodiments, a user can want to learn fast, need help and answers on tap, and want to connect with likeminded users.
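A minimal TF-IDF kernel that separates significant words from low-information words can be sketched in pure Python. Words appearing on every sheet score zero (the logarithm of one) and are discarded, while rare, repeated words score high. This illustrates the standard TF-IDF formula, not the system's actual kernel.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each word in each document: term frequency times inverse
    document frequency. Boilerplate present on every sheet scores 0."""
    n = len(docs)
    df = Counter()                     # document frequency per word
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({w: (tf[w] / len(doc)) * math.log(n / df[w])
                       for w in tf})
    return scores

def significant(doc_scores, threshold=0.0):
    """Keep only words whose TF-IDF score exceeds the threshold."""
    return {w for w, s in doc_scores.items() if s > threshold}
```

On two toy sheets, a word shared by both ("the") is discarded and the sheet-specific word ("pump") survives as a keyword.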
In some embodiments, connection with likeminded users can be from anywhere in an approved trusted user network or a Bot network. In some embodiments, a user can trust Bots and know how to contribute to their learning. In some embodiments, a user can be notified by the system to multi-task efficiently. In some embodiments, a user can be presented with content/information that is relevant to a user's role, tasks, or schedule. In some embodiments, a user's search history and the time the user takes to act and/or execute can be used to drive a user's productivity. In some embodiments, a user can correct the Bot. Some embodiments include an assistant and some embodiments include a profile. In some embodiments, the assistant can be a user's profile. In some embodiments, a user's profile can drive work ethic and a digital fingerprint. Some embodiments include a human network. In some embodiments, the human network can comprise the Bots and assistants network. In some embodiments, the network can be provided by a Trusted Network and Human Digital Footprint in the Eco-System network (which may include assets/facilities information networks). In some embodiments, the Trusted Network can comprise opposites attract, complementary skills, one team—one fight, and crowd source analyses and recommendations to deliver desired results. Some embodiments include an opinion for suitability. In some embodiments, the opinion for suitability can record metrics, e.g. task completion rate, constant revisiting of the same content, and questions to learn more. In some embodiments, a user can ask questions to learn more, confirm what a user already knows, why something is the way it is, design intent, OEE, and surveys. Some embodiments include a human digital footprint. In some embodiments, the human digital footprint can be in the Eco-System. In some embodiments, people's network footprint can comprise past experience to solve the problem, challenge, and/or opportunity.
In some embodiments, a user can touch data or act upon data every day for a purpose (why). In some embodiments, a user can annotate and/or exercise the Eco-System relevance to all information networks. In some embodiments, the assistant can inform fellow Bots of new annotations and/or facts. Some embodiments include a reporter. In some embodiments, a reporter can comprise crawling, listening, and reporting. In some embodiments, as noted above, a reporter can crawl content. In some embodiments, a reporter can be interested in things that have been programmed and/or configured. In some embodiments, a reporter can report anomalies. In some embodiments, anomalies can be positive and negative. Some embodiments include dependability on people, systems, and output from other Bots for content to crawl. Some embodiments include adherence to privacy and respected anonymized content. In some embodiments, a reporter can listen. In some embodiments, a reporter may not have access to all systems. In some embodiments, a reporter can listen for published events, and even when a reporter does have access the reporter may still need events to initiate system action. In some embodiments, the reporter can report useful information of many types. In some embodiments, the reporter can state fact, unbiased and with zero emotion. In some embodiments, a reporter can infer a need for a baseline, standard, and/or other factors or data to begin from. In some embodiments, a reporter can evolve “reportation” based on one or more evolving baselines. In some embodiments, tribal knowledge can be used to design and evolve the system. Transient contributors such as independent contractors can add content which is readily digested by the system and can be acted on by one or more neural networks. One non-limiting example of this content is a cooling system expert's input which can be used in one or a large number of systems.
Some embodiments log or learn what users' skillsets are in order to best present the data they need. And some embodiments preserve the original data, allowing the data to be re-annotated or remapped to ensure completeness and enabling the addition of any needed context. FIG.9illustrates a computer system enabling or operating the system according to some embodiments. In some embodiments, the system can be operatively coupled to the computer system210shown inFIG.9or the computer system210can comprise the system. In some embodiments, the computer system210can include and/or operate and/or process computer-executable code of one or more of the above-mentioned program logic, software modules, and/or systems. Further, in some embodiments, the computer system210can operate and/or display information within one or more graphical user interfaces coupled to the system. In some embodiments, the computer system210can comprise a cloud server and/or can be coupled to one or more cloud-based server systems. In some embodiments, the system210can comprise at least one computing device including at least one processor232. In some embodiments, the at least one processor232can include a processor residing in, or coupled to, one or more server platforms. In some embodiments, the system210can include a network interface235aand an application interface235bcoupled to the at least one processor232capable of processing at least one operating system234. Further, in some embodiments, the interfaces235a,235bcoupled to at least one processor232can be configured to process one or more of the software modules238(e.g., such as enterprise applications). In some embodiments, the software modules238can include server-based software, and can operate to host at least one user account and/or at least one client account, and operate to transfer data between one or more of these accounts using the at least one processor232.
With the above embodiments in mind, it should be understood that the invention can employ various computer-implemented operations involving data stored in computer systems. Moreover, the above-described databases and models described throughout can store analytical models and other data on computer-readable storage media within the system210and on computer-readable storage media coupled to the system210. In addition, the above-described applications of the system can be stored on computer-readable storage media within the system210and on computer-readable storage media coupled to the system210. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, electromagnetic, or magnetic signals, optical or magneto-optical form capable of being stored, transferred, combined, compared and otherwise manipulated. In some embodiments of the invention, the system210can comprise at least one computer readable medium236coupled to at least one data source237a, and/or at least one data storage device237b, and/or at least one input/output device237c. In some embodiments, the invention can be embodied as computer readable code on a computer readable medium236. In some embodiments, the computer readable medium236can be any data storage device that can store data, which can thereafter be read by a computer system (such as the system210). In some embodiments, the computer readable medium236can be any physical or material medium that can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor232. In some embodiments, the computer readable medium236can include hard drives, network attached storage (NAS), read-only memory, random-access memory, FLASH based memory, CD-ROMs, CD-Rs, CD-RWs, DVDs, magnetic tapes, other optical and non-optical data storage devices. 
In some embodiments, various other forms of computer-readable media236can transmit or carry instructions to a computer240and/or at least one user231, including a router, private or public network, or other transmission device or channel, both wired and wireless. In some embodiments, the software modules238can be configured to send and receive data from a database (e.g., from a computer readable medium236including data sources237aand data storage237bthat can comprise a database), and data can be received by the software modules238from at least one other source. In some embodiments, at least one of the software modules238can be configured within the system to output data to at least one user231via at least one graphical user interface rendered on at least one digital display. In some embodiments of the invention, the computer readable medium236can be distributed over a conventional computer network via the network interface235awhere the system embodied by the computer readable code can be stored and executed in a distributed fashion. For example, in some embodiments, one or more components of the system210can be coupled to send and/or receive data through a local area network (“LAN”)239aand/or an internet coupled network239b(e.g., such as a wireless internet). In some further embodiments, the networks239a,239bcan include wide area networks (“WAN”), direct connections (e.g., through a universal serial bus port), or other forms of computer-readable media236, or any combination thereof. In some embodiments, components of the networks239a,239bcan include any number of user devices such as personal computers including for example desktop computers, and/or laptop computers, or any fixed, generally non-mobile internet appliances coupled through the LAN239a. For example, some embodiments include personal computers240acoupled through the LAN239athat can be configured for any type of user including an administrator. 
Other embodiments can include personal computers coupled through network239b. In some further embodiments, one or more components of the system210can be coupled to send or receive data through an internet network (e.g., such as network239b). For example, some embodiments include at least one user231coupled wirelessly and accessing one or more software modules of the system including at least one enterprise application238via an input and output (“I/O”) device237c. In some other embodiments, the system210can enable at least one user231to be coupled to access enterprise applications238via an I/O device237cthrough LAN239a. In some embodiments, the user231can comprise a user231acoupled to the system210using a desktop computer, and/or laptop computers, or any fixed, generally non-mobile internet appliances coupled through the internet239b. In some further embodiments, the user231can comprise a mobile user231bcoupled to the system210. In some embodiments, the user231bcan use any mobile computing device231cto wirelessly couple to the system210, including, but not limited to, personal digital assistants, and/or cellular phones, mobile phones, or smart phones, and/or pagers, and/or digital tablets, and/or fixed or mobile internet appliances. The subject matter described herein is directed to technological improvements to the field of artificial intelligence by providing artificial intelligence driven industrial reference recognition software that takes less computing resources to train and execute. The disclosure describes the specifics of how a machine including one or more computers comprising one or more processors and one or more non-transitory computer readable media implements the system and its improvements over the prior art. The instructions executed by the machine cannot be performed in the human mind or derived by a human using a pen and paper but require the machine to convert process input data to useful output data.
Moreover, the claims presented herein do not attempt to tie-up a judicial exception with known conventional steps implemented by a general-purpose computer; nor do they attempt to tie-up a judicial exception by simply linking it to a technological field. Indeed, the systems and methods described herein were unknown and/or not present in the public domain at the time of filing, and they provide technological improvements and advantages not known in the prior art. Furthermore, the system includes unconventional steps that confine the claims to a useful application. It is understood that the system is not limited in its application to the details of construction and the arrangement of components set forth in the previous description or illustrated in the drawings. The system and methods disclosed herein fall within the scope of numerous embodiments. The previous discussion is presented to enable a person skilled in the art to make and use embodiments of the system. Any portion of the structures and/or principles included in some embodiments can be applied to any and/or all embodiments: it is understood that features from some embodiments presented herein are combinable with other features according to some other embodiments. Thus, some embodiments of the system are not intended to be limited to what is illustrated but are to be accorded the widest scope consistent with all principles and features disclosed herein. Some embodiments of the system are presented with specific values and/or setpoints. These values and setpoints are not intended to be limiting and are merely examples of a higher configuration versus a lower configuration and are intended as an aid for those of ordinary skill to make and use the system.
Furthermore, acting as Applicant's own lexicographer, Applicant imparts the additional meaning to the following terms: “Substantially” and “approximately” when used in conjunction with a value encompass a difference of 5% or less of the same unit and/or scale of that being measured. In some embodiments, “substantially” and “approximately” are defined as presented in the specification in accordance with some embodiments. “Simultaneously” as used herein includes lag and/or latency times associated with a conventional and/or proprietary computer, such as processors and/or networks described herein attempting to process multiple types of data at the same time. “Simultaneously” also includes the time it takes for digital signals to transfer from one physical location to another, be it over a wireless and/or wired network, and/or within processor circuitry. The use of and/or, in terms of “A and/or B,” means one option could be “A and B” and another option could be “A or B.” Such an interpretation is consistent with the USPTO Patent Trial and Appeals Board ruling in ex parte Gross, where the Board established that “and/or” means element A alone, element B alone, or elements A and B together. As used herein, some embodiments recited with the term “can” or “may” or derivations thereof (e.g., the system display can show X) are for descriptive purposes only and are understood to be synonymous with “configured to” (e.g., the system display is configured to show X) for defining the metes and bounds of the system. The previous detailed description is to be read with reference to the figures, in which like elements in different figures have like reference numerals. The figures, which are not necessarily to scale, depict some embodiments and are not intended to limit the scope of embodiments of the system. Any of the operations described herein that form part of the invention are useful machine operations.
The invention also relates to a device or an apparatus for performing these operations. The apparatus can be specially constructed for the required purpose, such as a special purpose computer. When defined as a special purpose computer, the computer can also perform other processing, program execution or routines that are not part of the special purpose, while still being capable of operating for the special purpose. Alternatively, the operations can be processed by a general-purpose computer selectively activated or configured by one or more computer programs stored in the computer memory, cache, or obtained over a network. When data is obtained over a network the data can be processed by other computers on the network, e.g. a cloud of computing resources. The embodiments of the invention can also be defined as a machine that transforms data from one state to another state. The data can represent an article that can be represented as an electronic signal, and the machine can electronically manipulate the data. The transformed data can, in some cases, be visually depicted on a display, representing the physical object that results from the transformation of data. The transformed data can be saved to storage generally, or in particular formats that enable the construction or depiction of a physical and tangible object. In some embodiments, the manipulation can be performed by a processor. In such an example, the processor thus transforms the data from one thing to another. Still further, some embodiments include methods that can be processed by one or more machines or processors that can be connected over a network. Each machine can transform data from one state or thing to another, and can also process data, save data to storage, transmit data over a network, display the result, or communicate the result to another machine.
Computer-readable storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable storage media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Although method operations are presented in a specific order according to some embodiments, the execution of those steps does not necessarily occur in the order listed unless explicitly specified. Also, other housekeeping operations can be performed in between operations, operations can be adjusted so that they occur at slightly different times, and/or operations can be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing, as long as the processing of the overlay operations is performed in the desired way and results in the desired system output. It will be appreciated by those skilled in the art that while the invention has been described above in connection with particular embodiments and examples, the invention is not necessarily so limited, and that numerous other embodiments, examples, uses, modifications and departures from the embodiments, examples and uses are intended to be encompassed by the claims attached hereto. The entire disclosure of each patent and publication cited herein is incorporated by reference, as if each such patent or publication were individually incorporated by reference herein. Various features and advantages of the invention are set forth in the following claims.
11861497

The present inventive concept is best described through certain embodiments thereof, which are described herein with reference to the accompanying drawings, wherein like reference numerals refer to like features throughout. It is to be understood that the term invention, when used herein, is intended to connote the inventive concept underlying the embodiments described below and not merely the embodiments themselves. It is to be understood further that the general inventive concept is not limited to the illustrative embodiments described below and the following descriptions should be read in such light. Additionally, the word exemplary is used herein to mean, “serving as an example, instance or illustration.” Any embodiment of construction, process, design, technique, etc., designated herein as exemplary is not necessarily to be construed as preferred or advantageous over other such embodiments. Particular quality or fitness of the examples indicated herein as exemplary is neither intended nor should be inferred.

DETAILED DESCRIPTION

Deep Learning and Segmentation

Real-time image segmentation is an important problem in computer vision with a multitude of applications. Among them is the segmentation of hair for live color augmentation in beauty applications. This use case, however, presents additional challenges. First, unlike many objects with simple shape, hair has a very complex structure. For realistic color augmentation, a coarse hair segmentation mask is insufficient. One needs a hair matte instead. Secondly, many beauty applications run on mobile devices or in web browsers, where powerful computing resources are not available. This makes it more challenging to achieve real-time performance. There is described herein a system and method, etc. to accurately segment hair at over 30 fps on a mobile device. The hair segmentation system and method is based on convolutional neural networks (CNNs).
Most modern CNNs cannot run in real-time even on powerful GPUs and may occupy a large amount of memory. A target of the system and method herein is real-time performance on a mobile device. In a first contribution there is shown how to adapt the recently proposed MobileNets architecture of Google Inc for hair segmentation, which is both fast and compact enough to be used on a mobile device. Details regarding MobileNets may be found in "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" of Howard et al., arXiv:1704.04861v1 [cs.CV] 17 Apr. 2017, incorporated herein by reference. In the absence of detailed hair segmentation ground truth, the network is trained on noisy and coarse crowd-sourced data (coarse segmentation data where the labelling is not finely accurate at the pixel level). A coarse segmentation result, however, is esthetically unpleasing for hair color augmentation purposes. For realistic color augmentation, a more accurate hair matte yields improved results. In a second contribution, we propose a method for obtaining more accurate hair mattes in real-time without the need for accurate hair matte training data. First, it is shown how to modify the baseline network architecture to have the capacity for capturing fine-level details. Next, by adding a secondary loss function that promotes perceptually appealing matting results, it is shown that the network can be trained to yield detailed hair mattes using only coarse hair segmentation training data. We compare this approach to a simple guided filter (an edge preserving filter with a linear run time complexity with respect to image size) post-processing and show that it yields more accurate and sharper results. Before describing deep learning and segmentation in detail, earlier approaches to developing a hair coloring solution for video on mobile devices were undertaken and evaluated by the present applicant.
By way of an example, a classifier was developed incorporating a random forest (RF) model based on features of color histogram, position and gradient factors. The classifier processed the pixels successively, sliding a filter or kernel around the image, as is well-known, to determine whether a central pixel in the filter is a hair/not hair pixel. A sketch is shown in FIG. 1 where image 100 is processed by scanning a filter 102 over the image to successively process pixels using an RF classifier 104 that outputs whether a center pixel 106 of the filter 102 is a hair/not hair pixel. The output 108 may be used to define a hair matte (not shown) where the output is a data value of a corresponding pixel in the hair matte. It is understood that in the sketch of FIG. 1, filter 102, center pixel 106, and the illustrated path are not to scale. Results were promising in neither speed nor accuracy, so the approach was discarded in favor of deep learning. A reason that deep learning was not selected in the first place is that it is still quite challenging to make it run in real-time on mobile devices. Most deep learning architectures do not even run in real-time on powerful GPUs. An initial approach adapted a Visual Geometry Group neural net architecture, namely VGG16. A VGG16 based classification network pre-trained on ImageNet (a large visual database (an open source dataset) designed for use with object recognition software research) was adapted by removing the last 3 layers (e.g. fully connected layers and output layer) and converting it to a semantic segmentation network by adding several convolutional (often abbreviated herein as "conv") transpose layers. Though the results (output) were quite good, processing was slow, especially on a mobile device (over a second per frame). The approach thus shifted to finding a lighter architecture that is smaller in size and performs fewer operations to enhance processing speeds, etc. MobileNet Architecture MobileNet architecture of Google Inc.
is a light weight, pre-trained, deep learning neural network architecture implemented for mobile devices. FIG. 2 is a schematic illustration of the MobileNet network 200 showing source image 100 as input, network layers/layer groups 202 in accordance with the pre-trained CNN of MobileNet and class labels 204 as the output of the MobileNet network 200. Input images processed by MobileNet are 224×224 pixels in resolution with 3 colour values (i.e. 224×224×3). The MobileNet architecture employs depthwise separable convolutions (a form of factorized convolutions) to minimize processing operations (i.e. floating point operations, multiplications and/or adds, etc.). Depthwise separable convolutions factorize (e.g. split up functions of) a standard convolution into a depthwise convolution and a 1×1 convolution (also referenced as a "pointwise convolution") with a view to making processing faster by reducing or minimizing the number of operations required. The depthwise convolution applies a single filter to each input channel. The pointwise convolution then applies a 1×1 convolution to combine the outputs of the depthwise convolution, separating filtering and combining functions/operations into two steps rather than a single filtering and combining operation performed by standard convolutions. Thus the structures in architecture 200 may include two conv layers per structure, one depthwise conv layer and one pointwise conv layer, to define or illustrate a "layer group". Table 1 shows activation map size information and processing operation(s) information for each of the 17 layers/layer groups 202 beginning from left to right through to the Softmax operation. MobileNet, strictly, has 28 conv layers, counting from its first full conv layer through its fully connected layer.
TABLE 1

Layers/Layer Groups | Map Size | Processing Operation(s)
1 | 112 × 112 × 32 | Conv 3×3
2 | 112 × 112 × 64 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
3-4 | 56 × 56 × 128 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
5-6 | 28 × 28 × 256 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
7-12 | 14 × 14 × 512 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
13-14 | 7 × 7 × 1024 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
15 | 1 × 1 × 1024 | Average Pool 7×7
16 | 1 × 1 × 1024 | Fully Connected
17 | 1 × 1 × 1024 | Softmax

In Table 1, BN represents a batch normalization (batchnorm) function to normalize input (to a subsequent operation) by adjusting and scaling the activations (e.g. individual values in the activation map provided from one layer/operation to the next). ReLU is a rectifier and represents a rectified linear units function (e.g. max function(X, 0) for input X such that all negative values of X are set to 0). Downsampling is handled with strided convolution in the depthwise convolutions as well as in the first layer. A final downsample by Average Pool 7×7 uses a downsampling function based on averaging values in a 7×7 array. Softmax, or the normalized exponential function, "squashes" a K-dimensional vector z of arbitrary real values to a K-dimensional vector σ(z) of real values, where each entry is in the range (0, 1), and all the entries add up to 1 (e.g. a scaling and normalizing function). Usefully, the output can be used to represent a categorical distribution (a probability distribution over K different possible outcomes, i.e. categories or classes) and is thus used frequently with neural network classifiers classifying to K classes. The respective 17 layers are grayscale and pattern coded in FIG. 2 by processing operation(s). FIG. 3 is an illustration showing an adapted deep learning network 300 with layers/layer groups 302. Network 300 also receives source image 100 as input for processing. Network 300 (e.g. the layers/layer groups 302 thereof) is adapted to output a hair mask 304. Table 2 shows activation map size information and processing operation(s) information for each of the 22 layers/layer groups 302 beginning from left to right.
TABLE 2

Layer/Group | Map Size | Processing Operation(s)
1 | 112 × 112 × 32 | Conv 3×3
2 | 112 × 112 × 64 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
3-4 | 56 × 56 × 128 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
5-6 | 28 × 28 × 256 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
7-12 | 28 × 28 × 512 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
13-14 | 28 × 28 × 1024 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
15 | 56 × 56 × 1024 | Upsampling
16 | 56 × 56 × 64 | Depthwise Conv 3×3 + Conv 1×1 + ReLU
17 | 112 × 112 × 64 | Upsampling
18 | 112 × 112 × 64 | Depthwise Conv 3×3 + Conv 1×1 + ReLU
19 | 224 × 224 × 64 | Upsampling
20 | 224 × 224 × 64 | Depthwise Conv 3×3 + Conv 1×1 + ReLU
21 | 224 × 224 × 2 | Conv 1×1 + ReLU
22 | 224 × 224 × 2 | Softmax

Network 300 is similar to network 200 but is adapted. The downsampling to 14×14 resolution and then to 7×7 resolution of network 200 is avoided and the minimum resolution is 28×28 in layers 5-14. The final three layers are removed (i.e. the two fully connected layers and the Softmax layer, though a final Softmax layer is also used). To preserve fine details the output feature resolution is increased by changing the step size of the last two layers that have a step size of 2 to a step size of 1. Due to the use of pre-trained weights on ImageNet incorporated in the base architecture of MobileNet, the kernels for the layers with updated resolution are dilated by their scale factor with respect to their original resolution. Namely, kernels for layers that increased by a factor of 2 are dilated by 2 and kernels for layers that increased by a factor of 4 are dilated by 4. This yields a final minimum resolution of 28×28 in the encoder stage. Layers/layer groups 15 and forward may define a decoder stage. Layers 2-14, 16, 18 and 20 incorporate depthwise separable convolutions, i.e. factorized standard convolutions where a depthwise convolution applies a single filter to each input channel and a pointwise convolution combines the outputs of the depthwise convolution. Depthwise separable convolutions have the effect of reducing computation and model size, both of which are assistive for processing in a mobile device environment. The decoder phase takes the above CNN features from the encoder phase as input and upsamples them to a hair mask at the original 224×224 resolution.
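The computational saving from the depthwise separable factorization described above can be illustrated with a small multiply-accumulate count, following the cost formulas given in the MobileNets paper of Howard et al. The layer shape used below (3×3 kernels, 32 input channels, 64 output channels, 112×112 feature maps, as in layer 2 of Table 1) is chosen purely for illustration:

```python
# Multiply-accumulate counts for a standard vs. a depthwise separable
# convolution. D_K: kernel size, M: input channels, N: output channels,
# D_F: square feature map side length.

def standard_conv_ops(dk, m, n, df):
    # One D_K x D_K x M filter per output channel, applied at every position.
    return dk * dk * m * n * df * df

def depthwise_separable_ops(dk, m, n, df):
    # Depthwise: one D_K x D_K filter per input channel,
    # then pointwise: a 1x1 conv combining the M channels into N.
    depthwise = dk * dk * m * df * df
    pointwise = m * n * df * df
    return depthwise + pointwise

std = standard_conv_ops(3, 32, 64, 112)
sep = depthwise_separable_ops(3, 32, 64, 112)
print(std, sep, round(std / sep, 1))  # the factorized form needs ~7.9x fewer ops
```

The ratio agrees with the closed form 1/(1/N + 1/D_K²) from the paper: for N=64 and 3×3 kernels this is roughly an 8x reduction.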
Upsampling is performed at layers 15, 17 and 19, alternating with further feature analysis in layers 16, 18 and 20. Upsampling is performed by a simplified version of an inverted MobileNet architecture. At each stage, operations upsample the previous layer by a factor of 2 by replicating each pixel in a 2×2 neighborhood. Then, separable depthwise convolution is applied, followed by pointwise 1×1 convolutions with 64 filters, followed by ReLU as shown in Table 2. Operations conclude in layer/layer group 21 by adding a 1×1 convolution with Softmax activation and 2 output channels for hair/non-hair. Though not shown, the network is trained by minimizing the binary cross entropy loss L_M between predicted and ground truth masks. Binary cross entropy is discussed further below in relation to FIG. 5. Hence, FIG. 3 shows a fully convolutional MobileNet architecture for hair segmentation. The model thus defined and trained applies a plurality of convolutional (conv) filters in a succession of conv layers to detect respective features, where a first set of conv layers in the succession provides output which downsamples (and encodes) the image from a first image resolution down to a minimum resolution and a second set of conv layers in the succession upsamples the output back to the first image resolution. The model has upsampling operations interspersed before an initial layer of the second set of conv layers and before respective subsequent layers of the second set of conv layers to upsample output to the first image resolution. The first set may define an encoder stage and the second set a decoder stage. Training deep neural networks requires a large amount of data. While there are large datasets for general semantic segmentation, such datasets are much less common for hair segmentation. Moreover, unlike some objects like cars, which have a relatively simple shape, hair shape is very complex.
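The 2×2 pixel-replication upsampling described above can be sketched in a few lines of numpy (a sketch only; the actual implementation uses Core ML/TensorFlow layers):

```python
import numpy as np

def upsample_2x(feature_map):
    """Upsample an H x W x C activation map by a factor of 2 by
    replicating each pixel into a 2x2 neighborhood."""
    return feature_map.repeat(2, axis=0).repeat(2, axis=1)

x = np.array([[[1.0], [2.0]],
              [[3.0], [4.0]]])   # 2 x 2 x 1 map
y = upsample_2x(x)               # 4 x 4 x 1 map
print(y[:, :, 0])
# [[1. 1. 2. 2.]
#  [1. 1. 2. 2.]
#  [3. 3. 4. 4.]
#  [3. 3. 4. 4.]]
```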
Therefore, obtaining precise ground truth segmentation for hair is even more challenging. To cope with this challenge a pre-trained network on ImageNet was used. It was further fine-tuned on hair segmentation data. Nevertheless, several thousands of training images are still needed. Data was crowd-sourced using a hair coloring app where users have to manually mark their hair. While getting this data is inexpensive, the resulting hair segmentation labels are very noisy and coarse. This source data may be manually cleaned by only keeping the images of human faces with sufficiently good hair masks. This is considerably faster than marking the hair from scratch or fixing incorrect segmentations. Two in-house sets of test data are similarly defined. The above network 300 was implemented on an Apple iPad Pro 12.9 (2015) incorporating the Core ML™ library from Apple Corporation. Core ML automatically generates the MobileNet class from the MobileNet model and may be adapted as described. To take advantage of parallelization, the model was processed using the iPad's GPU and its related memory. It is noted that for some implementations to achieve desired processing the CNN may be processed (e.g. executed) by a GPU and in others it may be sufficient to process using a CPU of the computing device. Due to the compactness of the architecture (300) and usage of the Core ML library, a forward pass over a single image takes only 60 ms. The network was also implemented using Tensorflow™ (an open source software library for high performance numerical computation with support for machine learning and deep learning, originally developed by Google Inc.). Comparable processing was slower at ˜300 ms. While Tensorflow has NEON™ optimizations (NEON technology is an advanced single instruction multiple data (SIMD) architecture extension for certain processors of Arm Limited), it is not optimized for graphics processing.
It is recognized that GPUs on modern phones and tablets do pack considerable power. This model already yields very good qualitative and quantitative results as shown in Table 3, where In-house Set 1 comprises 350 face cropped images. It is manually annotated from crowd sourced data and is similar to the training data used. In-house Set 2 is 108 face images in 3:4 aspect (from the source input device) and is manually labeled. In-house Set 1 has coarser manual labeling and In-house Set 2 has finer manual labeling, with neither set having fine labelling.

TABLE 3

Test Set | Precision | Recall | F1 Score | IoU | Performance | Accuracy
In-house Set 1 | 0.89406 | 0.93423 | 0.91045 | 0.84302 | 0.77183 | 0.95994
In-house Set 2 | 0.904836 | 0.92926 | 0.914681 | 0.84514 | 0.76855 | 0.95289

However this approach still does not capture all the hair, and provides a coarse and blobby mask only rather than an accurate alpha matte. Post-processing the resulting mask using guided filtering to make it more visually appealing corrects only minor errors, as described further below. There is a desire to improve the results to obtain truer (and preferably true) matting using CNNs. Two challenges exist in this framework: the CNN 300 downsampled the image quite heavily in the encoding stage, and thus the resulting masks cannot be expected to contain very high resolution detail. As well, neither training data nor test data is available at sufficient accuracy, as mentioned, to train and evaluate a matting method. To address the first issue of downsampling, skip connections are added to the architecture to redefine the upsampling operations. By adding skip connections, powerful but low-res features are combined with weaker but higher-res features. Note that the architecture has reverted to the original encoder architecture, going all the way to 7×7 resolution, since due to the added skip-connections there is no longer a need for restricting the downsampling.
Fewer skip-connections would be employed if the architecture of FIG. 3 were adapted and resolution were stopped at 28×28, for example. This way, shallower layers in the encoder, which contain high-res but weak features, are combined with low-res but powerful features from deeper layers. The layers are combined by first applying a 1×1 convolution to the incoming encoder layers to make the output depth compatible with the incoming decoder layers (64 for the three outer skip connections and 1024 for the inner skip connection) and then merging the layers using addition. For each resolution, the deepest encoder layer at that resolution is taken for the skip connection. FIG. 4 is an illustration showing an adapted deep learning network 400 with layers/layer groups 402. Network 400 also receives source image 100 as input for processing. Network 400 (e.g. the layers/groups 402 thereof) is adapted to output a hair mask 404. Table 4 shows activation map size information and processing operation(s) information for each of the 26 layers/layer groups 402 beginning from left to right. In the model of both network 300 and network 400, each of the plurality of conv filters generates an activation map and the activation map generated by one conv layer is output to provide an input to a next conv layer in the succession. The plurality of conv filters comprises a first set of conv filters and a second set of conv filters such that: the first set of conv filters processes the image in the first set of layers (e.g. in layers 2-14 of network 400) such as to comprise an encoder; and the second set of conv filters processes the image in the second set of layers (e.g. in layers 16, 18, 20, 22 and 24 in network 400) such as to comprise a decoder. The hair segmentation mask is defined from a final activation map output from a final conv layer (e.g. from layer 25) of the succession.
The model of network 300 or network 400 may comprise a normalization function and a rectifier function in succession interspersed with the first set of conv filters to normalize and linearly rectify output. The model may comprise the rectifier function interspersed with the second set of conv filters to linearly rectify output. In the model of network 300 or network 400 the first set of layers comprises: an initial layer defined by an initial conv 3×3 filter; and a plurality of subsequent depthwise separable convolutions each defined by, in succession, a respective depthwise conv 3×3 filter, a batch normalization function and a rectified linear units function, and a conv 1×1 filter followed by the batch normalization and the rectified linear units function. In the model of network 300 or network 400 the second set of layers comprises: a plurality of initial layers in succession each defined by a respective depthwise conv 3×3 filter, a conv 1×1 filter and a rectified linear units function; and a final layer defined by a final conv 1×1 filter and the rectified linear units function.

TABLE 4

Layer/Group | Map Size | Processing Operation(s)
1 | 112 × 112 × 32 | Conv 3×3
2 | 112 × 112 × 64 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
3-4 | 56 × 56 × 128 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
5-6 | 28 × 28 × 256 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
7-12 | 14 × 14 × 512 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
13-14 | 7 × 7 × 1024 | Depthwise Conv 3×3 + BN + ReLU + Conv 1×1 + BN + ReLU
15 | 14 × 14 × 1024 | Upsampling
16 | 14 × 14 × 64 | Depthwise Conv 3×3 + Conv 1×1 + ReLU
17 | 28 × 28 × 64 | Upsampling
18 | 28 × 28 × 64 | Depthwise Conv 3×3 + Conv 1×1 + ReLU
19 | 56 × 56 × 64 | Upsampling
20 | 56 × 56 × 64 | Depthwise Conv 3×3 + Conv 1×1 + ReLU
21 | 112 × 112 × 64 | Upsampling
22 | 112 × 112 × 64 | Depthwise Conv 3×3 + Conv 1×1 + ReLU
23 | 224 × 224 × 64 | Upsampling
24 | 224 × 224 × 64 | Depthwise Conv 3×3 + Conv 1×1 + ReLU
25 | 224 × 224 × 2 | Conv 1×1 + ReLU
26 | 224 × 224 × 2 | Softmax

FIG. 4 shows upsampling operations at layers 15, 17, 19, 21 and 23. Upsampling at layers 15, 17, 19 and 21 includes performing a skip connection with an output map of a previous layer. The map of a previous layer has a resolution equal to the target resolution.
The upsampling function at these layers performs a conv 1×1 operation on the map of an earlier corresponding layer of the encoder stage to combine channel information and performs an upsample operation (e.g. 2×2 as described) on the adjacent layer's activation map normally provided as input to the respective layer such that it has the same resolution as the map from the earlier layer. The output of the conv 1×1 operation and the upsampling operation are added. So the upsampling function uses a respective skip connection, where each of the respective skip connections combines a first activation map output from an adjacent conv layer in the succession as input to the next conv layer of the second set of conv layers; and a second activation map output from an earlier conv layer in the first set of conv layers, where the second activation map has a larger image resolution than the first activation map. Each of the respective skip connections is defined to add an output of a conv 1×1 filter applied to the second activation map with an output of the upsampling function applied to the first activation map to increase resolution of the first activation map to the larger image resolution. Lastly, the hair segmentation map is defined by applying a Softmax (normalized exponential function) to the final activation map to define values between 0 and 1 for each pixel of the hair segmentation map. The quantitative results of this approach are shown in Table 5:

TABLE 5

Test Set | Precision | Recall | F1 Score | IoU | Performance | Accuracy
In-house Set 1 | 0.88237 | 0.94727 | 0.91024 | 0.84226 | 0.770518 | 0.95817
In-house Set 2 | 0.908239 | 0.94196 | 0.92320 | 0.85971 | 0.79548 | 0.95699

Moreover, due to the decrease in the final encoder resolution, the above architecture is much faster even though it contains additional decoder layers. A forward pass using Core ML over a single image takes 30 ms on an Apple iPad Pro 12.9 (2015).
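The skip-connection merge described above (a conv 1×1 on the encoder map, a 2×2 upsample of the decoder map, then addition) can be sketched in numpy. The shapes below follow one skip connection of the network in FIG. 4, and the random weights are placeholders rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # A 1x1 convolution is per-pixel channel mixing: (H, W, Cin) @ (Cin, Cout)
    return x @ w

def upsample_2x(x):
    # Replicate each pixel into a 2x2 neighborhood.
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Assumed shapes: a 56x56x128 encoder map and a 28x28x64 decoder map.
encoder_map = rng.standard_normal((56, 56, 128))
decoder_map = rng.standard_normal((28, 28, 64))
w = rng.standard_normal((128, 64)) * 0.01       # placeholder 1x1 conv weights

# conv 1x1 makes the encoder depth compatible (128 -> 64); the decoder map is
# upsampled to the encoder resolution; the two are merged by addition.
merged = conv1x1(encoder_map, w) + upsample_2x(decoder_map)
print(merged.shape)  # (56, 56, 64)
```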
From an accuracy point of view, while it does seem better on the 2nd set than the model without skip connections, the results on the first set are inconclusive. This illustrates the second point above: that the coarse segmentation data has limited accuracy and this does not only impede training but testing as well. It is contended that quantitative evaluation with such data has more or less reached its capacity at the current performance level. Qualitatively, however, this architecture also seems only marginally better. One possible explanation is that while the skip-connections architecture now has the capacity to learn fine-level details, these details are not present in our current training data, making the resulting network output masks that are just as coarse as those in the training set. Evaluating and minimizing the mask-image gradient consistency loss Given the training and test data available, the CNN is limited to learning hair matting using only coarse segmentation training data. Motivated by the work of Rhemann et al., there is added a perceptually inspired measure of mask correctness. To that end, there is added a measure of consistency between the mask and image gradients. The distance (loss) measure is as follows. Mask-image gradient consistency loss is shown in Eq. 1:

L_C = ( Σ M_mag [ 1 − (I_x M_x + I_y M_y)^2 ] ) / ( Σ M_mag ),   (Eq. 1)

where I_x, I_y are the normalized image gradient, M_x, M_y are the normalized mask gradient, and M_mag is the mask gradient magnitude. The value of the loss (L_C) is small when there is agreement between the image and mask gradients. This loss is added to the original binary cross entropy loss with a weight w, making the overall loss

L = L_M + w L_C   (Eq. 2)

The combination of the two losses maintains the balance between being true to training masks while generating masks that adhere to image edges.
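A numpy sketch of Eq. 1 follows; the `eps` terms guarding division by zero are an implementation assumption, not part of the equation, and the ramp images are illustrative:

```python
import numpy as np

def gradient_consistency_loss(image, mask, eps=1e-6):
    """Mask-image gradient consistency loss of Eq. 1 for 2-D float arrays."""
    iy, ix = np.gradient(image)          # np.gradient returns (d/axis0, d/axis1)
    my, mx = np.gradient(mask)
    i_mag = np.sqrt(ix ** 2 + iy ** 2) + eps
    m_mag = np.sqrt(mx ** 2 + my ** 2)
    ix_n, iy_n = ix / i_mag, iy / i_mag  # normalized gradient directions
    mx_n, my_n = mx / (m_mag + eps), my / (m_mag + eps)
    cons = 1.0 - (ix_n * mx_n + iy_n * my_n) ** 2
    return float((m_mag * cons).sum() / (m_mag.sum() + eps))

# When the mask edges coincide with the image edges the loss is near zero.
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # horizontal intensity ramp
aligned = img.copy()                              # mask with the same edges
loss = gradient_consistency_loss(img, aligned)
print(round(loss, 4))  # 0.0 (mask and image gradients agree)
```

A mask whose gradients are orthogonal to the image gradients drives the dot product to zero and the loss toward 1.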
This mask-image gradient consistency loss measure is used to both evaluate existing models and train a new model where the binary cross entropy (loss) measure of Eq. 3 is combined with this new measure of Eq. 1 as indicated in Eq. 2. Cross-entropy loss or log loss measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual (true) label. In the present example, the model classifies each pixel in respective observations o as one of two classes c (hair, not hair) such that a binary cross entropy loss L_M may be calculated as in Eq. 3:

L_M = −( y log(p) + (1 − y) log(1 − p) )   (Eq. 3)

where y is the binary label (0 or 1) for a correct classification for the observation o and p is the predicted probability that the observation o is of class c. FIG. 5 shows a training architecture including network 500 with skip connections receiving image 100 (I) and producing mask 504 (M). Network 500 is comparable in structure to network 400 but is trained using the loss measures discussed. Also shown is input 502 comprising I_x, I_y (normalized image gradient) from image 100 and input 506 comprising M_x, M_y (normalized mask gradient) to a mask-image gradient consistency loss determiner component 508 for the training. Also shown is a binary cross-entropy loss determiner component 510. The mask-image gradient consistency loss L_C and binary cross-entropy loss L_M are combined (not shown) to define a loss L parameter to train the network 500 as described above. Thus the network 500 is a CNN that comprises a pre-trained network for image classification such as one pre-trained using open source image training data. The pre-trained network is adapted to define an object segmentation mask such as a hair segmentation mask rather than to classify the image per se. The CNN is further trained to minimize a mask-image gradient consistency loss when trained and may be so trained using coarse segmentation data.
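Eq. 3 and the combined loss of Eq. 2 can be checked numerically; the weight and loss values below are illustrative placeholders, not measured quantities:

```python
import math

def binary_cross_entropy(y, p, eps=1e-12):
    """Per-pixel binary cross entropy of Eq. 3: y is the 0/1 label,
    p the predicted probability of the positive (hair) class."""
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

# A confident correct prediction costs ~0; an uncertain one costs log 2.
print(round(binary_cross_entropy(1, 0.999), 4))  # 0.001
print(round(binary_cross_entropy(1, 0.5), 4))    # 0.6931

# Overall loss of Eq. 2 with an illustrative weight w and example loss values.
w = 0.5
L_M, L_C = 0.20, 0.10
L = L_M + w * L_C
print(L)  # 0.25
```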
This mask-image gradient consistency loss may be combined with a binary cross entropy loss. The CNN is further adapted to use skip connections between layers in an encoder stage and corresponding layers in a decoder stage to combine low resolution but powerful features and high resolution but weak features when upsampling in the decoder stage to define the object segmentation mask. The resulting masks look much more like mattes and are much more detailed. FIG. 6 shows an image table 600 depicting qualitative results. Shown is an image 602 to be processed (an example of image 100) as well as respective masks (examples of masks 304, 404 and 504) produced using the models of FIGS. 3-5 respectively. Mask 604 was produced using a model without skip connections (Model 1) per FIG. 3, mask 606 was produced using a model with skip connections (Model 2) per FIG. 4 and mask 608 was produced using a model with skip connections and mask-image gradient consistency loss following training (Model 3) per FIG. 5. Quantitatively, this method performs better according to the new consistency measure but slightly worse based on the rest (similarity to ground truth) of the measures. However, as mentioned earlier, given the current ground truth accuracy available, it may not be desired to maximize a prediction's agreement with ground truth beyond a certain level.
Table 6 shows quantitative results of all the models using the same test data sets:

TABLE 6

Mod. | Test Set | Prec. | Recall | F1 Score | IoU | Perform. | Acc. | Mask-Image Grad Cons. Loss | GndTruth Mask-Image Grad Cons. Loss
1 | In-house Set 1 | 0.89406 | 0.93423 | 0.91045 | 0.84302 | 0.77183 | 0.95994 | 0.2539 | 0.175
1 | In-house Set 2 | 0.904836 | 0.92926 | 0.914681 | 0.84514 | 0.76855 | 0.95289 | 0.275 | 0.171
2 | In-house Set 1 | 0.88237 | 0.94727 | 0.91024 | 0.84226 | 0.770518 | 0.95817 | 0.2306 | 0.175
2 | In-house Set 2 | 0.908239 | 0.94196 | 0.92320 | 0.85971 | 0.79548 | 0.95699 | 0.2414 | 0.171
3 | In-house Set 1 | 0.93495 | 0.887203 | 0.89963 | 0.83444 | 0.748409 | 0.96119 | 0.095 | 0.175
3 | In-house Set 2 | 0.95587 | 0.87826 | 0.91198 | 0.842757 | 0.74259 | 0.95407 | 0.0912 | 0.171

The matte output of Model 3 of the architecture of FIG. 5 was compared to the coarser mask output of Model 1 of the architecture of FIG. 3 and with the output of Model 1 with the addition of a guided filter. A guided filter is an edge-preserving filter and has a linear runtime complexity with respect to the image size. It takes only 5 ms to process a 224×224 image on an iPad Pro. FIG. 7 shows an image table 700 depicting qualitative results of the models of FIGS. 3 and 5. Shown is an image 702 to be processed (an example of image 100). Image 704 is the mask from Model 1 of FIG. 3, without guided filter post-processing. Image 706 is the mask from Model 1, with added guided filter post-processing. Image 708 is the mask (or matte) output from Model 3 of FIG. 5. As stated, Model 3 uses skip connections, is trained with the noisy and coarse segmentation data from crowd sourced data and is trained with the mask-image gradient consistency loss function. Image 706 using the guided filter shows capturing more details, with individual hair strands becoming apparent. However, the guided filter adds detail only locally near the edges of the mask. Moreover, the edges of the refined masks have a visible halo around them, which becomes even more apparent when the hair color has lower contrast with its surroundings. This halo causes color bleeding during hair recoloring.
The architecture of FIG. 5 yields sharper edges (as seen in image 708) and captures longer hair strands, without the unwanted halo effect seen in guided filter post-processing. As an additional bonus, the architecture of FIG. 5 runs twice as fast compared to the architecture of FIG. 3, taking only 30 ms per frame on a mobile device and without the need for an extra post-processing matting step. Due to the use of skip connections, which help with capturing high resolution detail, the architecture of FIG. 5 maintains the original MobileNet encoder structure with the deepest layers having 7×7 resolution. These layers have many depth channels (1024) and become very expensive to process with increased resolution. Having a 7×7 resolution makes processing much faster compared to the 28×28 minimum resolution in the architecture of FIG. 3. Experiments The method was evaluated on three datasets. First is the crowd-sourced dataset, consisting of 9000 training, 380 validation, and 282 testing images. All three subsets include the original images and their flipped versions. Since a target is hair matting on mobile devices, a pre-processing of the data is performed by detecting the face and cropping a region around it based on the scale expected for typical selfies. To compare the method to existing approaches, two public datasets are evaluated: the LFW Parts dataset of Kae et al. and the hair dataset of Guo and Aarabi. The former consists of 2927 250×250 images, with 1500 training, 500 validation, and 927 test images. Pixels are labeled into three categories: hair, skin, and background, generated at the superpixel level. The latter consists of 115 high-resolution images. Since it contains too few images to train on, we use our crowdsourced training data when evaluating on this set. To make this dataset consistent with our training data, pre-processing in a similar manner is performed (using face detection and cropping), adding flipped images as well.
Since in a few cases faces were not detected, the resulting dataset consists of 212 images. Training is done using a batch size of 4 using the Adadelta ("Adadelta: an adaptive learning rate method," arXiv preprint arXiv:1212.5701, 2012) method in Keras (F. Chollet et al., https://github.com/keras-team/keras, 2015), with learning rate 1.0, ρ=0.95, and ε=1e−7. L2 regularization is used with the weight 2·10−5 for convolution layers only. Depthwise convolution layers and the last convolution layer are not regularized. The loss balancing weight is set to w=0.5 in (Eq. 2). In the three-class LFW data, only the hair class contributes to the mask-image gradient consistency loss. The model is trained for 50 epochs and the best performing epoch is selected using validation data. Training on the crowd-sourced dataset takes 5 hours on an Nvidia GeForce GTX 1080 Ti™ (Nvidia, GeForce and GTX 1080 Ti are trademarks of Nvidia Corporation) GPU and less than an hour on LFW Parts due to the much smaller training set size. A. Quantitative Evaluation For quantitative performance analysis, the F1-score, Performance, IoU, and Accuracy are measured, averaged across all test images. To measure the consistency of image and hair mask edges, the mask-image gradient consistency loss (Eq. 1) is also reported. Recall that during the manual clean-up of crowd sourced images (image data), images were only filtered rather than corrected relative to the masks. As a result, the quality of the hair annotation is still poor. Therefore, prior to evaluation on the crowd-sourced data, manual correction of the test masks was undertaken, spending no more than 2 minutes per annotation. This yielded slightly better ground truth. Three variants of the method (Model 1, Model 1 with guided filtering, and Model 3) are evaluated on this relabeled data.
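The per-image ground-truth comparison measures above can be computed from a pair of binary masks as in the following numpy sketch (the "Performance" measure is omitted, as its definition is not given here; the toy 4×4 masks are illustrative only):

```python
import numpy as np

def mask_metrics(pred, gt):
    """F1-score, IoU, and Accuracy for binary hair masks.
    pred and gt are boolean arrays of the same shape."""
    tp = np.logical_and(pred, gt).sum()    # predicted hair, is hair
    fp = np.logical_and(pred, ~gt).sum()   # predicted hair, not hair
    fn = np.logical_and(~pred, gt).sum()   # missed hair
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    accuracy = (pred == gt).mean()
    return f1, iou, accuracy

gt = np.zeros((4, 4), dtype=bool); gt[:2, :] = True       # top half is hair
pred = np.zeros((4, 4), dtype=bool); pred[:2, :3] = True  # misses one column
f1, iou, acc = mask_metrics(pred, gt)
print(round(f1, 3), round(iou, 3), round(acc, 3))  # 0.857 0.75 0.875
```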
All three methods perform similarly with respect to the ground truth comparison measures; however, Model 3 is the clear winner in the gradient consistency loss category, indicating that its masks adhere much better to image edges. On the LFW Parts dataset, on-par performance is reported with the best performing method in Qin et al., but it is achieved in real-time on a mobile device. Only the accuracy measure is used for evaluation since it is the only measure used in Qin et al. Arguably, especially since LFW Parts was annotated at the superpixel level, the ground truth there may not be good enough for high-accuracy analysis. On the dataset of Guo and Aarabi there is reported an F1-score of 0.9376 and a Performance of 0.8253. HNN was re-run on this post-processed dataset and obtained similar performance to that reported by the authors, with an F1-score of 0.7673 and a Performance of 0.4674. B. Qualitative Evaluation The method is evaluated on publicly available selfie images for qualitative analysis. Model 1 yields good but coarse masks. Model 1 with guided filter produces better masks but with an undesirable blur around hair boundaries. The most accurate and sharpest results are achieved by Model 3. A failure mode of both guided filter post-processing and Model 3 is their under-segmentation of hair-like objects in the vicinity of hair, such as eyebrows in the case of dark hair or a bright background for light hair. In addition, highlights inside the hair can cause the hair mask from Model 3 to be non-homogeneous. C. Network Architecture Experiments Using the validation data, experiments with the number of decoder layer channels were undertaken, but it was observed that this does not have a large effect on accuracy, with 64 channels yielding the best results according to most measures. These experiments were done using the skip connections architecture in FIG. 4 without using the gradient consistency loss. Howard et al.
observed that MobileNets perform better given higher image resolution. Given a goal of accurate hair matting, experiments were undertaken using our Model 3, increasing the resolution beyond 224×224, which is the highest resolution at which MobileNet was trained on ImageNet. A qualitative comparison of masks inferred using Model 3 from 224×224 images vs. 480×480 images shows the 480×480 results look more accurate around the hair edges, with longer hair strands being captured, including those over a face (e.g., on the nose). However, the issues mentioned in the previous section are emphasized as well, with more of the hair mask bleeding into non-hair regions and the inside of the mask becoming non-homogeneous due to hair highlights. In addition, processing a larger image is significantly more expensive. As noted above, the CNN is configured for run-time execution on a user's computing device such as a mobile device. It may be configured such that execution of the CNN is at least in part on a GPU of such a device to take advantage of processing features (e.g. parallelization) in such GPUs. In some implementations, execution may be on a CPU. It is understood that training environments to define a trained network using the coarse segmentation data (training data) may vary from run-time environments. Training environments may have higher processing capabilities and/or more storage to hasten training operations. FIG.8is a block diagram of an example computing device800, in accordance with one or more aspects of the present disclosure, such as a handheld mobile device (e.g. a smartphone or tablet). However, it may be another computing device such as a laptop, desktop, workstation, etc. As noted earlier, a goal of the investigation and development effort is to produce a deep neural network for operation on a handheld mobile device with limited resources to produce processed images (e.g. video) effectively and efficiently.
Computing device800comprises a user device, for example, to acquire one or more images such as a video and process the images to change one or more attributes and present new images. In one example, the images are processed to change a color of hair in the images. Computing device800comprises one or more processors802, one or more input devices804, a gesture-based I/O device806, one or more communication units808and one or more output devices810. Computing device800also includes one or more storage devices812storing one or more modules and/or data. Modules may include deep neural network model814, application816having components for a graphical user interface (GUI818), color prediction820and image acquisition822. Data may include one or more images for processing (e.g. image824), one or more masks generated from the one or more images (e.g. mask826generated from image824), and one or more new images generated using the one or more masks and the one or more images (e.g. new image828). Application816provides the functionality to acquire one or more images such as a video and process the images to change one or more attributes and present new images. In one example, the images are processed to change a color of hair in the images. The application performs the image processing using a deep neural network as provided by neural network model814. The network model may be configured as any of the models shown inFIGS.3,4and5. Application816may be associated with certain attribute data such as color data830for changing one or more attributes of the image. Changing attributes relates to changing pixel values to create a new image. It is understood that image related data (e.g. for storing, printing and/or displaying images) may be represented using various color models and data formats and application816may be configured accordingly. In other examples, the attribute data may relate to changing an effect such as lighting conditions, texture, shape, etc.
Application816may be configured with one or more functions for changing attribute(s) (not shown), for example, to apply an effect to the image at a desired location (e.g. an object or portion thereof of interest in the image identified by the deep neural network). Storage device(s)812may store additional modules such as an operating system832and other modules (not shown) including communication modules; graphics processing modules (e.g. for a GPU of processors802); map module; contacts module; calendar module; photos/gallery module; photo (image/media) editor; media player and/or streaming module; social media applications; browser module; etc. Storage devices may be referenced as storage units herein. Communication channels838may couple each of the components802,804,806,808,810,812, and any modules814,816and826for inter-component communications, whether communicatively, physically and/or operatively. In some examples, communication channels838may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. The one or more processors802may implement functionality and/or execute instructions within computing device800. For example, processors802may be configured to receive instructions and/or data from storage devices812to execute the functionality of the modules shown inFIG.8, among others (e.g. operating system, applications, etc.). Computing device800may store data/information to storage devices812. Some of the functionality is described further herein below. It is understood that operations may not fall exactly within the modules814,816and826ofFIG.8such that one module may assist with the functionality of another.
Computer program code for carrying out operations may be written in any combination of one or more programming languages, e.g., an object oriented programming language such as Java, Smalltalk, C++ or the like, or a conventional procedural programming language, such as the "C" programming language or similar programming languages. Computing device800may generate output for display on a screen of gesture-based I/O device806or in some examples, for display by a projector, monitor or other display device. It will be understood that gesture-based I/O device806may be configured using a variety of technologies (e.g. in relation to input capabilities: resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure-sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive screen technology; and in relation to output capabilities: a liquid crystal display (LCD), light emitting diode (LED) display, organic light-emitting diode (OLED) display, dot matrix display, e-ink, or similar monochrome or color display). In the examples described herein, gesture-based I/O device806includes a touchscreen device capable of receiving as input tactile interaction or gestures from a user interacting with the touchscreen. Such gestures may include tap gestures, dragging or swiping gestures, flicking gestures, pausing gestures (e.g. where a user touches a same location of the screen for at least a threshold period of time) where the user touches or points to one or more locations of gesture-based I/O device806. Gesture-based I/O device806may also detect non-tap gestures. Gesture-based I/O device806may output or display information, such as a graphical user interface, to a user.
The gesture-based I/O device806may present various applications, functions and capabilities of the computing device800including, for example, application816to view images, process the images and display new images, messaging applications, telephone communications, contact and calendar applications, Web browsing applications, game applications, e-book applications and financial, payment and other applications or functions among others. Although the present disclosure illustrates and discusses a gesture-based I/O device806primarily in the form of a display screen device with I/O capabilities (e.g. touchscreen), other examples of gesture-based I/O devices may be utilized which may detect movement and which may not comprise a screen per se. In such a case, computing device800includes a display screen or is coupled to a display apparatus to present new images. Computing device800may receive gesture-based input from a track pad/touch pad, one or more cameras, or another presence or gesture sensitive input device, where presence includes, for example, motion of all or part of the user. One or more communication units808may communicate with external devices (not shown) for example to receive new attribute data or application functionality, to share new images with another computing device, printing device or display device (all not shown) via one or more communication networks (not shown) by transmitting and/or receiving network signals on the one or more networks. The communication units may include various antennae and/or network interface cards, chips (e.g. Global Positioning Satellite (GPS)), etc. for wireless and/or wired communications. Input devices804and output devices810may include any of one or more buttons, switches, pointing devices, cameras, a keyboard, a microphone, one or more sensors (e.g. biometric, etc.), a speaker, a bell, one or more lights, a haptic (vibrating) device, etc.
One or more of same may be coupled via a universal serial bus (USB) or other communication channel (e.g.838). A camera (an input device804) may be front-oriented (i.e. on the same side as the gesture-based I/O device806) to permit a user to capture image(s) using the camera while looking at the gesture-based I/O device806to take a "selfie". The one or more storage devices812may take different forms and/or configurations, for example, as short-term memory or long-term memory. Storage devices812may be configured for short-term storage of information as volatile memory, which does not retain stored contents when power is removed. Volatile memory examples include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), etc. Storage devices812, in some examples, also include one or more computer-readable storage media, for example, to store larger amounts of information than volatile memory and/or to store such information for long term, retaining information when power is removed. Non-volatile memory examples include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memory (EPROM) or electrically erasable and programmable (EEPROM) memory. Though not shown, a computing device may be configured as a training environment to train neural network model814for example using the network as shown inFIG.5along with appropriate training and/or testing data. Computing device800comprises a processing unit and a storage unit coupled to the processing unit. The storage unit stores instructions, which when executed by the processing unit, configure the computing device to: store and provide a deep learning neural network model (e.g. comprising a convolutional neural network) configured to classify pixels of an image to determine whether each of the pixels is a member of an object of interest (e.g. is a hair pixel); and define and present a new image (e.g.
a colored hair image) by changing one or more attributes of the pixels that are a member of the object of interest. Changing operations use a mask defined from the image using the deep neural network model. In one example, the object of interest is hair and the attribute is color. Thus changing one or more attributes applies a new hair color to hair pixels in the image using a hair segmentation mask. The deep neural network is adapted to a light architecture for a computing device that is a mobile device (e.g. a smartphone or tablet) having fewer processing resources than a "larger" device such as a laptop, desktop, workstation, server or other comparable generation computing device. The deep neural network model may be configured as a depthwise separable convolution neural network comprising convolutions in which individual standard convolutions are factorized into a depthwise convolution and a pointwise convolution. The depthwise convolution is limited to applying a single filter to each input channel and the pointwise convolution is limited to combining outputs of the depthwise convolution. The deep neural network model may be further configured to comprise operations to perform skip connections, between layers in the encoder and corresponding layers in the decoder, such as when upsampling. The deep neural network model may be trained using a mask-image gradient consistency loss measure, whereby the mask-image gradient consistency loss is determined relative to a processed image and a generated mask, and the loss measure is used to train the model. The mask-image gradient consistency loss may be determined as per Eq. 1. FIG.9is a flowchart of operations900of computing device800in accordance with an example.
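Before turning to the operations ofFIG.9, the parameter saving of the depthwise separable factorization described above can be checked with a short sketch; the 3×3 kernel and 128-channel sizes are illustrative assumptions only.

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: each of the c_out filters spans
    # all c_in input channels with a k x k kernel.
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # Depthwise: a single k x k filter per input channel,
    # plus a pointwise (1 x 1) convolution combining the outputs.
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 128)       # 147456 parameters
sep = separable_params(3, 128, 128)  # 17536 parameters
ratio = std / sep                    # roughly 8.4x fewer
```

The saving grows with the number of output channels, which is why the factorization suits the "light architecture" constraint of a mobile device.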
Operations900relate to a user of computing device800using an application such as application816to take a selfie comprising a video comprising video frames defining images of the user's head including the user's hair, selecting a new hair color and watching a video in which the hair color of the user's hair is the new hair color. At902operations receive an image via an input device or unit such as a camera which may invoke GUI818and image acquisition component822. The image is provided for display (904) such as to gesture-based I/O device806or another display device. To assist with processing for hair on mobile devices, images (data) may be pre-processed (not shown). Images may be pre-processed by detecting the face and cropping a region around it based on the scale expected for typical selfies. At906, input is received via gesture-based I/O device806and GUI818(an interactive GUI) to select a new hair color. GUI818may be configured to present hair color data814via an interactive interface for selection. In some examples, application816may be configured to suggest a color. Though not shown, operations may include determining an existing or current hair color from the image received and optionally other color (e.g. skin color) or light information, etc. User preferences represented as data (not shown) may be solicited through GUI818. Operations may further include providing same to the color prediction component820. Color prediction component820may have a function to suggest an appropriate color (e.g. one or more candidates for new hair colors from color data830) responsive to one or more of the existing hair color, skin or other color and/or light information, the user preferences, trends, etc. At908operations receive a second image for processing to apply the new attribute, namely the new hair color, to pixels of hair in the second image that are identified by the deep neural network model814as described herein. 
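The coloring step, applying the new hair color to the pixels selected by the segmentation mask, can be sketched as a mask-weighted blend. The patent does not specify a blending formula, so the alpha-blend and the strength parameter below are assumptions for illustration.

```python
import numpy as np

def recolor_hair(image, mask, new_color, strength=0.8):
    """Blend a target hair color into an image using a soft
    segmentation mask with values in [0, 1]. Illustrative only:
    the blending formula is an assumption, not from the patent."""
    image = image.astype(np.float32)
    alpha = (mask.astype(np.float32) * strength)[..., None]
    color = np.asarray(new_color, dtype=np.float32)
    out = (1.0 - alpha) * image + alpha * color
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

# 2x2 toy image: left column is "hair" (mask = 1).
img = np.full((2, 2, 3), 100, dtype=np.uint8)
msk = np.array([[1.0, 0.0], [1.0, 0.0]])
out = recolor_hair(img, msk, new_color=(200, 50, 50))
# Hair pixels shift toward the new color; others are unchanged.
```

A soft (non-binary) mask from the network makes hair boundaries blend smoothly instead of showing hard color edges.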
As the camera is continually capturing images, the first image used to define an existing hair color is no longer current. At910operations define a hair segmentation mask that identifies hair pixels in the (second) image using model814. At912operations define a new image (e.g. a colored hair image) by applying the new hair color to hair pixels in the image using the hair segmentation mask. At914, operations provide the new image for output to gesture-based I/O device806in a GUI provided by GUI component818. As further images are received from the camera, further respective masks are defined and further respective new images with colored hair are defined and presented. Additional or alternative GUIs or GUI functions may be provided to facilitate other attribute changes, live comparisons of existing and new hair colors or two new hair colors or to save still images or video segments showing a new hair color. Operations may present a GUI via the gesture-based I/O device where the GUI comprises a first portion to view the image and a second portion to view the colored hair image simultaneously such as in a split screen arrangement. Operations may apply a lighting condition treatment to the hair color (existing or new color) to show the hair in a different lighting condition. Operations may be configured to show a first new hair color and a second new hair color in respective new images. A single mask may be defined and provided to two separate coloring operations to apply the two new hair colors. The respective new color images for the first and second colors may be provided sequentially or simultaneously. Additionally or alternatively to any GUI interface options or controls discussed, voice activated controls may be provided. Other light architectures may be adapted in a similar manner to produce a hair segmentation mask by using skip connections between corresponding layers of an encoder and decoder and trained using a mask-image gradient consistency loss function.
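The skip connections just described, which combine low-resolution but semantically strong decoder features with high-resolution but spatially precise encoder features, can be sketched for any such encoder-decoder. Nearest-neighbour upsampling followed by channel concatenation is one common realization (element-wise addition is another); it is assumed here for illustration, as are the toy feature-map sizes.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour upsampling of an (H, W, C) feature map.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def skip_connect(decoder_feat, encoder_feat):
    # Upsample the coarse decoder features to the encoder's spatial
    # resolution, then concatenate along the channel axis.
    up = upsample2x(decoder_feat)
    assert up.shape[:2] == encoder_feat.shape[:2]
    return np.concatenate([up, encoder_feat], axis=-1)

dec = np.zeros((7, 7, 64), dtype=np.float32)    # deep, coarse
enc = np.zeros((14, 14, 32), dtype=np.float32)  # shallow, fine
out = skip_connect(dec, enc)
# out.shape == (14, 14, 96)
```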
One example of such an architecture is ShuffleNet™, a computation-efficient CNN designed especially for mobile devices with very limited computational power (e.g. 10-150 MFLOPs) using pointwise group convolution and channel shuffle, of Zhang et al. and Megvii Technology Limited. Details regarding ShuffleNet are provided in "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" of Zhang et al., arXiv:1707.01083v2 [cs.CV], 7 Dec. 2017, incorporated herein by reference. FIG.10is a flowchart of steps1000to define an exemplary CNN in accordance with a teaching herein. At1002, a CNN pre-trained for image classification and configured to execute on a computing device having limited computational power is obtained (e.g. MobileNet, ShuffleNet, etc.). At1004a step adapts the CNN to define an object segmentation mask, for example, removing fully connected layers, defining upsampling operations, etc. At1006a step adapts the CNN to use skip connections between layers in an encoder stage and corresponding layers in a decoder stage to combine low resolution but powerful features and high resolution but weak features when upsampling in the decoder stage. At1008a step is performed to obtain segmentation training data comprising labelled data for object segmentation. As noted, this may be crowd-sourced and may be noisy and coarse segmentation data where the object segmentation mask (labelling) is not fine. A step1010is performed to define a mask-image gradient consistency loss function as a parameter to minimize when training. At1012a step is performed to further train the pre-trained CNN as adapted using the segmentation training data and the mask-image gradient consistency loss function to minimize the mask-image gradient consistency loss to generate a further trained CNN. At1014the further trained CNN is tested with segmentation testing data comprising labelled data for object segmentation. It will be apparent that some of the steps inFIG.10are optional.
For example and not limitation, adapting to use skip connections may be optional. Adapting to minimize the gradient consistency loss (and thus similarly training for such loss) may be optional. The training data may be noisy and coarse segmentation data (e.g. without fine labelling) such as obtained from crowd-sourced data. The thus trained CNN may be provided for storing and using on a mobile device as described. In addition to computing device aspects, a person of ordinary skill will understand that computer program product aspects are disclosed, where instructions are stored in a non-transient storage device (e.g. a memory, CD-ROM, DVD-ROM, disc, etc.) to configure a computing device to perform any of the method aspects described herein. Practical implementation may include any or all of the features described herein. These and other aspects, features and various combinations may be expressed as methods, apparatus, systems, means for performing functions, program products, and in other ways, combining the features described herein. A number of embodiments have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the processes and techniques described herein. In addition, other steps can be provided, or steps can be eliminated, from the described process, and other components can be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims. Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of them mean "including but not limited to" and they are not intended to (and do not) exclude other components, integers or steps. Throughout this specification, the singular encompasses the plural unless the context requires otherwise.
In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise. Features, integers, characteristics, compounds, chemical moieties or groups described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example unless incompatible therewith. All of the features disclosed herein (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The invention is not restricted to the details of any foregoing examples or embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings) or to any novel one, or any novel combination, of the steps of any method or process disclosed.
11861498 | DETAILED DESCRIPTION The following part will illustrate exemplary embodiments of the present disclosure with reference to the drawings, including various details of the embodiments of the present disclosure for a better understanding. The embodiments should be regarded only as exemplary ones. Therefore, those skilled in the art should appreciate that various changes or modifications can be made with respect to the embodiments described herein without departing from the scope and spirit of the present disclosure. Similarly, for clarity and conciseness, the descriptions of the known functions and mechanisms are omitted in the descriptions below. FIG.1is a schematic diagram according to a first embodiment of the present disclosure. As shown inFIG.1, a method for compressing a neural network model according to this embodiment includes the following steps: S101: acquiring a to-be-compressed neural network model; S102: determining a first bit width, a second bit width and a target thinning rate corresponding to the to-be-compressed neural network model; S103: obtaining a target value according to the first bit width, the second bit width and the target thinning rate; and S104: compressing the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain a compression result of the to-be-compressed neural network model. 
The method for compressing a neural network model according to this embodiment includes the steps of firstly, determining the first bit width, the second bit width and the target thinning rate corresponding to the acquired to-be-compressed neural network model; secondly, obtaining the target value according to the target thinning rate, the first bit width and the second bit width; and finally, compressing the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain the compression result of the to-be-compressed neural network model; in this embodiment, the neural network model is compressed by the determined first bit width, second bit width and target thinning rate, thereby ensuring that the obtained compression result has higher precision, simplifying compression steps of the neural network model, and improving a compression efficiency of the neural network model. In this embodiment, during the S101of acquiring the to-be-compressed neural network model, a neural network model input at an input end may be used as the to-be-compressed neural network model, or a neural network model selected on a network at the input end may be used as the to-be-compressed neural network model. In this embodiment, after the S101of acquiring the to-be-compressed neural network model, the S102of determining the first bit width, the second bit width and the target thinning rate corresponding to the acquired to-be-compressed neural network model is executed. In this embodiment, during the S102, the first bit width, the second bit width and the target thinning rate input or selected at the input end may be acquired as the first bit width, the second bit width and the target thinning rate corresponding to the acquired to-be-compressed neural network model. 
In order to enable the compression result of the to-be-compressed neural network model to be better matched with a processor for running the to-be-compressed neural network model, in this embodiment, during the S102of determining the first bit width corresponding to the to-be-compressed neural network model, an adopted optional implementation may include: determining a processor for running the acquired to-be-compressed neural network model; and taking a vector width of the determined processor as the first bit width corresponding to the to-be-compressed neural network model. It may be understood that one processor is equivalent to a vector system structure, and different processors have different vector widths; for example, the vector width of an Intel Avx2 processor is 256 bits, and the vector width of an Arm neon processor is 128 bits. In order to enable the compression result of the to-be-compressed neural network model to be better matched with an instruction set in the processor for running the to-be-compressed neural network model, in this embodiment, during the S102of determining the second bit width corresponding to the to-be-compressed neural network model, an adopted optional implementation may include: determining the processor for running the acquired to-be-compressed neural network model; and determining the second bit width corresponding to the to-be-compressed neural network model according to a vector width of the instruction set in the determined processor. In this embodiment, during the S102of determining a second bit width corresponding to the to-be-compressed neural network model according to a vector width of the instruction set in the determined processor, a vector width of an instruction supported by the instruction set may be directly used as the second bit width, or a vector width less than the vector width of the instruction supported by the instruction set may be used as the second bit width. 
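A minimal sketch of the two determinations above; the vector widths (AVX2: 256 bits, NEON: 128 bits) come from the text, while the dictionary and function names are illustrative assumptions.

```python
# Illustrative lookup only: the widths for the two example
# processors are taken from the text; the mapping itself is
# an assumption, not part of the patent.
VECTOR_WIDTH = {"intel_avx2": 256, "arm_neon": 128}

def first_bit_width(processor):
    # First bit width = vector width of the target processor.
    return VECTOR_WIDTH[processor]

def second_bit_width(instruction_bits, narrower=None):
    # Either the width of the supported instruction (e.g. int8 -> 8)
    # or an optional narrower width such as 4 or 1.
    return narrower if narrower is not None else instruction_bits

v = first_bit_width("intel_avx2")   # 256
b = second_bit_width(8)             # 8
```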
For example, if the instruction set in the determined processor supports calculation of an int8 instruction, in this embodiment, a vector width of 8 bits may be used as the second bit width corresponding to the to-be-compressed neural network model, or a vector width of 4 bits or a vector width of 1 bit less than 8 bits may be used as the second bit width corresponding to the to-be-compressed neural network model. In addition, in this embodiment, during the S102of determining a target thinning rate corresponding to the to-be-compressed neural network model, an adopted optional implementation may include: acquiring attribute information of the to-be-compressed neural network model, the attribute information in this embodiment being type information, task information, or the like, of the to-be-compressed neural network model; and taking a thinning rate corresponding to the determined attribute information as the target thinning rate corresponding to the to-be-compressed neural network model. That is, in this embodiment, a corresponding relationship between the attribute information and the thinning rate may be preset, and then, the target thinning rate is determined according to the attribute information of the to-be-compressed neural network model, thus avoiding that the compression result of the to-be-compressed neural network model is affected by an inappropriate target thinning rate, and then improving accuracy of the determined target thinning rate. In this embodiment, after the S102of determining a first bit width, a second bit width and a target thinning rate corresponding to the to-be-compressed neural network model, the S103of obtaining a target value according to the determined first bit width, second bit width and target thinning rate is executed. In this embodiment, the target value obtained in the S103is used to thin parameters of the to-be-compressed neural network model. 
Specifically, in this embodiment, during the S103of obtaining a target value according to the determined first bit width, second bit width and target thinning rate, an adopted optional implementation may include: calculating a product between the second bit width and the target thinning rate; and taking a division result between the first bit width and the calculated product as the target value. In this embodiment, the target value may be obtained using the following calculation formula: N=V/(R×B) where N represents the target value; R represents the target thinning rate; B represents the second bit width; and V represents the first bit width. In this embodiment, after the S103of obtaining a target value, the S104of compressing the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain a compression result of the to-be-compressed neural network model is executed. Specifically, in this embodiment, during the S104of compressing the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain the compression result of the to-be-compressed neural network model, an adopted optional implementation may include: thinning the parameters in the to-be-compressed neural network model according to the target value, the first bit width and the second bit width to obtain a first neural network model; and obtaining the compression result of the to-be-compressed neural network model according to the first neural network model.
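The target value formula above can be checked numerically; the concrete widths and thinning rate below are assumed example values.

```python
def target_value(v, b, r):
    # N = V / (R x B): the first bit width divided by the product
    # of the target thinning rate and the second bit width.
    return v / (r * b)

# e.g. a 256-bit vector width, an int8 second bit width and a
# thinning rate of 0.5 -> each parameter unit holds 64 parameters.
n = target_value(256, 8, 0.5)   # 64.0
```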
In this embodiment, during the S104of thinning the parameters in the to-be-compressed neural network model according to the target value, the first bit width and the second bit width to obtain the first neural network model, an adopted optional implementation may include: taking consecutive parameters, in a number corresponding to the target value, in the to-be-compressed neural network model as a parameter unit; sorting the parameters contained in the parameter unit according to an ascending order of absolute values; obtaining a zero setting quantity according to the first bit width, the second bit width and the target value; and setting the parameters in each parameter unit of the to-be-compressed neural network model that are ranked before the zero setting quantity to zero to obtain the first neural network model. That is, in this embodiment, with the method of setting the parameters with the smaller absolute values in the to-be-compressed neural network model to zero, the to-be-compressed neural network model is compressed, and since the zero setting quantity is determined by combining the first bit width, the second bit width and the target value, accuracy of parameter thinning may be improved, and the compression result of the to-be-compressed neural network model is ensured to have higher precision. In this embodiment, after the S104of setting the parameters in each parameter unit of the to-be-compressed neural network model that are ranked before the zero setting quantity to zero, an order of the parameters in each parameter unit may be restored; or a mask sequence corresponding to each parameter unit may be generated, where the mask sequence includes 0/1 vectors in a number corresponding to the target value, and each 0/1 vector is used to represent whether a parameter at a certain location is zero.
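The per-unit thinning can be sketched with NumPy. The patent derives the zero setting quantity from the two bit widths and the target value; the particular choice below (keep two of four values per unit, i.e. zero quantity n − keep) is an assumption for illustration, as are the toy weights.

```python
import numpy as np

def thin_parameters(params, n, keep):
    """Split a 1-D parameter array into units of n consecutive values
    and zero all but the `keep` largest-magnitude values per unit.
    Also returns the 0/1 mask described in the text (assumes kept
    parameters are themselves non-zero)."""
    units = params.reshape(-1, n).copy()
    # Indices of the (n - keep) smallest |values| in each unit.
    drop = np.argsort(np.abs(units), axis=1)[:, : n - keep]
    np.put_along_axis(units, drop, 0.0, axis=1)
    mask = (units != 0).astype(np.uint8)   # one 0/1 vector per unit
    return units.reshape(-1), mask

w = np.array([0.1, -2.0, 0.3, 1.5, -0.2, 0.05, 4.0, -1.0])
thinned, mask = thin_parameters(w, n=4, keep=2)
# Unit 1 keeps -2.0 and 1.5; unit 2 keeps 4.0 and -1.0.
```

Note that because the kept values stay at their original positions, only the mask (not a re-sorted order) is needed downstream.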
In this embodiment, after the S104of obtaining a first neural network model, the obtained first neural network model may be used as the compression result of the to-be-compressed neural network model. In order to further improve a compression effect of the neural network model, in this embodiment, during the S104of obtaining a compression result of the to-be-compressed neural network model according to the first neural network model, an adopted optional implementation may include: according to the second bit width, quantifying parameters which are not set to zero in the first neural network model; and taking the neural network model after quantification as the compression result of the to-be-compressed neural network model. In this embodiment, during the S104of quantifying parameters which are not set to zero in the first neural network model according to the second bit width, an adopted optional implementation may include: determining a value range according to the second bit width; and representing the parameters which are not set to zero in the first neural network model as values in the determined value range. That is, in this embodiment, after the parameters in the neural network model are thinned, parameters which are not pruned in the neural network model may be further quantified; that is, the compression result of the to-be-compressed neural network model is obtained by combining thinning and quantification, thus further compressing a volume of the to-be-compressed neural network model. FIG.2is a schematic diagram according to a second embodiment of the present disclosure. 
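Before turning to FIG.2, the quantification step just described can be illustrated as follows; symmetric max-abs linear scaling is an assumption here, since the disclosure only requires representing the non-zero parameters as values in the range determined by the second bit width.

```python
def quantify(params, second_bit_width):
    """Represent the parameters which are not set to zero as integers in the
    signed range determined by the second bit width B, i.e.
    [-(2**(B - 1) - 1), 2**(B - 1) - 1]. Max-abs scaling is assumed."""
    q_max = 2 ** (second_bit_width - 1) - 1
    largest = max((abs(p) for p in params if p != 0), default=0)
    if largest == 0:
        return list(params), 1.0                 # nothing to quantify
    scale = largest / q_max
    quantified = [0 if p == 0 else round(p / scale) for p in params]
    return quantified, scale                     # keep scale for inverse quantification
```

Returning the scale alongside the quantified values allows the inverse quantification used later during training of the quantified model.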
As shown inFIG.2, in this embodiment, the S104of compressing the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain a compression result of the to-be-compressed neural network model includes the following steps: S201: acquiring training data; S202: thinning the parameters in the to-be-compressed neural network model according to the target value, the first bit width and the second bit width to obtain a thinned neural network model; S203: training the thinned neural network model using the training data to obtain a loss function value and model precision of the thinned neural network model; S204: in response to determining that the model precision does not meet a first preset condition, and after adjusting the parameters of the to-be-compressed neural network model using the loss function value, proceeding to the step of obtaining a thinned neural network model until the model precision meets the first preset condition, and taking the thinned neural network model as a second neural network model; and S205: obtaining the compression result of the to-be-compressed neural network model according to the second neural network model. That is, in this embodiment, the to-be-compressed neural network model may also be trained in conjunction with the training data when compressed, so as to obtain the compression result of the trained to-be-compressed model, and by introducing the thinning process of the neural network model in the training process, training performance of the obtained compression result of the to-be-compressed neural network model may be improved. In this embodiment, during the S201of acquiring training data, the training data may be acquired according to task information corresponding to the to-be-compressed neural network model, and the acquired training data may correspond to image data of an image recognition task, voice data of a voice recognition task, or the like. 
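A minimal sketch of the S202–S204 loop follows; `thin_fn`, `train_step` and `adjust` stand in for the actual thinning, training and parameter-adjustment procedures, and the first preset condition is modeled as a simple precision threshold — all of these names are hypothetical.

```python
def thinning_training(params, thin_fn, train_step, adjust, precision_threshold, max_iters=100):
    """S202-S204: thin the parameters, train the thinned model, and repeat
    (adjusting with the loss function value) until the model precision meets
    the first preset condition; the result is the second neural network model."""
    for _ in range(max_iters):
        thinned = thin_fn(params)              # S202: thinned neural network model
        loss, precision = train_step(thinned)  # S203: loss function value, precision
        if precision >= precision_threshold:   # first preset condition met
            return thinned                     # second neural network model
        params = adjust(params, loss)          # adjust the to-be-compressed model
    raise RuntimeError("model precision did not meet the first preset condition")
```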
In this embodiment, the process of the S202of thinning the parameters in the to-be-compressed neural network model according to the target value, the first bit width and the second bit width is the same as the process involved in the S104in the previous embodiment, and is not repeated herein. In this embodiment, in the S205, the second neural network model may be directly used as the compression result of the to-be-compressed neural network model. In addition, in this embodiment, during the S205of obtaining the compression result of the to-be-compressed neural network model according to the second neural network model, an adopted optional implementation may include: according to the second bit width, quantifying parameters which are not set to zero in the second neural network model to obtain a quantified neural network model; training the quantified neural network model using the training data to obtain a loss function value and model precision of the quantified neural network model; and in response to determining that the model precision does not meet a second preset condition, and after adjusting parameters of the second neural network model using the obtained loss function value, proceeding to the step of obtaining a quantified neural network model until the model precision meets the second preset condition, and taking the quantified neural network model as the compression result of the to-be-compressed neural network model. In addition, in this embodiment, during the S205of training the quantified neural network model using the training data, the quantified parameters may also be inversely quantified, and the quantified neural network model is trained using the inversely-quantified parameters. It may be understood that the first preset condition and the second preset condition in this embodiment may be preset at the input end. 
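The inverse quantification used while training the quantified model can be pictured as a quantify–dequantify round trip, so that training sees the quantification error; the max-abs scaling below is an assumption carried over from the earlier sketch, and the function name is hypothetical.

```python
def inverse_quantified(params, second_bit_width):
    """Quantify the parameters to the signed B-bit range and immediately
    inversely quantify them, returning the values the quantified model would
    effectively be trained with. Max-abs scaling is assumed."""
    q_max = 2 ** (second_bit_width - 1) - 1
    largest = max((abs(p) for p in params if p != 0), default=0)
    if largest == 0:
        return list(params)
    scale = largest / q_max
    return [round(p / scale) * scale for p in params]  # quantify, then invert
```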
That is, in this embodiment, when the to-be-compressed neural network model is trained, the parameters of the to-be-compressed neural network model are thinned and quantified in the training process, thereby compressing the to-be-compressed neural network model by combining thinning and quantification; and a compression process includes the training process, such that the compression result of the to-be-compressed neural network model obtained in this embodiment has the higher model precision. FIG.3is a schematic diagram according to a third embodiment of the present disclosure. As shown inFIG.3, an apparatus300for compressing a neural network model according to this embodiment includes an acquiring unit301configured to acquire a to-be-compressed neural network model; a determining unit302configured to determine a first bit width, a second bit width and a target thinning rate corresponding to the to-be-compressed neural network model; a processing unit303configured to obtain a target value according to the first bit width, the second bit width and the target thinning rate; and a compressing unit304configured to compress the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain a compression result of the to-be-compressed neural network model. When the acquiring unit301acquires the to-be-compressed neural network model, a neural network model input at an input end may be used as the to-be-compressed neural network model, or a neural network model selected on a network at the input end may be used as the to-be-compressed neural network model. In this embodiment, after the acquiring unit301acquires the to-be-compressed neural network model, the determining unit302determines the first bit width, the second bit width and the target thinning rate corresponding to the acquired to-be-compressed neural network model. 
The determining unit302may acquire a first bit width, a second bit width and a target thinning rate input or selected at the input end as the first bit width, the second bit width and the target thinning rate corresponding to the acquired to-be-compressed neural network model. In order to enable the compression result of the to-be-compressed neural network model to be better matched with a processor for running the to-be-compressed neural network model, when the determining unit302determines the first bit width corresponding to the to-be-compressed neural network model, an adopted optional implementation may include: determining a processor for running the acquired to-be-compressed neural network model; and taking a vector width of the determined processor as the first bit width corresponding to the to-be-compressed neural network model. In order to enable the compression result of the to-be-compressed neural network model to be better matched with an instruction set in the processor for running the to-be-compressed neural network model, when the determining unit302determines the second bit width corresponding to the to-be-compressed neural network model, an adopted optional implementation may include: determining the processor for running the acquired to-be-compressed neural network model; and determining the second bit width corresponding to the to-be-compressed neural network model according to a vector width of the instruction set in the determined processor. When the determining unit302determines the second bit width corresponding to the to-be-compressed neural network model according to the vector width of the instruction set in the determined processor, a vector width of an instruction supported by the instruction set may be directly used as the second bit width, or a vector width less than the vector width of the instruction supported by the instruction set may be used as the second bit width. 
In addition, when the determining unit302determines the target thinning rate corresponding to the to-be-compressed neural network model, an adopted optional implementation may include: acquiring attribute information of the to-be-compressed neural network model; and taking a thinning rate corresponding to the determined attribute information as the target thinning rate corresponding to the to-be-compressed neural network model. That is, the determining unit302may preset a corresponding relationship between the attribute information and the thinning rate, and then determine the target thinning rate according to the attribute information of the to-be-compressed neural network model, thus avoiding that the compression result of the to-be-compressed neural network model is affected by an inappropriate target thinning rate, and then improving accuracy of the determined target thinning rate. In this embodiment, after the determining unit302determines the first bit width, the second bit width and the target thinning rate corresponding to the to-be-compressed neural network model, the processing unit303obtains the target value according to the determined first bit width, second bit width and target thinning rate. The target value obtained by the processing unit303is used to thin parameters of the to-be-compressed neural network model. Specifically, when the processing unit303obtains the target value according to the determined first bit width, second bit width and target thinning rate, an adopted optional implementation may include: calculating a product between the second bit width and the target thinning rate; and taking a division result between the first bit width and the calculated product as the target value. 
In this embodiment, after the processing unit303obtains the target value, the compressing unit304compresses the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain the compression result of the to-be-compressed neural network model. Specifically, when the compressing unit304compresses the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain the compression result of the to-be-compressed neural network model, an adopted optional implementation may include: thinning the parameters in the to-be-compressed neural network model according to the target value, the first bit width and the second bit width to obtain a first neural network model; and obtaining the compression result of the to-be-compressed neural network model according to the first neural network model. When the compressing unit304thins the parameters in the to-be-compressed neural network model according to the target value, the first bit width and the second bit width to obtain the first neural network model, an adopted optional implementation may include: taking continuous parameters with a number corresponding to a number of the target values in the to-be-compressed neural network model as a parameter unit; sorting the parameters contained in the parameter unit according to an ascending order of absolute values; obtaining a zero setting quantity according to the first bit width, the second bit width and the target value; and setting parameters in the parameter unit of the to-be-compressed neural network model before the zero setting quantity to zero to obtain the first neural network model. 
That is, with the method of setting the parameters with the smaller absolute values in the to-be-compressed neural network model to zero, the compressing unit304compresses the to-be-compressed neural network model, and since the zero setting quantity is determined by combining the first bit width, the second bit width and the target value, accuracy of parameter thinning may be improved, and the compression result of the to-be-compressed neural network model is ensured to have higher precision. After setting the parameters in each parameter unit of the to-be-compressed neural network model before the zero setting quantity to zero, the compressing unit304may restore an order of the parameters in each parameter unit; or generate a mask sequence corresponding to each parameter unit, the mask sequence includes 0/1 vectors with a number corresponding to the number of the target values, and the 0/1 vector is used to represent whether a parameter at a certain location is zero. After obtaining the first neural network model, the compressing unit304may use the obtained first neural network model as the compression result of the to-be-compressed neural network model. In order to further improve a compression effect of the neural network model, when the compressing unit304obtains the compression result of the to-be-compressed neural network model according to the first neural network model, an adopted optional implementation may include: according to the second bit width, quantifying parameters which are not set to zero in the first neural network model; and taking the neural network model after quantification as the compression result of the to-be-compressed neural network model. 
When the compressing unit304quantifies the parameters which are not set to zero in the first neural network model according to the second bit width, an adopted optional implementation may include: determining a value range according to the second bit width; and representing the parameters which are not set to zero in the first neural network model as values in the determined value range. That is, after thinning the parameters in the neural network model, the compressing unit304may further quantify parameters which are not pruned in the neural network model; that is, the compression result of the to-be-compressed neural network model is obtained by combining thinning and quantification, thus further compressing a volume of the to-be-compressed neural network model. In addition, when the compressing unit304compresses the to-be-compressed neural network model using the target value, the first bit width and the second bit width to obtain the compression result of the to-be-compressed neural network model, an adopted method may include: acquiring training data; thinning the parameters in the to-be-compressed neural network model according to the target value, the first bit width and the second bit width to obtain a thinned neural network model; training the thinned neural network model using the training data to obtain a loss function value and model precision of the thinned neural network model; in response to determining that the model precision does not meet a first preset condition, and after adjusting the parameters of the to-be-compressed neural network model using the loss function value, proceeding to the step of obtaining a thinned neural network model until the model precision meets the first preset condition, and taking the thinned neural network model as a second neural network model; and obtaining the compression result of the to-be-compressed neural network model according to the second neural network model. 
That is, the compressing unit304may train the to-be-compressed neural network model in conjunction with the training data when compressing the to-be-compressed neural network model, so as to obtain the compression result of the trained to-be-compressed model, and by introducing the thinning process of the neural network model in the training process, training performance of the obtained compression result of the to-be-compressed neural network model may be improved. When acquiring the training data, the compressing unit304may acquire the training data according to task information corresponding to the to-be-compressed neural network model, and the acquired training data may correspond to image data of an image recognition task, voice data of a voice recognition task, or the like. The compressing unit304may directly use the second neural network model as the compression result of the to-be-compressed neural network model. In addition, when the compressing unit304obtains the compression result of the to-be-compressed neural network model according to the second neural network model, an adopted optional implementation may include: according to the second bit width, quantifying parameters which are not set to zero in the second neural network model to obtain a quantified neural network model; training the quantified neural network model using the training data to obtain a loss function value and model precision of the quantified neural network model; and in response to determining that the model precision does not meet a second preset condition, and after adjusting parameters of the second neural network model using the obtained loss function value, proceeding to the step of obtaining a quantified neural network model until the model precision meets the second preset condition, and taking the quantified neural network model as the compression result of the to-be-compressed neural network model. 
In addition, when training the quantified neural network model using the training data, the compressing unit304may inversely quantify the quantified parameters, and train the quantified neural network model using the inversely-quantified parameters. That is, when training the to-be-compressed neural network model, the compressing unit304may thin and quantify the parameters of the to-be-compressed neural network model in the training process, thereby compressing the to-be-compressed neural network model by combining thinning and quantification; and a compression process includes the training process, such that the compression result of the to-be-compressed neural network model obtained in this embodiment has the higher model precision. FIG.4is a schematic diagram according to a fourth embodiment of the present disclosure.FIG.4shows a flow chart of calculation of a fully-connected layer of the to-be-compressed neural network model in the above embodiment, and the fully-connected layer has input vector X and parameter unit Y; an N-bit mask sequence corresponding to the parameter unit Y is loaded; parameters at corresponding positions in the parameter unit Y are set to zero according to the loaded N-bit mask sequence, and non-zero parameters are unfolded to obtain parameter vector Y; the input vector X is loaded; and vector inner product calculation X*Y is performed. If the input vector X has a value range [−2^7+1, 2^7−1], the second bit width is B, and the parameter vector Y has a value range [−2^(B−1)+1, 2^(B−1)−1], when the fully-connected layer performs the vector inner product calculation, the maximum value of a vector inner product between X and Y is 2^(B+6).
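The FIG.4 flow — loading the N-bit mask sequence, zeroing the masked positions of parameter unit Y, and taking the inner product with input vector X — together with the 16-bit accumulation bound it motivates, can be sketched as follows (illustrative only; the function names are hypothetical):

```python
def masked_inner_product(x, y, mask):
    """Set parameters of Y to zero where the 0/1 mask is 0, then compute the
    vector inner product X*Y of the fully-connected layer."""
    masked_y = [yi if m == 1 else 0 for yi, m in zip(y, mask)]
    return sum(xi * yi for xi, yi in zip(x, masked_y))

def max_accumulations_16bit(second_bit_width):
    """Each product term is bounded by 2**7 * 2**(B - 1) = 2**(B + 6), so a
    16-bit signed accumulator holds at most 2**15 // 2**(B + 6) = 2**(9 - B)
    accumulation processes."""
    return 2 ** 15 // 2 ** (second_bit_width + 6)
```

For B = 8 this gives at most two accumulations before the partial sum must be widened beyond 16 bits.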
If a 16-bit signed value is used for storage, 2^15/2^(B+6)=2^(9−B) accumulation processes may be performed at most; when B is 8, at most two accumulation processes may be performed before the result overflows to a 32-bit signed value for storage (as shown on the right ofFIG.4); with a decrease of B, the number of accumulation times becomes larger, and therefore, quantification of the parameters using the second bit width in the above embodiment may reduce a requirement for a storage space, thereby compressing the to-be-compressed neural network model. FIG.5is a schematic diagram according to a fifth embodiment of the present disclosure.FIG.5shows a flow chart of obtaining a compression result of a to-be-compressed neural network model by means of training: firstly, acquiring the to-be-compressed neural network model, where the to-be-compressed neural network model may be a neural network model obtained through a common training step (i.e., training directly using acquired training data); then, performing a thinning training operation on the to-be-compressed neural network model, which specifically includes: thinning the to-be-compressed neural network model to obtain a thinned neural network model, training the thinned neural network model using the acquired training data, updating the model, repeating the above steps until model precision of the thinned neural network model reaches an expected value, and outputting the thinned neural network model; and finally, performing a quantitative training operation on the thinned neural network model, which specifically includes: quantifying parameters which are not set to zero in the thinned neural network model to obtain a quantified neural network model, training the quantified neural network model using training data, updating the model, repeating the above steps until model precision of the quantified neural network model reaches an expected value, and outputting the quantified neural network model as the compression result of the to-be-compressed neural network
model. In the technical solution of the present disclosure, the acquisition, storage and application of involved user personal information are in compliance with relevant laws and regulations, and do not violate public order and good customs. According to the embodiment of the present disclosure, there are also provided an electronic device, a readable storage medium and a computer program product. FIG.6is a block diagram of an electronic device configured to implement a method for compressing a neural network model according to the embodiment of the present disclosure. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementation of the present disclosure described and/or claimed herein. As shown inFIG.6, the device600includes a computing unit601which may perform various appropriate actions and processing operations according to a computer program stored in a read only memory (ROM)602or a computer program loaded from a storage unit608into a random access memory (RAM)603. Various programs and data necessary for the operation of the device600may be also stored in the RAM603. The computing unit601, the ROM602, and the RAM603are connected with one another through a bus604. An input/output (I/O) interface605is also connected to the bus604.
The plural components in the device600are connected to the I/O interface605, and include: an input unit606, such as a keyboard, a mouse, or the like; an output unit607, such as various types of displays, speakers, or the like; the storage unit608, such as a magnetic disk, an optical disk, or the like; and a communication unit609, such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit609allows the device600to exchange information/data with other devices through a computer network, such as the Internet, and/or various telecommunication networks. The computing unit601may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit601include, but are not limited to, a central processing unit (CPU), a graphic processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, or the like. The computing unit601performs the methods and processing operations described above, such as the method for compressing a neural network model. For example, in some embodiments, the method for compressing a neural network model may be implemented as a computer software program tangibly contained in a machine readable medium, such as the storage unit608. In some embodiments, part or all of the computer program may be loaded and/or installed into the device600via the ROM602and/or the communication unit609. When the computer program is loaded into the RAM603and executed by the computing unit601, one or more steps of the method for compressing a neural network model described above may be performed. 
Alternatively, in other embodiments, the computing unit601may be configured to perform the method for compressing a neural network model by any other suitable means (for example, by means of firmware). Various implementations of the systems and technologies described herein may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chips (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. The systems and technologies may be implemented in one or more computer programs which are executable and/or interpretable on a programmable system including at least one programmable processor, and the programmable processor may be special or general, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input apparatus, and at least one output apparatus. Program codes for implementing the method according to the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or a controller of a general purpose computer, a special purpose computer, or other programmable data processing apparatuses, such that the program code, when executed by the processor or the controller, causes functions/operations specified in the flowchart and/or the block diagram to be implemented. The program code may be executed entirely on a machine, partly on a machine, partly on a machine as a stand-alone software package and partly on a remote machine, or entirely on a remote machine or a server. In the context of the present disclosure, the machine readable medium may be a tangible medium which may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
The machine readable medium may be a machine readable signal medium or a machine readable storage medium. The machine readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optical fiber, a portable compact disc read only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. To provide interaction with a user, the systems and technologies described here may be implemented on a computer having: a display apparatus (for example, a cathode ray tube (CRT) or liquid crystal display (LCD) monitor) for displaying information to a user; and a keyboard and a pointing apparatus (for example, a mouse or a trackball) by which a user may provide input for the computer. Other kinds of apparatuses may also be used to provide interaction with a user; for example, feedback provided for a user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback); and input from a user may be received in any form (including acoustic, speech or tactile input). 
The systems and technologies described here may be implemented in a computing system (for example, as a data server) which includes a back-end component, or a computing system (for example, an application server) which includes a middleware component, or a computing system (for example, a user computer having a graphical user interface or a web browser through which a user may interact with an implementation of the systems and technologies described here) which includes a front-end component, or a computing system which includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected through any form or medium of digital data communication (for example, a communication network). Examples of the communication network include: a local area network (LAN), a wide area network (WAN) and the Internet. A computer system may include a client and a server. Generally, the client and the server are remote from each other and interact through the communication network. The relationship between the client and the server is generated by virtue of computer programs which run on respective computers and have a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so as to overcome the defects of high management difficulty and weak service expansibility in conventional physical host and virtual private server (VPS) service. The server may also be a server of a distributed system, or a server incorporating a blockchain. It should be understood that various forms of the flows shown above may be used and reordered, and steps may be added or deleted. 
For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solution disclosed in the present disclosure may be achieved. The above-mentioned implementations are not intended to limit the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent substitution and improvement made within the spirit and principle of the present disclosure all should be included in the extent of protection of the present disclosure. | 42,129 |
11861499 | DESCRIPTION OF EMBODIMENTS The following describes technical solutions of the embodiments in this application with reference to the accompanying drawings. The embodiments of this application can be applied to scenarios of various intelligent terminal devices with weak computing capabilities, such as a driverless car, a robot, and intelligent terminal cognition. In the embodiments of this application, a terminal-side device may also be referred to as user equipment (UE), a mobile station (MS), a mobile terminal, or the like. For example, the terminal-side device may be a mobile phone (or referred to as a “cellular” phone) or a computer with a mobile terminal, for example, may be a portable, pocket-sized, handheld, computer built-in, or in-vehicle mobile apparatus. In the embodiments of this application, a cloud-side device may be a server or a server cluster, and the cloud-side device may also be referred to as a computing node or a cloud-side computing cluster. A terminal-cloud collaboration system100provided in an embodiment of this application is first described below with reference toFIG.1, so as to help understand and describe a method for data processing provided in an embodiment of this application. As shown inFIG.1, the terminal-cloud collaboration system100includes a terminal-side device110and a cloud-side device120. The terminal-side device110includes a neural network basic platform111and an application program112(FIG.1schematically shows application programs1,2, . . . , and n). The neural network basic platform111includes two decoupled components: a neural network architecture component111aand a neural network parameter component111b. The application program112is implemented through encapsulation based on the neural network basic platform111running on the terminal-side device110, and the application program112is configured to provide a cognitive computing function for a user.
Both the neural network architecture component 111a and the neural network parameter component 111b may be updated and replaced. The neural network architecture component 111a is configured to support function expansion of the application program 112, for example, enhance an identification capability from identifying only a car to identifying a brand, a type, or the like of the car. The neural network parameter component 111b is configured to support updating of accuracy and performance of the application program 112, for example, obtain higher accuracy, obtain higher computing efficiency, or obtain lower energy consumption and a lower storage requirement during running of the application program. In other words, the neural network architecture component 111a and the neural network parameter component 111b need to be updated to implement function expansion of the application program 112 on the terminal-side device 110. Only the neural network parameter component 111b may be updated if only the accuracy and performance of the application program 112 on the terminal-side device 110 need to be improved. The terminal-side device 110 is configured to: receive a cognitive computing request, process the cognitive computing request by using the neural network basic platform 111 running on the terminal-side device 110, and return a processing result. The terminal-side device 110 is further configured to send a request message to the cloud-side device 120, so as to request to update the architecture component and/or the parameter component on the neural network basic platform. The cloud-side device 120 includes a neural network training and trimming platform 121, and the neural network training and trimming platform 121 includes a neural network architecture component updating and trimming module 121a and a neural network parameter component trimming module 121b.
The module 121a is configured to update and trim a neural network architecture running on the terminal-side device 110, and the module 121b is configured to trim a parameter obtained by training a neural network model. The cloud-side device 120 is configured to: receive the request message from the terminal-side device 110, obtain, by using the neural network training and trimming platform 121, a neural network model required by the terminal-side device, and send a trimmed neural network component to the terminal-side device 110. The neural network model refers to a program and data used to perform cognitive computing that are obtained by training a large amount of tagged data. The neural network model includes a neural network architecture component and a neural network parameter component. The neural network architecture component refers to a network related to a neural network algorithm and a hierarchical structure of the network that are in the neural network model, that is, the foregoing program in the neural network model that is used to perform cognitive computing. The neural network parameter component refers to a large quantity of parameters obtained when the neural network model is trained, and is used as a value of a neuron in a neural network architecture, that is, the foregoing data in the neural network model that is used to perform cognitive computing. It should be noted that, in some embodiments, the following description may be given: The cloud-side device delivers a neural network model (for example, a second neural network model shown in FIG. 2) to the terminal-side device. The neural network model herein may include a neural network architecture component and a neural network parameter component, or the neural network model may include only a neural network parameter component. FIG. 2 is a schematic flowchart of a method 200 for data processing according to an embodiment of this application.
As shown in FIG. 2, the method 200 is performed by a terminal-side device and a cloud-side device. For example, the terminal-side device is the terminal-side device 110 shown in FIG. 1, and the cloud-side device is the cloud-side device 120 shown in FIG. 1. The method 200 includes the following operations. Operation 210. The terminal-side device sends a request message to the cloud-side device, where the request message is used to request a neural network model used to process a cognitive computing task. Specifically, the terminal-side device receives a cognitive computing request sent by a user, and the cognitive computing request is used to request to process the cognitive computing task. Operation 220. The cloud-side device determines, based on the request message, a first neural network model used to process the cognitive computing task. Specifically, the request message may carry information used to indicate the first neural network model. Optionally, in some embodiments, the request message carries an identifier used to indicate the first neural network model. In operation 220, the cloud-side device determines the first neural network model based on the identifier. For example, the cloud-side device may deliver in advance, to the terminal-side device, a correspondence between a cognitive computing function and an identifier of a neural network model with the cognitive computing function. The terminal-side device may directly report a corresponding identifier to the cloud-side device when the terminal-side device requires a neural network model with a cognitive computing function. Optionally, in some embodiments, the request message carries function information, and the function information is used to describe a function of processing the cognitive computing task. In operation 220, the cloud-side device determines the first neural network model based on the function information.
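The two ways of identifying the requested model in operations 210 and 220 can be sketched as follows. This is an illustrative sketch only; the class, field, and catalog names are assumptions and not part of the patent's specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelRequest:
    """Hypothetical request message of operation 210: the terminal names the
    model either by a pre-delivered identifier or by function information."""
    model_id: Optional[str] = None        # identifier delivered in advance by the cloud
    function_info: Optional[str] = None   # e.g. "image-classification"

# Hypothetical cloud-side catalog: cognitive computing function -> model identifier.
CATALOG = {"image-classification": "model-img-1", "speech-recognition": "model-sp-2"}

def resolve_first_model(req: ModelRequest) -> str:
    """Operation 220: determine the first neural network model from the request."""
    if req.model_id is not None:          # the request carries an identifier
        return req.model_id
    return CATALOG[req.function_info]     # the request carries function information
```

For example, a terminal that has previously received the catalog can send only `model_id`, while a terminal without the catalog describes the wanted function and lets the cloud resolve it.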
It should be understood that different neural network models correspond to different cognitive computing functions. The cloud-side device may obtain a neural network model with a corresponding cognitive computing function based on the function information. Operation 230. The cloud-side device trims the first neural network model to obtain a second neural network model, where a hardware resource required when the second neural network model runs is within an available hardware resource capability range of the terminal-side device. An available hardware resource capability of the terminal-side device is a computing capability and/or a storage capability of the terminal-side device. The computing capability is related to CPU performance of the terminal-side device, and the storage capability is related to storage performance of the terminal-side device. In one embodiment, the cloud-side device may determine the available hardware resource capability range of the terminal-side device based on hardware resource information reported by the terminal-side device. For example, the hardware resource information may include CPU performance information and storage performance information of the terminal-side device. It should be understood that the cloud-side device may alternatively infer the available hardware resource capability of the terminal-side device based on an empirical value. This is not limited in this embodiment of this application. The second neural network model in this embodiment of this application is obtained by trimming the first neural network model. Therefore, the second neural network model also has the function of processing the cognitive computing task.
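The constraint of operation 230, that the second model's runtime resources fall within the terminal's capability range, can be illustrated with a minimal sketch. The cost/capability dictionary keys, the halving factor, and the round limit are assumptions for illustration; the patent does not specify how many trimming rounds the cloud applies.

```python
def trim_until_fits(model_cost, capability, factor=0.5, max_rounds=10):
    """Illustrative only: keep trimming the first model's compute and storage
    cost until both fit the terminal's reported hardware capability."""
    cost = dict(model_cost)
    for _ in range(max_rounds):
        if cost["flops"] <= capability["flops"] and cost["bytes"] <= capability["bytes"]:
            return cost                                  # the "second model" fits
        cost = {k: v * factor for k, v in cost.items()}  # one trimming round
    raise RuntimeError("cannot trim model into the capability range")
```

A cloud-side model needing 8 GFLOPs and 400 MB would, under this sketch, be trimmed three times before it fits a terminal reporting 1 GFLOPs and 100 MB.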
Optionally, in some embodiments, a computation amount of the second neural network model is less than a computation amount of the first neural network model, and a required storage capacity of the second neural network model is less than a required storage capacity of the first neural network model. In this case, the second neural network model may be understood as a reduced model of the first neural network model. A computation amount of a neural network model mentioned in this embodiment of this application refers to a data amount generated when the neural network model is used to process data, and a required storage capacity of the neural network model refers to storage space required for storing the neural network model. Operation 240. The cloud-side device sends the second neural network model to the terminal-side device. Operation 250. The terminal-side device processes the cognitive computing task based on the second neural network model. In this embodiment of this application, the terminal-side device requests, from the cloud-side device, the neural network model used to process the cognitive computing task, and the cloud-side device sends a trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device. In this way, a neural network model that originally runs on the cloud-side device with a strong computing capability can also be applicable to the terminal-side device with a relatively weak computing capability, and the terminal-side device can process the cognitive computing task. Therefore, this embodiment of this application can improve performance of processing a neural network-related application by the terminal-side device, and help enhance expansion of an intelligent application capability of the terminal-side device.
Optionally, in an embodiment, in operation 250, the terminal-side device may directly process the cognitive computing task based on the second neural network model. For example, the second neural network model is a complete application program. The terminal-side device may directly process the corresponding cognitive computing task after downloading the second neural network model from the cloud-side device to the terminal-side device. Optionally, in an embodiment, the terminal-side device includes a neural network basic platform (for example, the neural network basic platform 111 shown in FIG. 1), the neural network basic platform includes a neural network architecture component (for example, the neural network architecture component 111a shown in FIG. 1) and a neural network parameter component (for example, the neural network parameter component 111b shown in FIG. 1), and the neural network architecture component is decoupled from the neural network parameter component. In operation 250, the terminal-side device updates the neural network basic platform based on the second neural network model, and then processes the cognitive computing task based on an updated neural network basic platform. That the terminal-side device updates the neural network basic platform based on the second neural network model includes the following cases: Case 1: The second neural network model includes only a corresponding parameter component. In this case, the terminal-side device updates only the neural network parameter component on the neural network basic platform based on the second neural network model, and an update to the neural network parameter component has no impact on the neural network architecture component on the neural network basic platform.
Case 2: The second neural network model includes corresponding architecture and parameter components. In this case, the terminal-side device updates both the neural network parameter component on the neural network basic platform and the neural network architecture component on the neural network basic platform based on the second neural network model, but updates to the neural network parameter component and the neural network architecture component are independent of each other and do not affect each other. In this embodiment of this application, the neural network basic platform on the terminal-side device includes the neural network architecture component and the neural network parameter component, and the neural network architecture component is decoupled from the neural network parameter component. In other words, an update to the neural network parameter component and an update to the neural network architecture component are independent of each other and do not affect each other. This can help extend an intelligent application function of the terminal-side device. It should be understood that, in the embodiment in which the terminal-side device includes the neural network basic platform, the request message sent by the terminal-side device to the cloud-side device may be referred to as a neural network component update request message. Optionally, in some embodiments, the terminal-side device sends the request message to the cloud-side device under any one of the following trigger conditions: Trigger condition 1: The terminal-side device lacks a neural network model (or an application program) used to process the cognitive computing task.
Trigger condition 2: The terminal-side device includes a neural network model (or an application program) capable of processing the cognitive computing task, but cognitive accuracy of the neural network model does not meet cognitive accuracy tolerance, where the cognitive accuracy tolerance represents expected accuracy of processing the cognitive computing task by the terminal-side device. Trigger condition 3: The terminal-side device includes a neural network model (or an application program) capable of processing the cognitive computing task, but a hardware resource required when the neural network model runs exceeds an available hardware resource capability range of the terminal-side device. In one embodiment, when the terminal-side device includes the neural network basic platform, if the terminal-side device sends the request message to the cloud-side device based on trigger condition 1, the request message is specifically used to request to update the neural network architecture component and the neural network parameter component on the neural network basic platform; if the terminal-side device sends the request message to the cloud-side device based on trigger condition 2, the request message is specifically used to request to update the neural network parameter component on the neural network basic platform; or if the terminal-side device sends the request message to the cloud-side device based on trigger condition 3, the request message is specifically used to request to update the neural network architecture component and the neural network parameter component on the neural network basic platform. 
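The mapping from the three trigger conditions to the components the request message asks the cloud to update can be sketched as a small decision function. The function and flag names are assumptions for illustration.

```python
def components_to_update(has_model, accuracy_ok, fits_hardware):
    """Illustrative mapping of trigger conditions 1-3 to the components the
    neural network component update request message asks the cloud to update."""
    if not has_model:                          # trigger condition 1
        return {"architecture", "parameters"}
    if not accuracy_ok:                        # trigger condition 2
        return {"parameters"}
    if not fits_hardware:                      # trigger condition 3
        return {"architecture", "parameters"}
    return set()                               # no trigger, no request needed
```

Note that only trigger condition 2 limits the update to the parameter component; the other two conditions request both components, matching the decoupled platform design.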
In this embodiment of this application, under any one of the foregoing conditions, the terminal-side device actively requests, from the cloud-side device, the neural network model used to process the cognitive computing task, so as to effectively achieve the objective that the terminal-side device has a function of processing a neural network-related application, and help enhance expansion of an intelligent application capability of the terminal-side device. Optionally, in some embodiments, the request message sent by the terminal-side device to the cloud-side device carries indication information used to indicate the available hardware resource capability of the terminal-side device. Specifically, in operation 230, the cloud-side device trims the first neural network model based on the available hardware resource capability of the terminal-side device, so as to obtain the second neural network model, where a hardware resource required when the second neural network model runs is within the available hardware resource capability range. In one embodiment, the terminal-side device may determine the available hardware resource capability of the terminal-side device based on a change status of a hardware resource required when the neural network basic platform runs on the terminal-side device.
Specifically, a computing capability C_CPU and a storage capability C_MEM required when the neural network basic platform runs on the terminal-side device are respectively measured based on the following formulas (1) and (2):

C_CPU = (C_CPU(+NNC) − C_CPU(−NNC)) / C_CPU(−NNC)   (1)

C_MEM = (C_MEM(+NNC) − C_MEM(−NNC)) / C_MEM(−NNC)   (2)

C_CPU represents a current computing capability of the terminal-side device (for example, is represented by CPU usage of the terminal-side device), C_MEM represents a current storage capability of the terminal-side device (for example, is represented by memory usage of the terminal-side device), +NNC indicates that the neural network basic platform runs, and −NNC indicates that the neural network basic platform does not run. In one embodiment, a plurality of different thresholds or change ranges may be set to measure a computing capability and a storage capability of the terminal-side device. When the computing capability or the storage capability reaches a threshold or falls within a change range, a state of the computing capability or the storage capability of the terminal-side device is entered, and then the state is used as a parameter of a trigger condition, and is used to indicate degrees of trimming a cloud-side neural network architecture and a cloud-side neural network model. As shown in Table 1, when trimming the neural network model, the cloud-side device selects a neural network model with suitable accuracy based on the computing capability and the storage capability of the terminal-side device.
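Formulas (1) and (2) share the same shape, a relative increase of resource usage caused by running the platform, so one helper suffices to compute both. The function name and the sample usage figures below are illustrative assumptions.

```python
def relative_load(with_nnc, without_nnc):
    """Relative extra load the neural network basic platform adds, per
    formulas (1) and (2): (C(+NNC) - C(-NNC)) / C(-NNC).
    `with_nnc` / `without_nnc` are usage measurements (CPU or memory)
    taken while the platform is / is not running."""
    return (with_nnc - without_nnc) / without_nnc

# Hypothetical measurements: CPU usage 60% with the platform, 40% without.
c_cpu = relative_load(0.60, 0.40)   # -> 0.5, i.e. a 50% relative increase
```

The same call with memory-usage samples yields C_MEM; the terminal can then compare either value against the configured thresholds or change ranges.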
TABLE 1

Storage performance    Quantization    Computing performance    Dimension reduction
Strong                 One time        Strong                   One time
Moderate               Five times      Moderate                 Five times
Weak                   10 times        Weak                     10 times

In one embodiment of this application, the cloud-side device trims, based on the available hardware resource capability of the terminal-side device, the neural network model requested by the terminal-side device, and delivers the trimmed neural network model to the terminal-side device, so that the hardware resource required when the neural network model delivered to the terminal-side device runs is within the available hardware resource capability range of the terminal-side device, and the terminal-side device can invoke, based on a hardware resource of the terminal-side device, the neural network model received from the cloud-side device to process the cognitive computing task. Therefore, in this embodiment of this application, performance of processing a neural network-related application by the terminal-side device can be improved. Optionally, in some embodiments, the request message sent by the terminal-side device to the cloud-side device carries indication information used to indicate cognitive accuracy tolerance, and the cognitive accuracy tolerance represents expected accuracy of processing the cognitive computing task by the terminal-side device. Specifically, in operation 230, the cloud-side device trims, based on the cognitive accuracy tolerance, the first neural network model to obtain the second neural network model, so that accuracy of processing the cognitive computing task by using the second neural network model meets the cognitive accuracy tolerance.
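Table 1 amounts to two independent lookups: the storage capability level selects the quantization factor, and the computing capability level selects the dimension-reduction factor. A minimal sketch (the level keys and function name are assumptions):

```python
# Table 1 as two lookup tables: weaker terminal capability -> heavier trimming.
QUANTIZATION = {"strong": 1, "moderate": 5, "weak": 10}         # from storage capability
DIMENSION_REDUCTION = {"strong": 1, "moderate": 5, "weak": 10}  # from computing capability

def trim_factors(storage_level, computing_level):
    """Return (quantization factor, dimension-reduction factor) for a terminal
    whose storage and computing capabilities fall in the given levels."""
    return QUANTIZATION[storage_level], DIMENSION_REDUCTION[computing_level]
```

For instance, a terminal with weak storage but moderate compute would receive a model quantized 10 times but dimension-reduced only five times.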
In one embodiment, the terminal-side device collects and perceives the accuracy that is of a cognitive computing result of the cognitive computing task and that is expected by an installed application program (APP), that is, the cognitive accuracy tolerance, then adds the cognitive accuracy tolerance to the request message, and reports the request message to the cloud-side device, so that the cloud-side device delivers a neural network model that meets the cognitive accuracy tolerance to the terminal-side device. The cognitive accuracy tolerance is related to an input data amount, a training time, and a neural network model compression ratio that are involved in a process of training the neural network model by the cloud-side device. The cognitive accuracy tolerance may be defined as a continuous function of the input data amount, the training time, and the neural network model compression ratio: Cognitive accuracy tolerance = f(training time, model compression ratio, input data amount). The training time may be represented by a quantity of times of iteration in the training process. The cognitive accuracy tolerance is inversely proportional to the training time; the cognitive accuracy tolerance is inversely proportional to the input data amount; and the cognitive accuracy tolerance is proportional to the model compression ratio. In other words, a larger input data amount or a longer training time indicates lower cognitive accuracy tolerance, and a larger model compression ratio indicates higher cognitive accuracy tolerance. In one embodiment of this application, when requesting, from the cloud-side device, the neural network model used to process the cognitive computing task, the terminal-side device further reports the indication information used to indicate the cognitive accuracy tolerance, so that the neural network model delivered by the cloud-side device can meet a requirement for the cognitive accuracy tolerance.
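The patent specifies only the qualitative proportionalities of f, not its closed form. The following is one concrete function consistent with those proportionalities, purely an illustrative assumption:

```python
def cognitive_accuracy_tolerance(training_time, data_amount, compression_ratio, k=1.0):
    """Illustrative stand-in for f(training time, compression ratio, data amount):
    inversely proportional to training time and input data amount,
    proportional to the model compression ratio. `k` is an arbitrary scale."""
    return k * compression_ratio / (training_time * data_amount)
```

Any function with the same monotonic behavior would serve; the assertions below check only the stated proportionalities, not particular values.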
Therefore, in this embodiment of this application, the terminal-side device can invoke, based on a hardware resource of the terminal-side device, the neural network model received from the cloud-side device, to process the cognitive computing task; in addition, accuracy of processing the cognitive computing task can be ensured, so that the performance of processing the neural network-related application by the terminal-side device can be further improved. It should be understood that higher accuracy of processing the cognitive computing task by using the neural network model correspondingly indicates a larger computation amount and a larger required storage capacity. In this embodiment of this application, accuracy of processing the cognitive computing task by using the second neural network model delivered by the cloud-side device to the terminal-side device is kept consistent with the accuracy corresponding to the cognitive accuracy tolerance, so as to reduce a computation amount and a required storage capacity of the second neural network model to a relatively large extent. In other words, the accuracy of processing the cognitive computing task by using the second neural network model is not much higher than the accuracy corresponding to the cognitive accuracy tolerance. For example, a pre-installed application program provided by a neural network cognitive computing platform for image classification is trained by using 1000 categories of ImageNet datasets by default, but only 20 of the 1000 categories need to be identified in an application scenario for the terminal-side device. In this case, because the default neural network cognitive computing platform provides an excessive quantity of functions, an architecture of the default neural network model integrated on the neural network cognitive computing platform is relatively complex, and a computation amount of the default neural network model is relatively large.
Consequently, computing resources and storage resources of the terminal-side device are wasted when the neural network cognitive computing platform runs. In an actual use process, the terminal-side device determines the most frequently identified categories, uses the first 20 most frequently identified categories as parameters, adds these parameters to the request message, and sends the request message to the cloud-side device. The cloud-side device trims a neural network architecture component based on the accuracy requirement of the terminal-side device, trims a neural network parameter component obtained through training, and sends a corresponding neural network component to the terminal-side device after completing trimming. This can effectively avoid wasting the computing resources and the storage resources of the terminal-side device. Therefore, in one embodiment of this application, hardware resources required for the second neural network model are reduced to a relatively large extent on the premise that the accuracy of processing the cognitive computing task by using the second neural network model meets the cognitive accuracy tolerance of the terminal-side device, so as to reduce load of the hardware resources required when the neural network model runs on the terminal-side device. It should be understood that, in the foregoing embodiment in which image classification is used as an example, if there is a new requirement for an application scenario after the terminal-side device reduces a quantity of categories for the neural network cognitive computing platform to 20, for example, a quantity of to-be-identified categories is increased to 21, the terminal-side device re-submits a perceived accuracy requirement to the cloud-side device, to trigger trimming and training of a neural network architecture and parameter of the cloud-side device and a real-time update to the terminal-side device.
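The step in which the terminal determines its first 20 most frequently identified categories can be sketched with a frequency count over the identification history. The function name and the history format are assumptions for illustration.

```python
from collections import Counter

def top_categories(history, n=20):
    """Illustrative sketch: count how often each category was identified and
    return the n most frequent ones, to be reported as trimming parameters."""
    return [cat for cat, _ in Counter(history).most_common(n)]
```

The returned list is what the terminal would add to the request message so the cloud can trim the classifier to exactly those categories.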
To help a person skilled in the art better understand this embodiment of this application, a specific method in which the cloud-side device trims the first neural network model to obtain the second neural network model is described below with reference to FIG. 3 to FIG. 7. Optionally, in an embodiment, operation 230 in which the cloud-side device trims the first neural network model to obtain a second neural network model includes: trimming, by the cloud-side device, a parameter component of the first neural network model to obtain the second neural network model, where a required storage capacity of a parameter component of the second neural network model is less than a required storage capacity of the parameter component of the first neural network model. In one embodiment, the cloud-side device first trains the first neural network model to obtain the parameter component (for example, a weight parameter component) of the first neural network model, and then trims the parameter component, for example, clusters weight parameter matrices of the first neural network model, so that a storage capacity required by a trimmed parameter component is less than a storage capacity required by the untrimmed parameter component. The second neural network model is formed after the parameter component of the first neural network model is trimmed. In other words, in this implementation, the architecture components of the second neural network model and the first neural network model are the same, the parameter components of the second neural network model and the first neural network model are different, and the storage capacity required by the parameter component of the second neural network model is less than the storage capacity required by the parameter component of the first neural network model. In one embodiment of this application, the operation of trimming a parameter component of a neural network model is performed on a neural network model obtained after training is completed.
It should be understood that a process of training the neural network model is a process of obtaining the parameter component of the neural network model. Therefore, trimming the parameter component of the neural network model may mainly be trimming the weight parameter matrix of the neural network model. A purpose of trimming the parameter component of the neural network model is to reduce a storage capacity occupied by the neural network model, and to reduce a computation amount and a required storage capacity of the neural network model in a running process. In one embodiment, a method for trimming the parameter component of the neural network model may include the following operations: Operation 1: Classify a row vector in the weight parameter matrix of the neural network model as a sub-vector, convert the sub-vector into a group of codewords by using a sub-vector quantization method, and group the codewords obtained by quantizing all sub-vectors to form a codebook. Operation 2: Cluster all weight parameters of the neural network model, and use a codeword to approximate all weight parameters of each type to form a codebook, where the codeword is a shared parameter for each type of weight parameters; that is, convert the weight parameter matrix into parameter-to-codebook location mapping and the codebook. In one embodiment, FIG. 3 to FIG. 6 are schematic diagrams of operations of trimming the parameter component of the neural network model. As shown in FIG. 3, it is assumed that a weight parameter matrix obtained by training the neural network model is the s*t weight parameter matrix shown in FIG. 3. FIG. 4 is a schematic diagram of classifying row vectors in the weight parameter matrix shown in FIG. 3 as sub-vectors. Specifically, the s*t weight parameter matrix shown in FIG. 3 is decomposed into s/m m*t sub-vectors by using a row vector as a unit, and each sub-vector W_i includes m rows. FIG. 5 is a schematic diagram of clustering all weight parameters.
Specifically, all sub-vectors W_i obtained in the operation shown in FIG. 4 are clustered in a manner of K-means or the like, and are classified into K_i types based on similarity of values to obtain m*K_i sub-vectors, that is, W_J^i. Each sub-vector has K_i column vectors in total, and a value of each column vector is an approximate value of all column vectors of a same type, for example, an intermediate value of all the column vectors. FIG. 6 is a schematic diagram of a result obtained after the weight parameter matrix W: s*t shown in FIG. 3 is processed in the operations shown in FIG. 4 and FIG. 5. In one embodiment, the weight parameter matrix W: s*t is converted into a weight parameter approximation matrix W_J: s*K. In one embodiment of this application, the cloud-side device trims the parameter component of the neural network model, so as to reduce a storage capacity occupied by the neural network model, and reduce a computation amount and a required storage capacity of the neural network model in a running process, so that a hardware resource required when the neural network model (that is, the second neural network model) delivered to the terminal-side device runs is within the available hardware resource capability range of the terminal-side device. It may be learned from the foregoing description that the parameter component of the neural network model may be trimmed to effectively reduce the computation amount and the required storage capacity of the neural network model. An architecture component of the neural network model may be further trimmed before the neural network model is trained, so as to further reduce the computation amount and the required storage capacity of the neural network model.
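The clustering-and-codeword idea of operations 1 and 2 can be shown in a simplified form: cluster the column vectors of a weight matrix with a small K-means loop and replace each column by its cluster centroid, so that only the codebook (centroids) plus the parameter-to-codebook mapping needs storing. This is a minimal sketch, not the patent's exact sub-vector scheme; the function names are assumptions.

```python
import numpy as np

def quantize_columns(W, k, iters=10, seed=0):
    """Cluster the columns of W into k types (plain K-means) and return the
    codebook (one centroid codeword per type) and the column-to-codeword map."""
    rng = np.random.default_rng(seed)
    cols = W.T                                         # one sample per column vector
    centers = cols[rng.choice(len(cols), k, replace=False)].astype(float)
    for _ in range(iters):
        # distance from every column to every current codeword
        d = np.linalg.norm(cols[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)                      # parameter-to-codebook mapping
        for j in range(k):
            if np.any(assign == j):
                centers[j] = cols[assign == j].mean(axis=0)
    return centers, assign

def reconstruct(codebook, assign):
    """Approximate the original matrix: each column becomes its codeword."""
    return codebook[assign].T
```

Storing `codebook` (k vectors) and `assign` (one small index per column) in place of the full matrix is what reduces the required storage capacity.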
Optionally, in an embodiment, operation 230 in which the cloud-side device trims the first neural network model to obtain a second neural network model includes: trimming, by the cloud-side device, an architecture component of the first neural network model to obtain a third neural network model, where a computation amount of a computation kernel of the third neural network model is less than a computation amount of a computation kernel of the first neural network model; and trimming, by the cloud-side device, a parameter component of the third neural network model to obtain the second neural network model, where a required storage capacity of a parameter component of the second neural network model is less than a required storage capacity of the parameter component of the third neural network model. In one embodiment, the cloud-side device first trims the architecture component of the first neural network model to obtain the third neural network model. The computation amount of the computation kernel of the third neural network model is less than the computation amount of the computation kernel of the first neural network model. In other words, the computation kernel of the third neural network model is simpler than the computation kernel of the first neural network model. Then the cloud-side device trains the third neural network model to obtain the parameter component of the third neural network model. Finally, the cloud-side device trims the parameter component of the third neural network model to obtain the second neural network model. The storage capacity required by the parameter component of the second neural network model is less than the storage capacity required by the parameter component of the third neural network model.
In this embodiment of this application, the architecture component of the neural network model is trimmed to simplify the computation kernel of the neural network model, thereby reducing a computation amount and a required storage capacity of the neural network model in a training process. In one embodiment, a method for trimming the architecture component of the neural network model includes any one or any combination of the following methods: reducing operand accuracy, reducing an order, and using a dedicated instruction of a hardware computing unit. A manner of reducing an order includes convolution kernel decomposition, matrix decomposition, or the like. The dedicated instruction of a hardware computing unit includes, for example, a single-instruction multiple-data stream (Single Instruction Multiple Data, SIMD) instruction, a streaming single-instruction multiple-data expansions 2 (Streaming SIMD Expansions 2, SSE2) instruction, a streaming single-instruction multiple-data expansions 3 (Streaming SIMD Expansions 3, SSE3) instruction, or a supplemental streaming single-instruction multiple-data expansions 3 (Supplemental Streaming SIMD Expansions 3, SSSE3) instruction. Reducing an order is used as an example: an operation of a high-order vector is converted into a product operation of two low-order vectors. The order of the high-order vector may be reduced by using a plurality of mathematical methods. For example, Tucker decomposition may be performed to convert the high-order vector into a product of a plurality of low-order vectors. For example, in a convolutional neural network, a 4D convolution kernel tensor W is decomposed into an accumulated sum of products of K horizontal filters (Horizontal Filter) Hk and K vertical filters (Vertical Filter) Vk, that is, W = Σ_{k=1}^{K} Hk(Vk)^T, where K is a parameter used to control the order (Rank).
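The order-reduction idea can be illustrated on a plain weight matrix: the patent names Tucker decomposition and horizontal/vertical filter decomposition, but the matrix analogue below uses a truncated SVD, which is a simple way (not the patent's specific method) to write W as a sum of K rank-1 products Hk(Vk)^T.

```python
import numpy as np

def low_rank_factors(W, K):
    """Approximate W (m x n) as a sum of K rank-1 products, W ≈ H @ V.T
    with H (m x K) and V (n x K): storing the factors takes K*(m+n)
    values instead of m*n, and multiplying by them is cheaper too."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    H = U[:, :K] * s[:K]   # absorb the singular values into H
    V = Vt[:K, :].T
    return H, V

W = np.random.default_rng(0).normal(size=(64, 128))
H, V = low_rank_factors(W, K=8)
W_approx = H @ V.T         # rank-8 approximation of W
```

Here K plays exactly the role described in the text: a knob that trades approximation accuracy against computation amount and storage.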
Specifically, as shown in FIG. 7, a high-order vector W to be learned in a process of training the neural network model is converted into two low-order vectors H and V. In one embodiment, for a detailed method for trimming the parameter component of the third neural network model, refer to the foregoing related description, for example, the description with reference to FIG. 3. For brevity, details are not described herein again. In one embodiment of this application, the cloud-side device trims the architecture component of the neural network model, so as to simplify the computation kernel of the neural network model, thereby reducing the computation amount and the required storage capacity of the neural network model in a training process. The cloud-side device trims the parameter component of the neural network model, so as to reduce the storage capacity occupied by the neural network model, and reduce the computation amount and the required storage capacity of the neural network model in a running process, so that a hardware resource required when the neural network model (that is, the second neural network model) delivered to the terminal-side device runs is within the available hardware resource capability range of the terminal-side device. It should be understood that a same cognitive computing task may correspond to different accuracy requirements in different application fields. In other words, the terminal-side device may correspond to different degrees of cognitive computing tolerance when processing a same cognitive computing task in different application fields. The following two solutions are proposed in this embodiment of this application, so that the terminal-side device can respond to different accuracy requirements in different application fields when processing a same cognitive computing task.
In a first solution, the cloud-side device obtains, based on the request message, a neural network architecture capable of processing the cognitive computing task; trains the neural network architecture to obtain a plurality of neural network models with different degrees of cognitive accuracy; and delivers the plurality of neural network models with different degrees of cognitive accuracy to the terminal-side device. In other words, the plurality of neural network models with different degrees of cognitive accuracy are pre-stored on the terminal-side device. For example, when the terminal-side device needs to process the cognitive computing task in an application scenario A, the terminal-side device selects a neural network model with cognitive accuracy corresponding to the application scenario A to process the cognitive computing task; when the terminal-side device needs to process the cognitive computing task in an application scenario B, the terminal-side device selects a neural network model with cognitive accuracy corresponding to the application scenario B to process the cognitive computing task. It should be understood that, in the first solution, the neural network models with different degrees of cognitive accuracy are pre-stored on the terminal-side device, so that efficiency of processing the cognitive computing task in different application scenarios by the terminal-side device can be effectively improved. In a second solution, the terminal-side device determines, based on a to-be-processed application scenario, a requirement of the application scenario for cognitive accuracy, that is, cognitive accuracy tolerance, and then adds the cognitive accuracy tolerance to the request message; and the cloud-side device obtains, based on the cognitive accuracy tolerance, a neural network model that meets the cognitive accuracy tolerance, and then delivers the neural network model to the terminal-side device.
In other words, the cloud-side device delivers, to the terminal-side device, only a neural network model that meets current cognitive accuracy tolerance of the terminal-side device. Therefore, in the second solution, the cloud-side device delivers, to the terminal-side device, only a neural network model that meets cognitive accuracy in a current application scenario, so that storage load of the terminal-side device can be reduced. To better understand the method for data processing in this embodiment of this application, the following describes the method for data processing in this embodiment of this application with reference to FIG. 9 by using a cognitive computing scenario shown in FIG. 8 as an example. It is assumed that a cognitive computing task to be processed by the terminal-side device is to identify an object in a picture shown in FIG. 8, and the object in the figure is a Mercedes-Benz SUV. The terminal-side device obtains the picture shown in FIG. 8, and uses the picture as input data of a cognitive application program (corresponding to a neural network basic platform) on the terminal-side device. The cognitive application program obtains an identification result by processing the input data; for example, a coarse-grained correct identification result is a "car". It should be understood that the cognitive application program mentioned herein is implemented through encapsulation based on the neural network basic platform running on the terminal-side device, and a function of the cognitive application program is to provide a cognitive function for a user. In this embodiment, the function is used to identify the object in the picture shown in FIG. 8. As shown in FIG. 9, a specific processing procedure is as follows: Operation 610. The terminal-side device obtains a to-be-processed cognitive computing task, that is, to identify a car in the picture shown in FIG. 8. Operation 620.
The terminal-side device processes the cognitive computing task by using a cognitive application program; in other words, the terminal-side device processes the cognitive computing task by using a neural network basic platform running on the terminal-side device. Operation 630. Determine whether a processing result in operation 620 meets an expected requirement; and if the processing result in operation 620 meets the expected requirement, end the procedure, or if the processing result in operation 620 does not meet the expected requirement, perform operation 640. In one embodiment, that the processing result in operation 620 does not meet the expected requirement includes the following three cases: In a first case, an identification result of the cognitive application program on the terminal-side device does not meet the expected requirement. For example, if the object in the picture shown in FIG. 8 is identified as a "ship", the identification result is incorrect. In a second case, an identification function of the cognitive application program on the terminal-side device does not meet the expected requirement. For example, the cognitive application program on the terminal-side device can only identify the object in the picture shown in FIG. 8 as a "car", but cannot identify the object in the picture shown in FIG. 8 as "Mercedes-Benz". In a third case, a hardware resource required when the cognitive application program on the terminal-side device that is capable of identifying the object in the picture shown in FIG. 8 runs exceeds an available hardware resource capability range of the terminal-side device. Operation 640. If the processing result in operation 620 does not meet the expected requirement, the terminal-side device sends a request message to the cloud-side device, so as to trigger the cloud-side device to train and trim a neural network of the terminal-side device.
In one embodiment, the terminal-side device may upload an obtained new picture set, that is, an incremental picture set, to the cloud-side device, so that the cloud-side device trains and trims the neural network of the terminal-side device based on an existing picture set and the newly uploaded incremental picture set. Operation 650. The cloud-side device trains and trims a neural network model to obtain a second neural network model. In one embodiment, the cloud-side device obtains, based on the request message, a first neural network model capable of processing the cognitive computing task, and then trims the first neural network model to obtain the second neural network model. A computation amount and a required storage capacity of the second neural network model are respectively less than a computation amount and a required storage capacity of the first neural network model. The trimming the first neural network model to obtain the second neural network model includes: trimming a parameter component of the first neural network model to obtain the second neural network model. Alternatively, the trimming the first neural network model to obtain the second neural network model includes: trimming an architecture component of the first neural network model to obtain a third neural network model, where a computation amount of a computation kernel of the third neural network model is less than a computation amount of a computation kernel of the first neural network model; and trimming a parameter component of the third neural network model to obtain the second neural network model. For specific descriptions, refer to the foregoing related description. For brevity, details are not described herein again. Operation 660. The cloud-side device stores the second neural network model. Operation 670. The cloud-side device delivers the second neural network model to the terminal-side device.
In one embodiment, operation 670 may also be: The cloud-side device pushes the neural network model to the terminal-side device. It should be understood that the neural network model delivered by the cloud-side device to the terminal-side device may include a neural network architecture component and a neural network parameter component, or may include only a neural network parameter component. Operation 680. The terminal-side device updates, based on the second neural network model received from the cloud-side device, the neural network basic platform running on the terminal-side device, and processes the cognitive computing task based on an updated neural network basic platform. Operation 690. The terminal-side device determines whether a processing result in operation 680 meets the expected requirement; and if the processing result in operation 680 meets the expected requirement, ends the procedure, or if the processing result in operation 680 does not meet the expected requirement, performs operation 640. Therefore, in one embodiment of this application, the terminal-side device requests, from the cloud-side device, the neural network model used to process the cognitive computing task, and after trimming the neural network model capable of processing the cognitive computing task, the cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device, so that a neural network model that originally runs on the cloud-side device with a strong computing capability can also be applicable to the terminal-side device with a relatively weak computing capability, and the terminal-side device can process the cognitive computing task.
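Operations 610 to 690 amount to a retry loop: process locally, and whenever the result misses the expected requirement, request a trimmed model from the cloud and try again. The toy classes below (the "coarse"/"fine" models included) are stand-ins invented for this sketch, not the patent's components.

```python
from dataclasses import dataclass

@dataclass
class Result:
    label: str
    meets_expectation: bool

class Platform:                      # stands in for the neural network basic platform
    def __init__(self):
        self.model = "coarse"
    def update(self, model):         # operation 680: install the delivered model
        self.model = model
    def process(self, task):         # operations 620/680: run the task
        if self.model == "fine":
            return Result("Mercedes-Benz", True)
        return Result("car", False)  # coarse model: result too coarse-grained

class Cloud:
    def request_trimmed_model(self, task):  # operations 640-670 collapsed
        return "fine"                # train, trim, store, deliver

def handle_task(task, platform, cloud, max_rounds=3):
    result = platform.process(task)              # operations 610-620
    for _ in range(max_rounds):
        if result.meets_expectation:             # operations 630/690
            return result
        platform.update(cloud.request_trimmed_model(task))
        result = platform.process(task)
    return result

final = handle_task("identify the SUV", Platform(), Cloud())
```

The max_rounds bound is an addition of this sketch; the procedure in the text loops back to operation 640 unconditionally whenever the result is unsatisfactory.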
Therefore, this embodiment of this application can improve performance of processing a neural network-related application by the terminal-side device, and help enhance expansion of an intelligent application capability of the terminal-side device. In one embodiment, in operation 660 in the embodiment shown in FIG. 9, storage of the neural network model on the cloud-side device may be implemented by using a content delivery network (Content Delivery Network, CDN). In operation 670, dynamic update push of the neural network model to the terminal-side device may be implemented by using a push notification server (Push Notification Server, PNS), and delivery of the neural network model to the terminal-side device may be implemented by using the content delivery network (CDN). Specifically, as shown in FIG. 10, an embodiment of this application further provides a terminal-cloud collaboration system 700. The terminal-cloud collaboration system 700 includes a cloud-side device 710, a push notification server (PNS) 720, a content delivery network (CDN) 730, and a terminal-side device 740. The cloud-side device 710, the push notification server 720, the content delivery network 730, and the terminal-side device 740 may communicate with each other. As shown in FIG. 10, the content delivery network (CDN) 730 includes an intelligent scheduling domain name server (Content Delivery Network Domain Name System, CDN DNS) node 731 and a plurality of CDN nodes 732 (FIG. 10 schematically shows three CDN nodes 732). The CDN node 732 is configured to store a neural network model obtained by the cloud-side device. The CDN DNS node 731 is configured to maintain a correspondence between a neural network model stored in a network and a CDN node 732. Specifically, in an example of FIG. 10, the CDN DNS node 731 maintains a correspondence between an identifier of a neural network model (Model ID) and an IP address of a CDN node 732 (CDN Node IP).
Optionally, an identifier of one neural network model may correspond to IP addresses of a plurality of CDN nodes. As shown in FIG. 10, the push notification server 720 includes a CDN DNS node registration module 721, a terminal-side device registration module 722, a transceiver module 723, and a state update module 724. The push notification server 720 further maintains a correspondence between an identifier of a neural network model (Model ID), an identifier of the terminal-side device (Device ID), and an IP address of the CDN DNS node (CDN DNS IP). The push notification server 720 may further maintain a state machine of the terminal-side device 740, that is, maintain a state update of the terminal-side device 740. A data structure used by the push notification server 720 to maintain the foregoing information may be a two-dimensional table. A processing procedure of implementing storage of the neural network model on the cloud-side device by using the content delivery network (CDN) is as follows: Step 1: When the CDN DNS node 731 communicates with the push notification server 720 for the first time, the CDN DNS node 731 first needs to send, to the push notification server 720, a registration request for requesting to register the CDN DNS node 731, where the registration request includes an IP address of the CDN DNS node 731; and after the push notification server 720 receives the registration request, the CDN DNS node registration module 721 processes the registration request, and stores the IP address of the CDN DNS node 731 on the push notification server 720.
Step 2: After completing training and trimming of a neural network model, the cloud-side device 710 sends a storage request to the CDN DNS node 731 to request to store the neural network model; the CDN DNS node 731 allocates a CDN node configured to store the neural network model (a neural network architecture component and/or a neural network parameter component), for example, a CDN node 732a shown in FIG. 10, and sends, to the cloud-side device 710, a response message used to indicate the CDN node 732a that stores the neural network model; and the CDN DNS node 731 is further configured to maintain a correspondence between an identifier of the neural network model and an IP address of the selected CDN node 732. Specifically, a data structure for maintaining the correspondence may be a two-dimensional table shown in FIG. 10. Step 3: The cloud-side device stores, based on the received response message, the neural network model obtained after training and trimming on the corresponding CDN node, for example, on the CDN node 732a. In one embodiment, a log may be maintained on the cloud-side device 710. Each log entry includes an identifier of a neural network model, an available version number of the neural network model, accuracy/precision corresponding to the version, completion time of the version, and the like. It should be understood that the log maintained on the cloud-side device 710 helps implement update push of the neural network model. In one embodiment, the cloud-side device 710 adds a new log entry to the log after completing training and trimming of the neural network model. The cloud-side device 710 sends the new log entry to the push notification server 720, so as to trigger the push notification server 720 to send an update push notification to the terminal-side device 740. The terminal-side device 740 may choose, based on information about a current neural network model available to the cloud-side device 710, whether to update a neural network model on the terminal-side device 740.
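The CDN DNS bookkeeping in steps 1 to 3 can be sketched as a small table from Model ID to CDN node IPs: the DNS node allocates a node when the cloud-side device asks to store a trimmed model, and resolves the table later when a terminal asks where a model lives. The round-robin allocation policy, IPs, and IDs below are inventions of this sketch.

```python
class CdnDns:
    def __init__(self, node_ips):
        self.node_ips = list(node_ips)   # registered CDN nodes
        self.table = {}                  # model_id -> list of node IPs
        self._next = 0

    def allocate(self, model_id):
        """Step 2: pick a CDN node (round-robin here) to store the model,
        record the correspondence, and return the node's IP — the content
        of the response message sent back to the cloud-side device."""
        ip = self.node_ips[self._next % len(self.node_ips)]
        self._next += 1
        self.table.setdefault(model_id, []).append(ip)
        return ip

    def resolve(self, model_id):
        """Used later when a terminal-side device requests the model."""
        return self.table[model_id]

dns = CdnDns(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
first = dns.allocate("model-42")    # step 3: cloud stores the model here
second = dns.allocate("model-42")   # one model may map to several nodes
```

Letting the value be a list mirrors the note above that one model identifier may correspond to the IP addresses of several CDN nodes.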
As shown in FIG. 11, the terminal-side device 740 includes a control module 741, a receiving module 742, a cache module 743, an update control module 744, a neural network basic platform 745, and an application program 746. The neural network basic platform 745 includes a neural network architecture component and a neural network parameter component, and the neural network architecture component is decoupled from the neural network parameter component. In other words, both a submodule in the neural network architecture component and a submodule in the neural network parameter component may be replaced based on a requirement. The application program 746 is an APP obtained through encapsulation based on the neural network basic platform 745. A procedure of dynamically updating the neural network model (a component) on the terminal-side device 740 is as follows: Step 1: After the cloud-side device 710 has a new available version of a neural network model, the push notification server 720 sends an update push notification to a corresponding terminal-side device (for example, the terminal-side device 740) based on the correspondence between an ID of a neural network model and an ID of the terminal-side device that is maintained on the push notification server 720. Step 2: The control module 741 of the terminal-side device 740 receives the update push notification from the push notification server 720, and feeds back, to the push notification server 720, a signal indicating whether the terminal-side device 740 locally updates the neural network model; and the push notification server 720 modifies an update state in a two-dimensional table of the push notification server 720 based on the feedback. Step 3: If the terminal-side device 740 chooses to update the local neural network model (a component) after receiving the update push notification from the push notification server 720, the terminal-side device 740 sends a neural network update request to the content delivery network (CDN) 730.
Step 4: The CDN DNS node 731 obtains, based on the neural network update request of the terminal-side device 740, an identifier of a neural network model requested by the terminal-side device 740; and then determines, based on the correspondence between an identifier of a neural network model and an IP address of a CDN node that is on the CDN DNS node 731, an IP address of a CDN node (for example, the CDN node 732a) that actually stores the neural network model requested by the terminal-side device 740, and sends the IP address of the CDN node 732a to the terminal-side device 740. Step 5: The control module 741 of the terminal-side device 740 sends a neural network model request message to the CDN node 732a based on the IP address of the CDN node 732a that is sent by the CDN DNS node 731, and the control module 741 further controls the receiving module 742 to receive information fed back by the CDN node 732a. Step 6: The CDN node 732a sends the corresponding neural network model to the terminal-side device 740, and specifically, may send an architecture component and/or a parameter component of the neural network model. Step 7: The receiving module 742 receives the neural network model sent by the CDN node 732a, and caches the neural network model by using the cache module 743. Step 8: The update control module 744 is configured to update a related component on the neural network basic platform 745 based on the neural network model cached by the cache module 743, for example, update a neural network architecture component on the neural network basic platform 745 based on the architecture component of the neural network model cached by the cache module 743, and update a neural network parameter component on the neural network basic platform 745 based on the parameter component of the neural network model cached by the cache module 743.
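The push notification server's side of steps 1 and 2 of this update procedure reduces to maintaining the two-dimensional table and flipping a per-device update state on feedback. The field and state names below are assumptions made for this sketch.

```python
# the "two-dimensional table": one row per (model, device) subscription
rows = []

def register_device(model_id, device_id, dns_ip):
    rows.append({"model": model_id, "device": device_id,
                 "dns": dns_ip, "state": "idle"})

def push_update(model_id):
    """Step 1: notify every device subscribed to model_id that a new
    version is available; mark its row until feedback arrives (step 2)."""
    targets = [r for r in rows if r["model"] == model_id]
    for r in targets:
        r["state"] = "notified"
    return [r["device"] for r in targets]

register_device("model-42", "phone-1", "10.0.0.100")
register_device("model-42", "car-7", "10.0.0.100")
notified = push_update("model-42")
```

On the device's feedback, the server would move the row's state on again (for example to "updating" or back to "idle"), which is the state-machine maintenance mentioned for the state update module.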
As shown in FIG. 11, a new function (that is, a function corresponding to the neural network model delivered by the cloud-side device) is added to the application program 746 based on the updated neural network basic platform 745 by using an application programming interface (Application Programming Interface, API), so that the application program 746 can process user data. The user data is data in an intelligent application scenario, and the intelligent application scenario includes, for example, a driverless car, a robot, and intelligent terminal cognition. The method for data processing in this embodiment of this application is described above. A terminal-side device, a cloud-side device, and a terminal-cloud collaboration system in the embodiments of this application are described below with reference to FIG. 12 to FIG. 16. FIG. 12 is a schematic block diagram of a terminal-side device 900 according to an embodiment of this application. The terminal-side device 900 includes: a sending module 910, configured to send a request message to a cloud-side device, where the request message is used to request a neural network model used to process a cognitive computing task; a receiving module 920, configured to receive a second neural network model that is obtained by trimming a first neural network model and that is sent by the cloud-side device, where the first neural network model is a neural network model on the cloud-side device that is used to process the cognitive computing task, and a hardware resource required when the second neural network model runs is within an available hardware resource capability range of the terminal-side device; and a processing module 930, configured to process the cognitive computing task based on the second neural network model.
In one embodiment of this application, the terminal-side device requests, from the cloud-side device, the neural network model used to process the cognitive computing task, and after trimming the neural network model capable of processing the cognitive computing task, the cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device, so that a neural network model that originally runs on the cloud-side device with a strong computing capability can also be applicable to the terminal-side device with a relatively weak computing capability, and the terminal-side device can process the cognitive computing task. Therefore, this embodiment of this application can improve performance of processing a neural network-related application by the terminal-side device, and help enhance expansion of an intelligent application capability of the terminal-side device. Optionally, in an embodiment, the terminal-side device includes a neural network basic platform, the neural network basic platform includes a neural network architecture component and a neural network parameter component, and the neural network architecture component is decoupled from the neural network parameter component. The processing module 930 is specifically configured to: when the second neural network model includes an architecture update component, update the neural network architecture component based on the architecture update component; when the second neural network model includes a parameter update component, update the neural network parameter component based on the parameter update component; and process the cognitive computing task based on an updated neural network basic platform.
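The update rule just described relies on the two components being decoupled: a delivered model may replace the architecture component, the parameter component, or both, independently. A minimal sketch (the class and component names are mine, not the patent's):

```python
class NeuralNetworkBasicPlatform:
    """Architecture and parameter components are decoupled: either can
    be swapped without touching the other."""
    def __init__(self, architecture, parameters):
        self.architecture = architecture   # neural network architecture component
        self.parameters = parameters       # neural network parameter component

    def apply(self, delivered):
        """delivered: dict that may carry an architecture update component
        and/or a parameter update component of the second model."""
        if "architecture" in delivered:
            self.architecture = delivered["architecture"]
        if "parameters" in delivered:
            self.parameters = delivered["parameters"]

platform = NeuralNetworkBasicPlatform("arch-v1", "weights-v1")
platform.apply({"parameters": "weights-v2"})   # parameter-only update
```

A parameter-only delivery, as in the last sentence of operation 670's note, leaves the architecture component untouched.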
Optionally, in an embodiment, the sending module 910 is configured to send the request message to the cloud-side device under any one of the following conditions: the terminal-side device lacks a neural network model used to process the cognitive computing task; accuracy of processing the cognitive computing task by using a neural network model on the terminal-side device does not meet cognitive accuracy tolerance; or a hardware resource required when a neural network model on the terminal-side device that is used to process the cognitive computing task runs exceeds an available hardware resource capability of the terminal-side device, where the cognitive accuracy tolerance represents expected accuracy of processing the cognitive computing task by the terminal-side device. Optionally, in an embodiment, the request message carries indication information used to indicate the cognitive accuracy tolerance, so that the cloud-side device trims the first neural network model to obtain the second neural network model that meets the cognitive accuracy tolerance, where the cognitive accuracy tolerance represents the expected accuracy of processing the cognitive computing task by the terminal-side device. Optionally, in an embodiment, the request message carries indication information used to indicate the available hardware resource capability of the terminal-side device. Optionally, in an embodiment, the request message further carries an identifier used to indicate the first neural network model, so that the cloud-side device determines the first neural network model based on the identifier; or the request message further carries function information, and the function information is used to describe a function of processing the cognitive computing task, so that the cloud-side device determines the first neural network model based on the function information.
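The three trigger conditions for sending the request message can be written as one predicate. The attribute names (accuracy, required_mem, available_mem) are assumptions made for this illustration, not fields defined by the text.

```python
from types import SimpleNamespace as NS

def should_request_model(device, task):
    model = device.models.get(task.name)
    if model is None:
        return True   # 1: the device lacks a model for this task
    if model.accuracy < task.tolerance:
        return True   # 2: cognitive accuracy tolerance is not met
    if model.required_mem > device.available_mem:
        return True   # 3: the model exceeds the available hardware resource
    return False

device = NS(available_mem=512,
            models={"identify": NS(accuracy=0.90, required_mem=256)})
easy = NS(name="identify", tolerance=0.85)
strict = NS(name="identify", tolerance=0.95)
unknown = NS(name="translate", tolerance=0.80)
```

Only when the predicate is true does the sending module need to contact the cloud-side device; otherwise the local model suffices.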
Optionally, in an embodiment, a computation amount and a required storage capacity of the second neural network model are respectively less than a computation amount and a required storage capacity of the first neural network model. In one embodiment of this application, the processing module 930 may be implemented by a processor or a processor-related circuit. The sending module 910 may be implemented by a transmitter or a transmitter-related circuit. The receiving module 920 may be implemented by a receiver or a receiver-related circuit. As shown in FIG. 13, an embodiment of this application further provides a terminal-side device 1000. The terminal-side device 1000 includes a processor 1010, a memory 1020, a receiver 1030, and a transmitter 1040. The processor 1010, the memory 1020, the receiver 1030, and the transmitter 1040 communicate with each other by using an internal connection path. The memory 1020 is configured to store an instruction. The processor 1010 is configured to execute the instruction stored in the memory 1020, so as to control the receiver 1030 to receive a signal and control the transmitter 1040 to send a signal. The transmitter 1040 is configured to send a request message to a cloud-side device, where the request message is used to request a neural network model used to process a cognitive computing task; the receiver 1030 is configured to receive a second neural network model that is obtained by trimming a first neural network model and that is sent by the cloud-side device, where the first neural network model is a neural network model on the cloud-side device that is used to process the cognitive computing task, and a hardware resource required when the second neural network model runs is within an available hardware resource capability range of the terminal-side device; and the processor 1010 is configured to process the cognitive computing task based on the second neural network model.
In one embodiment of this application, the terminal-side device requests, from the cloud-side device, the neural network model used to process the cognitive computing task, and after trimming the neural network model capable of processing the cognitive computing task, the cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device, so that a neural network model that originally runs on the cloud-side device with a strong computing capability can also be applicable to the terminal-side device with a relatively weak computing capability, and the terminal-side device can process the cognitive computing task. Therefore, this embodiment of this application can improve performance of processing a neural network-related application by the terminal-side device, and help enhance expansion of an intelligent application capability of the terminal-side device. Optionally, in an embodiment, the terminal-side device includes a neural network basic platform, the neural network basic platform includes a neural network architecture component and a neural network parameter component, and the neural network architecture component is decoupled from the neural network parameter component. The processor 1010 is specifically configured to: when the second neural network model includes an architecture update component, update the neural network architecture component based on the architecture update component; when the second neural network model includes a parameter update component, update the neural network parameter component based on the parameter update component; and process the cognitive computing task based on an updated neural network basic platform.
Optionally, in an embodiment, the transmitter 1040 is configured to send the request message to the cloud-side device under any one of the following conditions: the terminal-side device lacks a neural network model used to process the cognitive computing task; accuracy of processing the cognitive computing task by using a neural network model on the terminal-side device does not meet cognitive accuracy tolerance; or a hardware resource required when a neural network model on the terminal-side device that is used to process the cognitive computing task runs exceeds an available hardware resource capability of the terminal-side device, where the cognitive accuracy tolerance represents expected accuracy of processing the cognitive computing task by the terminal-side device. Optionally, in an embodiment, the request message carries indication information used to indicate the cognitive accuracy tolerance, so that the cloud-side device trims the first neural network model to obtain the second neural network model that meets the cognitive accuracy tolerance, where the cognitive accuracy tolerance represents the expected accuracy of processing the cognitive computing task by the terminal-side device. Optionally, in an embodiment, the request message carries indication information used to indicate the available hardware resource capability of the terminal-side device. Optionally, in an embodiment, the request message further carries an identifier used to indicate the first neural network model, so that the cloud-side device determines the first neural network model based on the identifier; or the request message further carries function information, and the function information is used to describe a function of processing the cognitive computing task, so that the cloud-side device determines the first neural network model based on the function information.
Optionally, in an embodiment, a computation amount and a required storage capacity of the second neural network model are respectively less than a computation amount and a required storage capacity of the first neural network model. It should be understood that the terminal-side device900shown inFIG.12or the terminal-side device1000shown inFIG.13may be configured to perform an operation or a procedure related to the terminal-side device in the method embodiment, and operations and/or functions of the modules in the terminal-side device900or the terminal-side device1000are separately used to implement a corresponding procedure in the method embodiment. For brevity, details are not described herein again. FIG.14is a schematic block diagram of a cloud-side device1100according to an embodiment of this application. The cloud-side device1100includes: a receiving module1110, configured to receive a request message sent by a terminal-side device, where the request message is used to request a neural network model used to process a cognitive computing task; a determining module1120, configured to determine, based on the request message, a first neural network model used to process the cognitive computing task; a trimming module1130, configured to trim the first neural network model to obtain a second neural network model, where a hardware resource required when the second neural network model runs is within an available hardware resource capability range of the terminal-side device; and a sending module1140, configured to send the second neural network model to the terminal-side device, so that the terminal-side device processes the cognitive computing task based on the second neural network model. 
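The determining module's two ways of selecting the first neural network model, by identifier or by function information, can be sketched as follows. The repository layout and key names here are hypothetical, chosen only to illustrate the selection logic.

```python
def determine_first_model(request: dict, model_repository: dict) -> dict:
    """Pick the first neural network model for a request, preferring an
    explicit model identifier and falling back to function information."""
    if "model_id" in request:
        # The request carries an identifier of the first neural network model.
        return model_repository[request["model_id"]]
    # Otherwise, match on the function the model implements.
    for model in model_repository.values():
        if model["function"] == request["function"]:
            return model
    raise LookupError("no model found for the requested cognitive computing task")
```

The identifier path is direct; the function-information path lets a terminal-side device request a capability (for example, image classification) without knowing which cloud-side model provides it.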
In this embodiment of this application, the terminal-side device requests, from the cloud-side device, the neural network model used to process the cognitive computing task, and after trimming the neural network model capable of processing the cognitive computing task, the cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device, so that a neural network model that originally runs on the cloud-side device with a strong computing capability can also be applicable to the terminal-side device with a relatively weak computing capability, and the terminal-side device can process the cognitive computing task. Therefore, this embodiment of this application can improve performance of processing a neural network-related application by the terminal-side device, and help enhance expansion of an intelligent application capability of the terminal-side device. Optionally, in an embodiment, the trimming module1130is configured to trim a parameter component of the first neural network model to obtain the second neural network model, where a required storage capacity of a parameter component of the second neural network model is less than a required storage capacity of the parameter component of the first neural network model. 
Optionally, in an embodiment, the trimming module1130is configured to: trim an architecture component of the first neural network model to obtain a third neural network model, where a computation amount of a computation kernel of the third neural network model is less than a computation amount of a computation kernel of the first neural network model; and trim a parameter component of the third neural network model to obtain the second neural network model, where a required storage capacity of a parameter component of the second neural network model is less than a required storage capacity of the parameter component of the third neural network model. Optionally, in an embodiment, the request message carries indication information used to indicate cognitive accuracy tolerance, and the cognitive accuracy tolerance represents expected accuracy of processing the cognitive computing task by the terminal-side device. The trimming module1130is configured to trim, based on the cognitive accuracy tolerance, the first neural network model to obtain the second neural network model, where accuracy of processing the cognitive computing task by using the second neural network model meets the cognitive accuracy tolerance. Optionally, in an embodiment, the request message carries indication information used to indicate an available hardware resource capability of the terminal-side device. Optionally, in an embodiment, the request message further carries an identifier used to indicate the first neural network model. The determining module1120is specifically configured to determine the first neural network model based on the identifier. Optionally, in an embodiment, the request message further carries function information, and the function information is used to describe a function of processing the cognitive computing task. The determining module is specifically configured to determine the first neural network model based on the function information. 
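The two-stage trimming described above (architecture component first, yielding the third neural network model; parameter component second, yielding the second neural network model) can be sketched as below. The concrete techniques shown, dropping a layer and magnitude-based weight pruning, are illustrative stand-ins; the application does not prescribe a specific trimming algorithm, only that each stage reduces the computation amount or the required storage capacity while meeting the cognitive accuracy tolerance.

```python
def trim_model(first_model: dict, prune_threshold: float) -> dict:
    # Stage 1 (architecture trim): produce a third model whose computation
    # kernel is smaller, illustrated here by dropping the last layer.
    third_model = {
        "layers": first_model["layers"][:-1],
        "weights": first_model["weights"],
    }
    # Stage 2 (parameter trim): produce the second model whose parameter
    # component needs less storage, illustrated by magnitude pruning.
    second_model = {
        "layers": third_model["layers"],
        "weights": [w for w in third_model["weights"] if abs(w) > prune_threshold],
    }
    return second_model
```

In a real system the prune threshold would be tuned so that the second model's accuracy still meets the cognitive accuracy tolerance indicated in the request message.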
Optionally, in an embodiment, a computation amount and a required storage capacity of the second neural network model are respectively less than a computation amount and a required storage capacity of the first neural network model. Specifically, in one embodiment of this application, the determining module1120and the trimming module1130may be implemented by a processor or a processor-related circuit. The receiving module1110may be implemented by a receiver or a receiver-related circuit. The sending module1140may be implemented by a transmitter or a transmitter-related circuit. As shown inFIG.15, an embodiment of this application further provides a cloud-side device1200. The cloud-side device1200includes a processor1210, a memory1220, a receiver1230, and a transmitter1240. The processor1210, the memory1220, the receiver1230, and the transmitter1240communicate with each other by using an internal connection path. The memory1220is configured to store an instruction. The processor1210is configured to execute the instruction stored in the memory1220, so as to control the receiver1230to receive a signal and control the transmitter1240to send a signal. The receiver1230is configured to receive a request message sent by a terminal-side device, where the request message is used to request a neural network model used to process a cognitive computing task; the processor1210is configured to: determine, based on the request message, a first neural network model used to process the cognitive computing task; and trim the first neural network model to obtain a second neural network model, where a hardware resource required when the second neural network model runs is within an available hardware resource capability range of the terminal-side device; and the transmitter1240is configured to send the second neural network model to the terminal-side device, so that the terminal-side device processes the cognitive computing task based on the second neural network model. 
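The cloud-side flow, trimming the first neural network model until the hardware resource it requires falls within the terminal-side device's available capability range, can be sketched as an iterative loop. The halving of the resource footprint per pass is an assumption made purely for illustration.

```python
def handle_request(request: dict, first_model: dict) -> dict:
    """Trim the first model until its resource requirement is within the
    terminal's available hardware capability range, then return it for
    sending to the terminal-side device."""
    model = dict(first_model)
    available = request["available_resource_mb"]
    while model["required_resource_mb"] > available and model["layers"]:
        # Each trimming pass removes a layer and (by assumption here)
        # halves the model's resource footprint.
        model["layers"] = model["layers"][:-1]
        model["required_resource_mb"] /= 2
    return model
```

The loop terminates either when the model fits the terminal's capability or when no further architecture remains to trim; a production implementation would also verify the cognitive accuracy tolerance after each pass.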
In one embodiment of this application, the terminal-side device requests, from the cloud-side device, the neural network model used to process the cognitive computing task, and after trimming the neural network model capable of processing the cognitive computing task, the cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device, so that a neural network model that originally runs on the cloud-side device with a strong computing capability can also be applicable to the terminal-side device with a relatively weak computing capability, and the terminal-side device can process the cognitive computing task. Therefore, this embodiment of this application can improve performance of processing a neural network-related application by the terminal-side device, and help enhance expansion of an intelligent application capability of the terminal-side device. Optionally, in an embodiment, the processor1210is configured to trim a parameter component of the first neural network model to obtain the second neural network model, where a required storage capacity of a parameter component of the second neural network model is less than a required storage capacity of the parameter component of the first neural network model. 
Optionally, in an embodiment, the processor1210is configured to: trim an architecture component of the first neural network model to obtain a third neural network model, where a computation amount of a computation kernel of the third neural network model is less than a computation amount of a computation kernel of the first neural network model; and trim a parameter component of the third neural network model to obtain the second neural network model, where a required storage capacity of a parameter component of the second neural network model is less than a required storage capacity of the parameter component of the third neural network model. Optionally, in an embodiment, the request message carries indication information used to indicate cognitive accuracy tolerance, and the cognitive accuracy tolerance represents expected accuracy of processing the cognitive computing task by the terminal-side device. The processor1210is specifically configured to trim, based on the cognitive accuracy tolerance, the first neural network model to obtain the second neural network model, where accuracy of processing the cognitive computing task by using the second neural network model meets the cognitive accuracy tolerance. Optionally, in an embodiment, the request message carries indication information used to indicate an available hardware resource capability of the terminal-side device. Optionally, in an embodiment, the request message further carries an identifier used to indicate the first neural network model. The processor1210is configured to determine the first neural network model based on the identifier. Optionally, in an embodiment, the request message further carries function information, and the function information is used to describe a function of processing the cognitive computing task. The processor1210is configured to determine the first neural network model based on the function information. 
Optionally, in an embodiment, a computation amount and a required storage capacity of the second neural network model are respectively less than a computation amount and a required storage capacity of the first neural network model. It should be understood that the cloud-side device1100shown inFIG.14or the cloud-side device1200shown inFIG.15may be configured to perform an operation or a procedure related to the cloud-side device in the method embodiment, and operations and/or functions of the modules in the cloud-side device1100or the cloud-side device1200are separately used to implement a corresponding procedure in the method embodiment. For brevity, details are not described herein again. FIG.16is a schematic block diagram of a terminal-cloud collaboration system1300according to an embodiment of this application. The terminal-cloud collaboration system1300includes a terminal-side device1310and a cloud-side device1320. The terminal-side device1310is corresponding to the terminal-side device900or the terminal-side device1000in the foregoing embodiment, and the cloud-side device1320is corresponding to the cloud-side device1100or the cloud-side device1200in the foregoing embodiment. 
In one embodiment of this application, the terminal-side device requests, from the cloud-side device, a neural network model used to process a cognitive computing task, and after trimming the neural network model capable of processing the cognitive computing task, the cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within an available hardware resource capability range of the terminal-side device, so that a neural network model that originally runs on the cloud-side device with a strong computing capability can also be applicable to the terminal-side device with a relatively weak computing capability, and the terminal-side device can process the cognitive computing task. Therefore, this embodiment of this application can improve performance of processing a neural network-related application by the terminal-side device, and help enhance expansion of an intelligent application capability of the terminal-side device. It should be understood that, in this embodiment of this application, a processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or a transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. It should be further understood that, in this embodiment of this application, a memory may be a volatile memory or a nonvolatile memory, or may include both a volatile memory and a nonvolatile memory. 
The nonvolatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), and is used as an external cache. By way of example rather than limitation, many forms of RAMs are available, for example, a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), and a direct rambus random access memory (DR RAM). It should be noted that the memory (a storage module) is integrated in the processor when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA, or another programmable logic device, a discrete gate or a transistor logic device, or a discrete hardware component. It should be noted that the memory in the systems and methods described in this specification is intended to include but is not limited to these and any other proper types of memories. It should be further understood that various numerical symbols related to this specification are differentiated merely for ease of description, but are not used to limit the scope of the embodiments of this application. The term “and/or” in this specification describes only an association relationship for describing associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists. In addition, the character “/” in this specification generally indicates an “or” relationship between the associated objects.
It should be understood that sequence numbers of the foregoing processes do not mean execution sequences in the embodiments. The execution sequences of the processes should be determined based on functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of the embodiments of this application. A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application. In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. 
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. In addition, function units in the embodiments may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. When the functions are implemented in the form of a software function unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. The foregoing descriptions are merely specific implementations of the embodiments of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
DETAILED DESCRIPTION OF THE EXAMPLE EMBODIMENTS The drawings are to be regarded as being schematic representations and elements illustrated in the drawings are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software, or a combination thereof. Various example embodiments will now be described more fully with reference to the accompanying drawings in which only some example embodiments are shown. Specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. Example embodiments, however, may be embodied in various different forms, and should not be construed as being limited to only the illustrated embodiments. Rather, the illustrated embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey the concepts of this disclosure to those skilled in the art. Accordingly, known processes, elements, and techniques, may not be described with respect to some example embodiments. Unless otherwise noted, like reference characters denote like elements throughout the attached drawings and written description, and thus descriptions will not be repeated. The present invention, however, may be embodied in many alternate forms and should not be construed as limited to only the example embodiments set forth herein. It will be understood that, although the terms first, second, etc.
may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections, should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention. As used herein, the term “and/or,” includes any and all combinations of one or more of the associated listed items. The phrase “at least one of” has the same meaning as “and/or”. Spatially relative terms, such as “beneath,” “below,” “lower,” “under,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below,” “beneath,” or “under,” other elements or features would then be oriented “above” the other elements or features. Thus, the example terms “below” and “under” may encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. In addition, when an element is referred to as being “between” two elements, the element may be the only element between the two elements, or one or more other intervening elements may be present. 
Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the above disclosure, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being “directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between,” versus “directly between,” “adjacent,” versus “directly adjacent,” etc.). The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. 
Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Also, the term “exemplary” is intended to refer to an example or illustration. When an element is referred to as being “on,” “connected to,” “coupled to,” or “adjacent to,” another element, the element may be directly on, connected to, coupled to, or adjacent to, the other element, or one or more other intervening elements may be present. In contrast, when an element is referred to as being “directly on,” “directly connected to,” “directly coupled to,” or “immediately adjacent to,” another element there are no intervening elements present. It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Before discussing example embodiments in more detail, it is noted that some example embodiments may be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented in conjunction with units and/or devices discussed in more detail below. 
Although discussed in a particular manner, a function or operation specified in a specific block may be performed differently from the flow specified in a flowchart, flow diagram, etc. For example, functions or operations illustrated as being performed serially in two consecutive blocks may actually be performed simultaneously, or in some cases be performed in reverse order. Although the flowcharts describe the operations as sequential processes, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, subprograms, etc. Specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments of the present invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein. Units and/or devices according to one or more example embodiments may be implemented using hardware, software, and/or a combination thereof. For example, hardware devices may be implemented using processing circuitry such as, but not limited to, a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. Portions of the example embodiments and corresponding detailed description may be presented in terms of software, or algorithms and symbolic representations of operation on data bits within a computer memory.
These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art. An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. In this application, including the definitions below, the term ‘module’ or the term ‘controller’ may be replaced with the term ‘circuit.’ The term ‘module’ may refer to, be part of, or include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, dedicated, or group) that stores code executed by the processor hardware.
The module may include one or more interface circuits. In some examples, the interface circuits may include wired or wireless interfaces that are connected to a local area network (LAN), the Internet, a wide area network (WAN), or combinations thereof. The functionality of any given module of the present disclosure may be distributed among multiple modules that are connected via interface circuits. For example, multiple modules may allow load balancing. In a further example, a server (also known as remote, or cloud) module may accomplish some functionality on behalf of a client module. Software may include a computer program, program code, instructions, or some combination thereof, for independently or collectively instructing or configuring a hardware device to operate as desired. The computer program and/or program code may include program or computer-readable instructions, software components, software modules, data files, data structures, and/or the like, capable of being implemented by one or more hardware devices, such as one or more of the hardware devices mentioned above. Examples of program code include both machine code produced by a compiler and higher level program code that is executed using an interpreter. For example, when a hardware device is a computer processing device (e.g., a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a microprocessor, etc.), the computer processing device may be configured to carry out program code by performing arithmetical, logical, and input/output operations, according to the program code. Once the program code is loaded into a computer processing device, the computer processing device may be programmed to perform the program code, thereby transforming the computer processing device into a special purpose computer processing device. 
In a more specific example, when the program code is loaded into a processor, the processor becomes programmed to perform the program code and operations corresponding thereto, thereby transforming the processor into a special purpose processor. Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, or computer storage medium or device, capable of providing instructions or data to, or being interpreted by, a hardware device. The software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion. In particular, for example, software and data may be stored by one or more computer readable recording mediums, including the tangible or non-transitory computer-readable storage media discussed herein. Even further, any of the disclosed methods may be embodied in the form of a program or software. The program or software may be stored on a non-transitory computer readable medium and is adapted to perform any one of the aforementioned methods when run on a computer device (a device including a processor). Thus, the non-transitory, tangible computer readable medium, is adapted to store information and is adapted to interact with a data processing facility or computer device to execute the program of any of the above mentioned embodiments and/or to perform the method of any of the above mentioned embodiments. Example embodiments may be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented in conjunction with units and/or devices discussed in more detail below. Although discussed in a particular manner, a function or operation specified in a specific block may be performed differently from the flow specified in a flowchart, flow diagram, etc. 
For example, functions or operations illustrated as being performed serially in two consecutive blocks may actually be performed simultaneously, or in some cases be performed in reverse order. According to one or more example embodiments, computer processing devices may be described as including various functional units that perform various operations and/or functions to increase the clarity of the description. However, computer processing devices are not intended to be limited to these functional units. For example, in one or more example embodiments, the various operations and/or functions of the functional units may be performed by other ones of the functional units. Further, the computer processing devices may perform the operations and/or functions of the various functional units without sub-dividing the operations and/or functions of the computer processing units into these various functional units. Units and/or devices according to one or more example embodiments may also include one or more storage devices. The one or more storage devices may be tangible or non-transitory computer-readable storage media, such as random access memory (RAM), read only memory (ROM), a permanent mass storage device (such as a disk drive), solid state (e.g., NAND flash) device, and/or any other like data storage mechanism capable of storing and recording data. The one or more storage devices may be configured to store computer programs, program code, instructions, or some combination thereof, for one or more operating systems and/or for implementing the example embodiments described herein. The computer programs, program code, instructions, or some combination thereof, may also be loaded from a separate computer readable storage medium into the one or more storage devices and/or one or more computer processing devices using a drive mechanism. 
Such separate computer readable storage medium may include a Universal Serial Bus (USB) flash drive, a memory stick, a Blu-ray/DVD/CD-ROM drive, a memory card, and/or other like computer readable storage media. The computer programs, program code, instructions, or some combination thereof, may be loaded into the one or more storage devices and/or the one or more computer processing devices from a remote data storage device via a network interface, rather than via a local computer readable storage medium. Additionally, the computer programs, program code, instructions, or some combination thereof, may be loaded into the one or more storage devices and/or the one or more processors from a remote computing system that is configured to transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, over a network. The remote computing system may transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, via a wired interface, an air interface, and/or any other like medium. The one or more hardware devices, the one or more storage devices, and/or the computer programs, program code, instructions, or some combination thereof, may be specially designed and constructed for the purposes of the example embodiments, or they may be known devices that are altered and/or modified for the purposes of example embodiments. A hardware device, such as a computer processing device, may run an operating system (OS) and one or more software applications that run on the OS. The computer processing device also may access, store, manipulate, process, and create data in response to execution of the software. For simplicity, one or more example embodiments may be exemplified as a computer processing device or processor; however, one skilled in the art will appreciate that a hardware device may include multiple processing elements or processors and multiple types of processing elements or processors. 
For example, a hardware device may include multiple processors or a processor and a controller. In addition, other processing configurations are possible, such as parallel processors. The computer programs include processor-executable instructions that are stored on at least one non-transitory computer-readable medium (memory). The computer programs may also include or rely on stored data. The computer programs may encompass a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, etc. As such, the one or more processors may be configured to execute the processor executable instructions. The computer programs may include: (i) descriptive text to be parsed, such as HTML (hypertext markup language) or XML (extensible markup language), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler, etc. As examples only, source code may be written using syntax from languages including C, C++, C#, Objective-C, Haskell, Go, SQL, R, Lisp, Java®, Fortran, Perl, Pascal, Curl, OCaml, Javascript®, HTML5, Ada, ASP (active server pages), PHP, Scala, Eiffel, Smalltalk, Erlang, Ruby, Flash®, Visual Basic®, Lua, and Python®. Further, at least one embodiment of the invention relates to the non-transitory computer-readable storage medium including electronically readable control information (processor executable instructions) stored thereon, configured such that when the storage medium is used in a controller of a device, at least one embodiment of the method may be carried out. 
The computer readable medium or storage medium may be a built-in medium installed inside a computer device main body or a removable medium arranged so that it can be separated from the computer device main body. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium is therefore considered tangible and non-transitory. Non-limiting examples of the non-transitory computer-readable medium include, but are not limited to, rewriteable non-volatile memory devices (including, for example, flash memory devices, erasable programmable read-only memory devices, or mask read-only memory devices); volatile memory devices (including, for example, static random access memory devices or dynamic random access memory devices); magnetic storage media (including, for example, an analog or digital magnetic tape or a hard disk drive); and optical storage media (including, for example, a CD, a DVD, or a Blu-ray Disc). Examples of media with a built-in rewriteable non-volatile memory include, but are not limited to, memory cards; and media with a built-in ROM include, but are not limited to, ROM cassettes; etc. Furthermore, various information regarding stored images, for example, property information, may be stored in any other form, or it may be provided in other ways. The term code, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. Shared processor hardware encompasses a single microprocessor that executes some or all code from multiple modules. Group processor hardware encompasses a microprocessor that, in combination with additional microprocessors, executes some or all code from one or more modules. 
References to multiple microprocessors encompass multiple microprocessors on discrete dies, multiple microprocessors on a single die, multiple cores of a single microprocessor, multiple threads of a single microprocessor, or a combination of the above. Shared memory hardware encompasses a single memory device that stores some or all code from multiple modules. Group memory hardware encompasses a memory device that, in combination with other memory devices, stores some or all code from one or more modules. The term memory hardware is a subset of the term computer-readable medium. 
The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks and flowchart elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer. Although described with reference to specific examples and drawings, modifications, additions and substitutions of example embodiments may be variously made according to the description by those of ordinary skill in the art. For example, the described techniques may be performed in an order different from that of the methods described, and/or components such as the described system, architecture, devices, circuit, and the like, may be connected or combined differently from the above-described methods, or results may be appropriately achieved by other components or equivalents. Most of the aforementioned components, in particular the identification unit, can be implemented in full or in part in the form of software modules in a processor of a suitable control device or of a processing system. An implementation largely in software has the advantage that even control devices and/or processing systems already in use can be easily upgraded by a software update in order to work in the manner according to at least one embodiment of the invention. 
At least one embodiment of the present invention provides a meta-learning system comprising an inner function computation module adapted to compute output data from applied input data according to an inner model function depending on model parameters; an error computation module adapted to compute errors indicating mismatches between the computed output data and target values; and a state update module adapted to update the model parameters of the inner model function according to an updated state which is updated based on a current state of the state update module in response to an error received from the error computation module, wherein the state update module is learned to adjust the model parameters of the inner model function such that a following training of the inner model function with training data is improved or even optimized. In a possible embodiment of the meta-learning system according to the first aspect of the present invention, in a first learning phase the state update module is learned using labelled learning data applied to adjust the model parameters of the inner model function of the inner function computation module. In a still further possible embodiment of the meta-learning system according to the first aspect of the present invention, in a subsequent training phase following the learning phase, the inner model function of the inner function computation module is trained using training data applied to the inner function computation module. In a further possible embodiment of the meta-learning system according to the first aspect of the present invention, the inner function computation module comprises a neural network. In a further possible embodiment of the meta-learning system according to the first aspect of the present invention, the inner function computation module comprises a deep neural network implementing the inner model function. 
In a further possible embodiment of the meta-learning system according to the first aspect of the present invention, the neural network comprises weights and biases changed according to the updated state of the state update module. In a still further possible embodiment of the meta-learning system according to the first aspect of the present invention, a state to parameter mapping module is configured to map the updated state of the state update module to model parameters used by the inner model function of the inner function computation module in the next time step. In a still further possible embodiment of the meta-learning system according to the first aspect of the present invention, the state to parameter mapping module is configured to map the updated state of the state update module to the model parameters used by the inner model function of the inner function computation module in the next time step according to a predetermined mapping function. In a possible embodiment of the meta-learning system according to the first aspect of the present invention, the predetermined mapping function is formed by an identity function. In a further possible embodiment of the meta-learning system according to the first aspect of the present invention, a state change penalizing module is provided adapted to compare the updated state with a current state of the state update module and to associate a state change penalty with an observed change in state. In a still further possible embodiment of the meta-learning system according to the first aspect of the present invention, the inner function computation module is trained to minimize the errors computed by the error computation module and to minimize changes in the state of the state update module expressed by associated state change penalties provided by the state change penalizing module. 
In a still further possible embodiment of the meta-learning system according to the first aspect of the present invention, a learning decision module is provided adapted to compute a learning strength based on the error computed by the error computation module and other data, in particular gradients, input data, or processed input data. In a still further possible embodiment of the meta-learning system according to the first aspect of the present invention, a state combination module is provided adapted to combine the current state and the updated state received from the state update module using the learning strength provided by the learning decision module to adjust the updated state supplied to the state to parameter mapping module. In a still further possible embodiment of the meta-learning system according to the first aspect of the present invention, a learning strength penalizing module is provided adapted to associate a penalty with a current magnitude of the learning strength. In a still further possible embodiment of the meta-learning system according to the first aspect of the present invention, the inner model function of the inner function computation module is trained to minimize the errors computed by the error computation module. In a further possible embodiment of the meta-learning system according to the first aspect of the present invention, the state update module is configured to update its state depending on the gradient of the error with respect to the model parameters. FIGS. 1 and 2 illustrate schematic diagrams for illustrating two possible variants or embodiments of a meta-learning system 1 according to an aspect of the present invention. The meta-learning system 1 comprises as main components an inner function computation module (IFCM) 2, an error computation module (ECM) 3 and a state update module (SUM) 4. These modules 2, 3, 4 are provided in both embodiments as illustrated in FIGS. 1 and 2. 
The inner function computation module (IFCM) 2 is adapted to compute output data y from applied input data x according to an inner model function F depending on model parameters p as shown in FIGS. 1 and 2. The error computation module (ECM) 3 of the meta-learning system 1 is configured to compute errors e indicating mismatches between the computed output data y and target values t indicating current target data. The computed errors e are supplied to the state update module (SUM) 4 as illustrated in FIGS. 1 and 2. The state update module 4 is adapted to update model parameters p of the inner model function F of the inner function computation module (IFCM) 2 according to an updated state si+1 which is updated based on the current state si of the state update module (SUM) 4 in response to the calculated error e received from the error computation module (ECM) 3. The state update module (SUM) 4 is learned to adjust the model parameters p of the inner model function F such that a following training of the inner model function F within the inner function computation module (IFCM) 2 with training data is improved or even optimized. The inner model function F of the inner function computation module (IFCM) 2 can transform current input data xi at time step i into output data yi for the respective time step. The inner model function F also depends on parameters that are chosen by an optimizer formed by the state update module (SUM) 4 which is trained. The output data yi depends on the input data xi and the adjusted parameters pi as follows:

yi = F(xi, pi)  (1)

The inner model function F of the inner function computation module (IFCM) 2 comprises in a possible embodiment a neural network NN. This neural network NN can be formed in a possible implementation by a deep neural network comprising several layers. The deep neural network can comprise dense, convolution and other layers. In this case, the weights and biases of the neural network NN form the parameters p for the meta-learning system 1. 
The neural network NN comprises weights w and biases b which can be changed according to the updated state s of the state update module (SUM) 4. Parameter updates, which correspond to training in a normal system, are performed in a possible embodiment based on the value provided by an error function which indicates a mismatch between the computed output yi and the target value ti at time step i. The error function ei computed by the error computation module (ECM) 3 can be expressed as follows:

ei = e(yi, ti)  (2)

The state update module (SUM) 4, which forms an optimizing unit of the system 1, receives the calculated error value ei and also has access in a possible embodiment to the gradient of the calculated error value ei with respect to the parameters pi. In each time step, the state update module (SUM) 4 can compute a new state si+1 based on the current state si. The new state si+1 can be mapped to the parameters pi+1 which are used by the inner model function F in the next time step via a mapping function h. As illustrated in the embodiments of FIG. 1 and FIG. 2, the meta-learning system 1 comprises in a possible embodiment a state to parameter mapping module (MM) 5 adapted to map the updated state si+1 of the state update module (SUM) 4 to the model parameters pi+1 used by the inner model function F of the inner function computation module (IFCM) 2 in the next time step. The state to parameter mapping module (MM) 5 performs this mapping according to a predetermined mapping function h:

pi+1 = h(si+1)  (3)

In a possible embodiment, the predetermined mapping function h can also be formed by an identity function. In this embodiment, the optimizer formed by the state update module (SUM) 4 directly produces the new parameters pi+1 for the inner function computation module (IFCM) 2. 
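To illustrate how equations (1) to (3) interact in one time step, the following minimal Python sketch wires together toy stand-ins for the inner function computation module (IFCM), the error computation module (ECM), the state update module (SUM) and the state to parameter mapping module (MM). All function bodies are illustrative assumptions (a linear inner model, squared error, a fixed gradient-free update rule), not the learned modules of the embodiments.

```python
# Minimal sketch of one pass through the loop of equations (1)-(3).
# The concrete function bodies are illustrative stand-ins, not the patented models.

def inner_function(x, p):      # eq. (1): yi = F(xi, pi); here a linear toy model
    return p * x

def error(y, t):               # eq. (2): ei = e(yi, ti); here a squared error
    return (y - t) ** 2

def state_update(s, e):        # SUM: computes the new state si+1 from si and ei
    return s - 0.1 * e         # toy rule; in the embodiments this module is learned

def state_to_params(s):        # eq. (3): pi+1 = h(si+1); here h is the identity
    return s

s = 1.0                        # initial state of the state update module (SUM)
p = state_to_params(s)
for x, t in [(1.0, 2.0), (2.0, 4.0)]:
    y = inner_function(x, p)   # IFCM computes the output
    e = error(y, t)            # ECM computes the mismatch
    s = state_update(s, e)     # SUM produces the updated state
    p = state_to_params(s)     # MM maps the state to the next parameters
```

Disabling the last two lines of the loop body corresponds to switching the system from training mode to inference mode, as described further below.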
By updating the parameters p, the system can adapt the behaviour of the function F to match a sequence of input data x and target values t as they are provided to the system. Once the optimizer SUM 4 has been run on enough labelled data, parameter updates can be disabled. Then, the system can predict outputs given input data x without needing further target values t to be supplied. The system 1 can be switched between an inference mode and a training mode depending on the availability of target values t as inputs. A key observation from biological systems used by the meta-learning system 1 according to an embodiment of the present invention is that learning tends to be expensive and should be avoided. For instance, humans or animals may experience negative emotions when their observations do not match their internal model of the world or their plans. These negative emotions tend to cause an adaption of the world model, i.e. learning takes place. This is associated with a higher attention state and an increased energy expenditure. On the other hand, if the world model predictions match the observations well, the respective animal is calmer and more content, which is associated with less energy expenditure. The meta-learning system 1 according to an embodiment of the present invention incorporates this insight by not only penalizing errors made but also by penalizing the learning itself. Learning is associated with change in the parameters pi during operation of the system. By penalizing both errors and learning, it is ensured in the meta-learning system 1 according to the present invention that only learning that leads to error reductions in the future is favoured. In the meta-learning system 1 as illustrated in the embodiment of FIG. 1, the state update module (SUM) 4 has a state s which is based on the current prediction error e, the previous state as well as other factors. 
In a possible embodiment, the updated new state si+1 depends, inter alia, on the current state si and the current prediction error ei provided by the error computation module (ECM) 3 as follows:

si+1 = s(si, ei, . . . )  (4)

The new updated state si+1 of the state update module (SUM) 4 depends on the current state si, the current prediction error ei and may depend on other factors as indicated in equation (4) above. The updated new state si+1 can be combined in a possible embodiment with the existing current state si as performed in the embodiment of FIG. 2. Further, the newly calculated state si+1 can form the input to the state to parameter mapping module (MM) 5 which produces the actual parameters pi+1 for the inner model function F as indicated in equation (3) above. This makes it possible to have complex models for the inner function F with many parameters while the state which is the output of the state update module (SUM) 4 can be less complex. An insight here is also that for a given class of problems and model complexity, the mapping function h of the state to parameter mapping module (MM) 5 can supply initial parameters p for the inner model function F. In the illustrated embodiment of FIG. 1, a state change penalizing module (SCPM) 6 is provided. Based on the current prediction error ei and potential other inputs, the state update module (SUM) 4 can compute a new updated state si+1 which becomes the current state in the next time step. Other possible inputs to the state update module (SUM) 4 can be, for example, target values t, the input data x or any transformation of it. This transformation can also depend on the model parameters p. In a special implementation, the state update module (SUM) 4 can dynamically choose how to transform the input to obtain a transformation for its specific task. 
As illustrated in FIG. 1, the current state of the state update module (SUM) 4 can be mapped to the model parameters p via the state to parameter mapping module (MM) 5 according to the predetermined mapping function h as indicated in equation (3). The model parameters pi determine the operation of the inner model function F. In the illustrated embodiment of FIG. 1, the state change penalizing module (SCPM) 6 is adapted to compare the newly generated state si+1 and the current state si of the state update module (SUM) 4 and to associate a penalty zi with a change in state s. This is because the degree to which the state s of the system 1 changes over time is seen as how much the system 1 learns. In the meta-learning system 1 according to the present invention, this learning is penalized (all other things being equal) such that the system 1 strives to produce good results without performing much learning. The objective function for training the meta-learning system 1 consequently is a combination of minimizing the overall prediction error ei but also the overall change in state as expressed by the state change penalty zi. In a possible embodiment, the two goals can be weighted using a factor α such that:

Loss = Σi (ei + α zi)  (5)

The state change penalizing module (SCPM) 6 is mainly used for training the meta-learning system 1 as its output forms part of the loss for the meta-learning optimization problem. The inner model function F of the inner function computation module (IFCM) 2 can be trained to minimize the errors e computed by the error computation module (ECM) 3 and simultaneously to minimize changes in the state s of the state update module (SUM) 4 expressed by the associated state change penalties zi calculated by the state change penalizing module (SCPM) 6. FIG. 2 shows a second example embodiment of the meta-learning system 1 according to the present invention. 
Similar to the embodiment in FIG. 1, the meta-learning system 1 comprises in the illustrated embodiment of FIG. 2 an inner function computation module (IFCM) 2, an error computation module (ECM) 3, a state update module (SUM) 4 and a state to parameter mapping module (MM) 5. In the second example embodiment of FIG. 2, the goal of penalizing the current amount learned by the system 1 is achieved by using a learning decision module (LDM) 7. Based on the calculated prediction error e and other factors, the learning decision module (LDM) 7 can compute values di in [0, 1] that signal the learning strength. The state update module (SUM) 4 is adapted to compute a new state si+1 as in the embodiment of FIG. 1. However, in the meta-learning system 1 as illustrated in the embodiment of FIG. 2, the previous state si and the new current state si+1 are combined in a state combination module (SCM) 8 taking into account the learning strength di. For example, a calculated learning strength di = 0 may lead to the new calculated state si+1 being ignored and the output being equal to the previous old state si. In contrast, if the learning strength di calculated by the learning decision module (LDM) 7 is di = 1, this can lead to the output of the state combination module (SCM) 8 being the new state si+1. An example combination rule implemented by the state combination module (SCM) 8 can be for instance:

si+1′ = di × si+1 + (1 − di) × si  (6)

The state combination module (SCM) 8 of the embodiment illustrated in FIG. 2 is adapted to combine the current state si and the updated state si+1 received from the state update module (SUM) 4 using the learning strength di provided by the learning decision module (LDM) 7 to adjust the updated state si+1′ applied to the state to parameter mapping module (MM) 5. 
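The combination rule of equation (6) can be sketched as follows. The states are scalars here for illustration; in the embodiments the state of the state update module may be higher-dimensional, in which case the same rule applies elementwise.

```python
# Sketch of equation (6): s'_{i+1} = d_i * s_{i+1} + (1 - d_i) * s_i.
# d = 0 keeps the previous state (no learning takes place);
# d = 1 fully adopts the new state produced by the SUM.

def combine_states(s_old, s_new, d):
    return d * s_new + (1.0 - d) * s_old
```

With d = 0.5 the two states are averaged, realizing a partial update whose strength the learning decision module controls continuously.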
In the illustrated embodiment of FIG. 2, the meta-learning system 1 can comprise a learning strength penalizing module (LSPM) 9 adapted to compare an updated learning strength di+1 and a current learning strength di and to associate a learning strength penalty with an observed change of the learning strength. Penalizing the amount learned in addition to the overall error can be achieved using the learning strength d in the training stage of the meta-learning system 1 such that:

Loss = Σi (ei + α di)  (7)

becomes minimal. The inner model function F of the inner function computation module (IFCM) 2 can be trained to minimize the errors ei computed by the error computation module (ECM) 3 and to minimize the overall sum over all time steps of the learning strength d provided by the learning decision module (LDM) 7, expressed by the associated learning strength penalties provided by the learning strength penalizing module (LSPM) 9. In a possible embodiment of the meta-learning system 1 according to the present invention, two phases can be distinguished. In a first learning phase, the state update module (SUM) 4 is learned using labelled learning data applied to adjust the model parameters p of the inner model function F within the inner function computation module (IFCM) 2. Further, in a subsequent training phase following the learning phase, the inner model function F of the inner function computation module (IFCM) 2 is then trained using training data applied to the inner function computation module (IFCM) 2. In a possible embodiment, training of the meta-learning system 1 can be done using sequences of inputs x and outputs y from many different problems. It can be useful to combine sequences from different problems into mini-batches to perform a stochastic gradient descent. Given enough different training problems, the meta-learning system 1 can then generalize and learn from sequences from unseen problems. As an example of applying the system to a class of problems, the MNIST image data set can be used. 
This image data set contains handwritten images of the digits 0 to 9. The inner model function F can for example be set up as a binary classifier for distinguishing two different digits. The system is supposed to learn from a short sequence of labelled examples. Once the system has learned, the updating may be turned off and the inner model function F of the inner function computation module (IFCM) 2 can be run on its own. For example, given ten digits, the number of different binary classification problems between digits which can be constructed from this is 10×9=90. For training the meta-learning system 1, a subset of problems can be used (e.g. 1 vs. 3, 4 vs. 5). A disjoint subset of problems can be used for evaluation such that the evaluation set contains only unseen digits (e.g. all problems with digits 0 to 6 for training and all problems with digits 7 to 9 for evaluation). For training, one fixed-length sequence of input/output pairs can act as a single input to the meta-learning system 1. Multiple such inputs from different problems can then be put into a mini-batch and used for updating the parameters p of the meta-learning system 1 according to the above-described loss functions. After training, the system can be applied to variable-length sequences of unseen example sequences from unseen digit classes. It can be shown from experiments that the meta-learning system 1 can reach average classification rates of 90% on sequences with a length of 64. This means that with only 64 examples, the system is able to learn the unknown problem such that over all 64 sequence examples the mean rate of correct classifications reaches that level. The meta-learning system 1 according to an embodiment of the present invention can also be used in a similar way, for instance with the CIFAR100 data set. The meta-learning system 1 according to an embodiment of the present invention is not limited to classification problems but can learn to associate any outputs with inputs. 
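The enumeration of the 90 ordered binary digit problems and the disjoint train/evaluation split described above can be sketched as follows; the split at digit 6 simply follows the example given in the text.

```python
# Enumerate all ordered binary classification problems over the ten MNIST digits
# (10 x 9 = 90 ordered pairs) and split them so that the evaluation problems
# involve only digits unseen during training.

from itertools import permutations

all_problems = list(permutations(range(10), 2))
train_problems = [(a, b) for a, b in all_problems if a <= 6 and b <= 6]
eval_problems = [(a, b) for a, b in all_problems if a >= 7 and b >= 7]
```

Because the two subsets share no digit, any classification performance on the evaluation problems reflects generalization to genuinely unseen digit classes rather than memorization.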
The meta-learning system1according to an embodiment of the present invention can be used but is not limited to segmentation, regression, registration, etc. The meta-learning system1shows a remarkable ability to quickly learn to solve unseen problems from the same problem class as problems that have been used during training. In the meta-learning system1according to an embodiment of the present invention, the amount learned is penalized in addition to the overall calculated prediction error e. By penalizing the amount learned in addition to the overall prediction error e of the system1, the meta-learning system1incentivizes not to make unnecessary updates while still minimizing the error e. The meta-learning system1according to an embodiment of the present invention can operate in an online setting where input data is changing all the time and can therefore be considered as unseen data. In this setting, the state update module (SUM)4does perform only updates that will help in predicting future unseen data. Penalizing updates is therefore a way to incentivize the quality of updates of the system1further than just by penalizing the overall error e. Accordingly, the meta-learning system1according to an embodiment of the present invention is able to learn very quickly to solve problems by looking at only a few examples or data sets of input/output data pairs. The meta-learning system1can be used in many areas, especially in any use case where there is little training data but much data for similar problems. The meta-learning system1is adapted to provide meta-learning, e.g. learning to learn. A network or model is trained that will train another model in its inference. Instead of employing standard mathematical optimizers for training, the meta-learning system1according to an embodiment of the present invention allows to learn the optimizer itself. 
The meta-learning system1according to an embodiment of the present invention can quickly learn to solve a problem given a limited training data set. The meta-learning system1according to an embodiment of the present invention allows to train a meta-model on a set of problems which also works well when applied to an unseen problem. The meta-model with attributes can be constructed and applied to complex real world problems in an efficient way. This can also be performed online. The meta-learning system1according to an embodiment of the present invention can be used for a wide range of use cases and/or different problem classes including landmark detection, computer-aided diagnostics, segmenting image registration or any kind of classification. The patent claims of the application are formulation proposals without prejudice for obtaining more extensive patent protection. The applicant reserves the right to claim even further combinations of features previously disclosed only in the description and/or drawings. References back that are used in dependent claims indicate the further embodiment of the subject matter of the main claim by way of the features of the respective dependent claim; they should not be understood as dispensing with obtaining independent protection of the subject matter for the combinations of features in the referred-back dependent claims. Furthermore, with regard to interpreting the claims, where a feature is concretized in more specific detail in a subordinate claim, it should be assumed that such a restriction is not present in the respective preceding claims. Since the subject matter of the dependent claims in relation to the prior art on the priority date may form separate and independent inventions, the applicant reserves the right to make them the subject matter of independent claims or divisional declarations. 
They may furthermore also contain independent inventions which have a configuration that is independent of the subject matters of the preceding dependent claims. None of the elements recited in the claims are intended to be a means-plus-function element within the meaning of 35 U.S.C. § 112(f) unless an element is expressly recited using the phrase “means for” or, in the case of a method claim, using the phrases “operation for” or “step for.” Example embodiments being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the present invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.
11861501 | DESCRIPTION OF EMBODIMENTS To make the objectives, technical solutions, and advantages of the present disclosure clearer and more comprehensible, the present disclosure is further described below in detail with reference to the accompanying drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely used for explaining the present disclosure, but are not intended to limit the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure. First, several terms described in the embodiments of the present disclosure are introduced. Semantic segmentation refers to dividing an image into several non-overlapping regions according to features such as the gray scale, color, texture, and shape of the image, and enabling the features to present similarity in the same region, and present obvious difference in different regions. A three-dimensional image is an image added with a spatial dimension (for example, a depth dimension) or a temporal dimension compared to a two-dimensional image. For example, a three-dimensional medical image may be considered as a three-dimensional image added with a depth dimension, and a video may be considered as a three-dimensional image added with a temporal dimension. A target object is an object belonging to a foreground region in semantic segmentation. For a three-dimensional medical image, the target object may be a target organ. The target organ is an internal organ or tissue of a human body, and/or an internal organ or tissue of an animal, such as the heart, lung, liver, spleen, and stomach. For a two-dimensional medical image, the target object may be a target organ. 
In the embodiments of the present disclosure, description is made mainly by using an example in which the target object is a human organ in a three-dimensional medical image. In a medical image, a shape or volume change of human organs or tissues has an important implication for clinical diagnosis. To avoid erroneous determinations that may arise during manual analysis, semantic segmentation is performed on a medical image by using a convolutional network model in the related art. That is, the medical image is inputted into the convolutional network model, features of corresponding human organs or tissues in the medical image are extracted by using the constructed convolutional network model, and the features of the human organs or tissues are classified, to obtain the specific regions in which the human organs or tissues are located in the medical image. A human organ or tissue region and a background region can be distinguished in the medical image after the semantic segmentation, and a doctor can then perform clinical diagnosis. The “medical image” herein may include an X-ray image obtained by irradiating a human body with X-rays, a CT image obtained through computerized tomography (CT), and an MRI image obtained through magnetic resonance imaging (MRI). A medical image acquired by using a medical image acquisition device may be a 2D medical image, or may be a 3D medical image. In an exemplary related art, a Pspnet is used for performing semantic segmentation on a 2D medical image. The Pspnet performs convolution on an inputted medical image by using convolution kernels of various sizes, extracts features of the medical image to form feature maps of various sizes, and finally performs interpolation on the outputted feature maps to scale them up, to obtain a semantic segmentation result.
For example, as shown inFIG.1, a medical image101is inputted into a Pspnet network model, to extract features of the medical image101and obtain a first feature map102having the same size as the medical image101. Then, the Pspnet network model performs convolution calculation on the simplified first feature map102respectively by using convolution kernels of four different scales, to obtain four feature submaps corresponding to sizes of the convolution kernel. Sizes of the four feature submaps are different from each other. Next, the sizes of the four feature submaps of different sizes are scaled up through interpolation by using up-sampling to a size of the medical image101, and the four scaled-up feature submaps are connected to the first feature map102, to obtain a second feature map103. Finally, a final probability map104is obtained after semantic segmentation is performed on the second feature map103through convolution. However, the Pspnet can only perform semantic segmentation on a 2D medical image, and cannot perform semantic segmentation on a 3D medical image. When the medical image is a 3D medical image that has relatively high definition and detection accuracy, such as a CT image or an MRI image, if semantic segmentation is forcibly performed on the 3D medical image by using the Pspnet, a “fault phenomenon” may easily occur, and edge fitting after image segmentation cannot meet a requirement. In addition, the Pspnet cannot process the 3D medical image either. The embodiments of the present disclosure provide a semantic segmentation method and apparatus for a three-dimensional image, a terminal, and a storage medium, to resolve the problem in the related art. In the method, semantic segmentation of a three-dimensional image can be implemented. Typically, the three-dimensional image is a three-dimensional medical image or a video. 
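The Pspnet-style multi-scale pooling just described can be sketched in simplified form. The bin sizes and the nearest-neighbour upsampling below are assumptions for illustration; the real network pools learned feature maps and scales them up by interpolation before the final convolution:

```python
import numpy as np

def pyramid_pool(feature_map, bin_sizes=(1, 2, 3, 6)):
    """Pspnet-style pyramid pooling sketch: average-pool a (H, W) feature map
    into several grid sizes, upsample each grid back to (H, W) by nearest
    neighbour, and stack the results together with the original map."""
    h, w = feature_map.shape
    maps = [feature_map]
    for b in bin_sizes:
        pooled = np.zeros((b, b))
        ys = np.linspace(0, h, b + 1, dtype=int)
        xs = np.linspace(0, w, b + 1, dtype=int)
        for i in range(b):
            for j in range(b):
                pooled[i, j] = feature_map[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean()
        # nearest-neighbour upsampling back to the input size
        rows = np.minimum(np.arange(h) * b // h, b - 1)
        cols = np.minimum(np.arange(w) * b // w, b - 1)
        maps.append(pooled[rows][:, cols])
    return np.stack(maps)  # shape (1 + len(bin_sizes), H, W)

fm = np.random.rand(12, 12)
fused = pyramid_pool(fm)  # fused.shape == (5, 12, 12)
```

The coarsest bin (1×1) summarizes global context while finer bins keep local detail, which is the intuition behind feature submaps of several different sizes being combined into one map.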
In the embodiments of the present disclosure, description is made by using an example in which the three-dimensional image is a three-dimensional medical image. FIG.2is a schematic diagram of an implementation environment according to an exemplary embodiment of the present disclosure. A medical image acquisition device100and a computer device200are included inFIG.2. It can be understood that, medical image is used as an exemplary embodiment and the disclosed method and system can be applied to other 3D images of other entities as well, such as a 3D image of a fossil. The medical image acquisition device100is configured to acquire a medical image of a human organ or tissue. The medical image includes a two-dimensional medical image and a three-dimensional medical image. The medical image acquisition device100is further configured to transmit the acquired medical image to the computer device200. The computer device200is configured to receive the medical image, and perform semantic segmentation on the medical image. In some embodiments, the medical image acquisition device100may be a device independent of the computer device200, or may be a device combined into the computer device200as a whole. The computer device200includes a central processing unit (CPU)210and a memory220. The CPU210is configured to invoke a neural network model for implementing semantic segmentation. The memory220is configured to store the neural network model for implementing semantic segmentation. The neural network model includes a first segmentation model221, a second segmentation model222, a third segmentation model223, and an adaptive fusion model224. In some embodiments, the first segmentation model221, the second segmentation model222, and the third segmentation model223are two-dimensional models for performing semantic segmentation based on a convolutional neural network. 
The adaptive fusion model224is a three-dimensional model for performing adaptive fusion on semantic segmentation results of the three two-dimensional semantic segmentation models to obtain a three-dimensional semantic segmentation result. The first segmentation model221is used for performing two-dimensional semantic segmentation on two-dimensional slice images of an x axis, to obtain a distribution probability map of a target organ on an x-axis directional plane. The second segmentation model222is used for performing two-dimensional semantic segmentation on two-dimensional slice images of a y axis, to obtain a distribution probability map of the target organ on a y-axis directional plane. The third segmentation model223is used for performing two-dimensional semantic segmentation on two-dimensional slice images of a z axis, to obtain a distribution probability map of the target organ on a z-axis directional plane. The adaptive fusion model224is used for performing three-dimensional fusion on the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution binary image of the target organ. In some embodiments of the present disclosure, slicing is performed on a three-dimensional image according to three directional planes in which three-dimensional coordinate axes are located, semantic segmentation is then performed on two-dimensional slice images of the three directional planes by using three segmentation models, to obtain distribution probability maps of the three directional planes, and next, three-dimensional fusion is performed on the three distribution probability maps by using an adaptive fusion model, to obtain a final three-dimensional distribution binary image corresponding to a target object. 
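The per-axis slicing that feeds the three segmentation models can be sketched with plain numpy indexing; `slice_volume` is an assumed helper name, and the axis-to-plane correspondence follows the description above:

```python
import numpy as np

def slice_volume(volume):
    """Slice an (X, Y, Z) volume along the three coordinate axes into three
    stacks of two-dimensional slice images, one stack per directional plane."""
    x_slices = [volume[i, :, :] for i in range(volume.shape[0])]
    y_slices = [volume[:, j, :] for j in range(volume.shape[1])]
    z_slices = [volume[:, :, k] for k in range(volume.shape[2])]
    return x_slices, y_slices, z_slices

vol = np.zeros((4, 5, 6))
xs, ys, zs = slice_volume(vol)
# 4 x-slices of shape (5, 6), 5 y-slices of (4, 6), 6 z-slices of (4, 5)
```

Each stack is then handed to the segmentation model for its axis, so the three models see the same volume from three orthogonal viewpoints.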
FIG.3is a flowchart of a semantic segmentation method for a three-dimensional image according to an exemplary embodiment of the present disclosure. The method may be applied to the implementation environment shown inFIG.2, and includes: Step301. A terminal obtains a three-dimensional image. In some embodiments, the terminal acquires a three-dimensional image by using an image acquisition device. Step302. The terminal performs slicing on the three-dimensional image according to three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images of an x axis, two-dimensional slice images of a y axis, and two-dimensional slice images of a z axis. After obtaining the three-dimensional image, the terminal performs slicing on the three-dimensional image according to three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images of an x axis, two-dimensional slice images of a y axis, and two-dimensional slice images of a z axis. An x-axis directional plane is a plane on which the x axis and the z axis are located, a y-axis directional plane is a plane on which the y axis and the z axis are located, and a z-axis directional plane is a plane on which the x axis and the y axis are located. Step303. The terminal invokes a first segmentation model to perform semantic segmentation on the two-dimensional slice images of the x axis, to obtain a distribution probability map of a target object on an x-axis directional plane. The CPU invokes a first segmentation model stored in the memory to perform semantic segmentation on the two-dimensional slice images of the x axis. 
The first segmentation model completes a process of performing semantic segmentation on the two-dimensional slice images of the x axis according to features such as the gray scale, color, texture, and shape of the target object in the two-dimensional slice images of the x axis, thereby outputting a distribution probability map of the target object on an x-axis directional plane. Step 304. The terminal invokes a second segmentation model to perform semantic segmentation on the two-dimensional slice images of the y axis, to obtain a distribution probability map of the target object on a y-axis directional plane. The CPU invokes a second segmentation model stored in the memory to perform semantic segmentation on the two-dimensional slice images of the y axis. The second segmentation model completes a process of performing semantic segmentation on the two-dimensional slice images of the y axis according to features such as the gray scale, color, texture, and shape of the target object in the two-dimensional slice images of the y axis, thereby outputting a distribution probability map of the target object on a y-axis directional plane. Step 305. The terminal invokes a third segmentation model to perform semantic segmentation on the two-dimensional slice images of the z axis, to obtain a distribution probability map of the target object on a z-axis directional plane. The CPU invokes a third segmentation model stored in the memory to perform semantic segmentation on the two-dimensional slice images of the z axis. The third segmentation model completes a process of performing semantic segmentation on the two-dimensional slice images of the z axis according to features such as the gray scale, color, texture, and shape of the target object in the two-dimensional slice images of the z axis, thereby outputting a distribution probability map of the target object on a z-axis directional plane. Step 306.
The terminal invokes an adaptive fusion model to perform three-dimensional fusion on the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution binary image of the target object. The CPU invokes an adaptive fusion model stored in the memory to perform adaptive fusion on the three obtained distribution probability maps corresponding to the x axis, the y axis, and the z axis. Because the adaptive fusion model fuses two-dimensional distribution probability maps in three different dimensions, much background noise may be suppressed, and edges of the target object are smoothly and accurately segmented, to finally obtain a three-dimensional distribution binary image of the target object. An example in which the three-dimensional image is a three-dimensional medical image is used. Referring toFIG.4, a computer device respectively performs segmentation on an inputted three-dimensional medical image401on an x-axis directional plane, a y-axis directional plane, and a z-axis directional plane, to obtain two-dimensional slice images402of an x axis, two-dimensional slice images403of a y axis, and two-dimensional slice images404of a z axis, then performs two-dimensional semantic segmentation on the three groups of two-dimensional slice images, to obtain two-dimensional distribution probability maps405to407of the target object on the three directional planes, and then performs three-dimensional fusion on the three two-dimensional distribution probability maps405to407by using an adaptive fusion model, to obtain a three-dimensional distribution binary image408(3D Mask) of the target object. 
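The patent's adaptive fusion model is a learned three-dimensional network; the sketch below is only a fixed-weight stand-in (weighted averaging plus thresholding, both hypothetical choices) that illustrates the fusion step's inputs and its binary-mask output:

```python
import numpy as np

def fuse_probability_maps(p_x, p_y, p_z, weights=(1/3, 1/3, 1/3), threshold=0.5):
    """Stand-in for the learned adaptive fusion model: combine the three
    per-axis distribution probability maps (each covering the full volume)
    by weighted averaging and threshold into a 3D binary mask (3D Mask)."""
    fused = weights[0] * p_x + weights[1] * p_y + weights[2] * p_z
    return (fused >= threshold).astype(np.uint8)

p = np.full((2, 2, 2), 0.9)  # two axes confident the voxel is foreground
q = np.full((2, 2, 2), 0.1)  # one axis sees mostly background
mask = fuse_probability_maps(p, p, q)  # average ≈ 0.63, above threshold
```

Combining evidence from three orthogonal views is what suppresses background noise: a voxel that looks like foreground in only one plane is voted down by the other two, which the learned model does adaptively rather than with fixed weights.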
In conclusion, in the method provided in some embodiments, slicing is performed on an obtained three-dimensional image according to the three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images corresponding to three directional planes, and then two-dimensional distribution probability maps corresponding to the three directional planes are obtained by using three segmentation models corresponding to the three directional planes, so that a terminal implements two-dimensional semantic segmentation on a three-dimensional medical image. Then, three-dimensional fusion is performed on the three distribution probability maps by using an adaptive fusion model, to obtain a three-dimensional distribution binary image of the target object, so that the problem in the related art that the Pspnet network model is only applicable to semantic segmentation on a 2D natural image, and cannot perform semantic segmentation on a 3D medical image is resolved. Therefore, semantic segmentation can be performed on the 3D medical image by using three 2D segmentation models and one adaptive fusion model, and because the adaptive fusion model fuses two-dimensional distribution probability maps in three different dimensions, background noise is effectively suppressed during three-dimensional fusion, so that edges of the target object are smoothly and accurately segmented. FIG.5is a flowchart of a semantic segmentation method for a three-dimensional image according to another exemplary embodiment of the present disclosure. The method may be applied to the implementation environment shown inFIG.2. In some embodiments, description is made by using an example in which the three-dimensional image is a three-dimensional medical image and the target object is a target organ. The method includes the following steps: Step501. A terminal obtains a three-dimensional medical image. 
The computer device acquires a three-dimensional medical image by using a medical image acquisition device, and the three-dimensional medical image includes a three-dimensional target organ, and a background region other than the target organ. Step502. The terminal performs slicing on the three-dimensional medical image according to three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images of an x axis, two-dimensional slice images of a y axis, and two-dimensional slice images of a z axis. Therefore, after obtaining the three-dimensional medical image, the computer device performs slicing on the three-dimensional medical image according to three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images of an x axis, two-dimensional slice images of a y axis, and two-dimensional slice images of a z axis. In some embodiments, because a distribution location of each type of target organ in the three-dimensional medical image is relatively fixed, the computer device further reads pre-stored first clinical prior knowledge, the first clinical prior knowledge being used for indicating a target value range of a candidate appearing location of the target organ in each two-dimensional slice image. For example, a transverse coordinate range of a candidate appearing location of a target organ A in the two-dimensional slice images of the x axis is [a1, a2], and a longitudinal coordinate range of a candidate appearing location of the target organ A in the two-dimensional slice images of they axis is [b1, b2]. The target value range is used for performing first noise filtering in a post-processing process. 
In some embodiments, because an external shape of each type of target organ is an ellipsoidal shape, the computer device further reads pre-stored second clinical prior knowledge, the second clinical prior knowledge being used for indicating a 3D ellipsoidal model of the target organ. For example, the computer device obtains, through statistics by using the second clinical prior knowledge, possible longest axes and shortest axes of the target organ on the three x-axis, y-axis, and z-axis directional planes, thereby pre-establishing a three-dimensional ellipsoidal model of the target organ. The three-dimensional ellipsoidal model indicates a candidate appearing location of the target organ in the three-dimensional medical image, and the three-dimensional ellipsoidal model is used for performing second noise filtering in the post-processing process. Step503. The terminal performs, when an aspect ratio of a two-dimensional slice image exceeds a preset ratio range, scanning-box segmentation on the two-dimensional slice image according to a square border formed by a short side length of the two-dimensional slice image, to obtain several to-be-processed two-dimensional slice images. Because sizes of inputted images of segmentation models corresponding to the three coordinate axes are generally a square size, and in some implementations, a two-dimensional slice image is extremely long and narrow, the target organ is severely deformed after the long and narrow two-dimensional slice image is directly converted into an image of the square size, resulting in a failure in semantic segmentation. Therefore, the computer device may further process the two-dimensional slice image in the following image pre-processing manner. In some embodiments, when an aspect ratio of an obtained two-dimensional slice image is within the preset ratio range, the computer device converts a size of the two-dimensional slice image into an input size that meets a segmentation model. 
The preset ratio range may be [⅓, 3]. In some embodiments, as shown inFIG.6, when an aspect ratio of an obtained two-dimensional slice image exceeds the preset ratio range, that is, the aspect ratio of the two-dimensional slice image exceeds [⅓, 3], it is considered that the two-dimensional slice image is extremely long and narrow. If the computer device directly converts an original size of the two-dimensional slice image601into an input size602, and the input size is a size meeting a pixel size of a segmentation model, a target organ in the two-dimensional slice image601is squeezed into a bar, resulting in an inaccurate final prediction result. In this case, as shown inFIG.7, when training a segmentation model, the computer device performs segmentation on a two-dimensional slice image701that is obtained according to a sample image, according to a square border formed by a short side length of the two-dimensional slice image701, to obtain an intermediate to-be-processed two-dimensional slice image702. The computer device converts a size of the intermediate to-be-processed two-dimensional slice image702into an input size703of the segmentation model for training. In a test process or a prediction process, the computer device performs scanning-box segmentation on a two-dimensional slice image704that is obtained according to the three-dimensional medical image, according to a square border formed by a short side length of the two-dimensional slice image704, to obtain several to-be-processed two-dimensional slice images705(for example, three images inFIG.7). Then, the computer device converts sizes of the several to-be-processed two-dimensional slice images705into an input size703of a segmentation model, and respectively inputs the several to-be-processed two-dimensional slice images705into the segmentation model for prediction. Step504. 
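The aspect-ratio check and scanning-box segmentation described above can be sketched as follows; the box representation (top, left, height, width) and the clamping of the last window are illustrative assumptions:

```python
def scanning_boxes(height, width, max_ratio=3.0):
    """If an H×W slice's aspect ratio is outside [1/max_ratio, max_ratio],
    split it into square windows whose side equals the short side, stepping
    along the long side; otherwise return the whole image for direct resizing."""
    ratio = width / height
    if 1.0 / max_ratio <= ratio <= max_ratio:
        return [(0, 0, height, width)]  # within the preset ratio range
    short = min(height, width)
    boxes = []
    if width > height:
        for x in range(0, width, short):
            x = min(x, width - short)   # clamp the last window inside the image
            boxes.append((0, x, short, short))
    else:
        for y in range(0, height, short):
            y = min(y, height - short)
            boxes.append((y, 0, short, short))
    return boxes

boxes = scanning_boxes(64, 256)  # ratio 4 exceeds [1/3, 3] -> four 64×64 windows
```

Each square window is then resized to the segmentation model's input size and predicted separately, so the target organ is never squeezed into a bar as in FIG. 6.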
The terminal invokes a first segmentation model to perform semantic segmentation on the two-dimensional slice images of the x axis, to obtain a distribution probability map of a target organ on an x-axis directional plane. The computer device invokes a first segmentation model stored in the memory to perform semantic segmentation on the two-dimensional slice images of the x axis. The first segmentation model completes a process of performing semantic segmentation on the two-dimensional slice images of the x axis according to features such as a distribution location, a size, and a shape of the target organ in the three-dimensional medical image, thereby outputting a distribution probability map of the target organ on an x-axis directional plane. In some embodiments, the first segmentation model includes a deep network encoding unit and a skip transfer decoding unit, the deep network encoding unit including n convolutional layers, and the skip transfer decoding unit including m deconvolution layers, both n and m being a positive integer. The deep network encoding unit is configured to perform down-sampling feature extraction on a two-dimensional slice image through the n convolutional layers, to obtain a down-sampled first intermediate feature map. The skip transfer decoding unit is configured to perform up-sampling processing on the first intermediate feature map and a second intermediate feature map through the m deconvolution layers, to obtain an up-sampled distribution probability map. The second intermediate feature map includes a feature map outputted by an ithconvolutional layer of the n convolutional layers, i being an integer less than or equal to n. In some embodiments, the deep network encoding unit is a neural network model constructed based on a residual network model, or the deep network encoding unit is a neural network model constructed based on another classification model, which is not limited in this embodiment. 
For example, as shown in FIG. 8, the computer device inputs the obtained two-dimensional slice image(s) 801 of the x axis into a deep network encoding unit 802 constructed based on a ResNet101 model. The deep network encoding unit 802 includes five convolutional layers: Conv1, Conv2_x, Conv3_x, Conv4_x, and Conv5_x. The size and quantity of the convolution kernels of each convolutional layer, and the stride of each convolution, are shown in Table 1. The x in a layer name represents a convolutional sublayer number belonging to that convolutional layer. TABLE 1: Name of convolutional layer — ResNet101. Conv1: 7 × 7, 64, stride 2. Conv2_x: 3 × 3 max pool, stride 2; then [1 × 1, 64 / 3 × 3, 64 / 1 × 1, 256] × 3 blocks. Conv3_x: [1 × 1, 128 / 3 × 3, 128 / 1 × 1, 512] × 4 blocks. Conv4_x: [1 × 1, 256 / 3 × 3, 256 / 1 × 1, 1024] × 23 blocks. Conv5_x: [1 × 1, 512 / 3 × 3, 512 / 1 × 1, 2048] × 3 blocks. As shown in Table 1, the Conv1 layer of the deep network encoding unit 802 includes 64 7×7 convolution kernels, and each convolution has a stride of 2. Conv2_x includes one convolutional sublayer and three first blocks that are cascaded. The first convolutional sublayer includes a 3×3 convolution kernel, each convolution has a stride of 2, and max pooling is performed once after the convolution of the first convolutional sublayer. The three first blocks located behind the first convolutional sublayer are the same. As shown in FIG. 9, the first block includes three convolutional sublayers.
A first convolutional sublayer901includes 64 1×1 convolution kernels, a second convolutional sublayer902includes 64 3×3 convolution kernels, a third convolutional sublayer903includes 256 1×1 convolution kernels, and an activation layer, e.g., a rectified linear unit (ReLU) layer, and a batch normalization (BN) layer (not shown in the figure) are connected behind each convolutional sublayer. In addition, the first block is further used for mapping, through a skip connection, pixels corresponding to a feature map outputted by the first convolutional sublayer of a previous layer to a feature map outputted by the third convolutional sublayer903, and perform activation through the ReLU layer, to obtain a feature map of an input of a next block. The ReLU layer is used for converting linear data obtained after the convolution into non-linear data, thereby improving an expression capability of the ResNet101 model. The BN layer is used for accelerating a convergence speed of the ResNet101 model, and a gradient diffusion problem of the ResNet101 model having deep layers is alleviated, so that the ResNet101 model is more stable and easier to be trained. Conv3_x includes four cascaded second blocks, and the four second blocks are the same. The second block has the same structure as the first block, and the second block may be understood with reference to the structure of the first block. The second block includes three convolutional sublayers. A fourth convolutional sublayer includes 128 1×1 convolution kernels, and each time of convolution has a stride of 2. A fifth convolutional sublayer includes 128 3×3 convolution kernels, a sixth convolutional sublayer includes 512 1×1 convolution kernels, and a ReLU layer and a BN layer are connected behind each convolutional sublayer. 
In addition, the second block is further used for mapping, through a skip connection, pixels corresponding to a feature map outputted by a previous block onto the feature map outputted by the sixth convolutional sublayer, and performing activation through the ReLU layer, to obtain the feature map that is input to a next block. Conv4_x includes 23 cascaded third blocks, and the 23 third blocks are identical. The third block has the same structure as the first block and may be understood with reference to the structure of the first block. The third block includes three convolutional sublayers. A seventh convolutional sublayer includes 256 1×1 convolution kernels, and each convolution has a stride of 1. To ensure that the area (also referred to as the receptive field) covered by the feature map outputted by each layer behind the seventh convolutional sublayer is not reduced, a dilation rate of 2 is used for the atrous convolution. An eighth convolutional sublayer includes 256 3×3 convolution kernels, a ninth convolutional sublayer includes 1024 1×1 convolution kernels, and a ReLU layer and a BN layer are connected behind each convolutional sublayer. In addition, the third block is further used for mapping, through a skip connection, pixels corresponding to a feature map outputted by a previous block onto the feature map outputted by the ninth convolutional sublayer, and performing activation through the ReLU layer, to obtain the feature map that is input to a next block. Atrous convolution, also referred to as dilated convolution, is a convolution manner that injects holes between the taps of a convolution kernel. Compared with common convolution, atrous convolution introduces a hyperparameter referred to as the "dilation rate", which defines the spacing between the values a convolution kernel samples when processing data.
Through atrous convolution processing, on one hand, the spatial scale of an image feature remains unchanged, thereby avoiding the information loss caused by shrinking the pixel content of the feature; on the other hand, the receptive field is expanded, thereby enabling more precise target detection. The receptive field is the region on the original image that a pixel of a feature map outputted by a hidden layer of a neural network maps back to. A larger receptive field means that the pixel aggregates a larger range of the original image, yielding a more global feature with a higher semantic level. Conv5_x includes three cascaded fourth blocks, and the three fourth blocks are identical. The fourth block has the same structure as the first block and may be understood with reference to the structure of the first block. The fourth block includes three convolutional sublayers. A tenth convolutional sublayer includes 512 1×1 convolution kernels, an eleventh convolutional sublayer includes 512 3×3 convolution kernels, a twelfth convolutional sublayer includes 2048 1×1 convolution kernels, and a ReLU layer and a BN layer are connected behind each convolutional sublayer. In addition, the fourth block is further used for mapping, through a skip connection, pixels corresponding to a feature map outputted by a previous block onto the feature map outputted by the twelfth convolutional sublayer, and performing activation through the ReLU layer, to obtain the feature map that is input to a next block. After features of the two-dimensional slice image801of the x axis are extracted through the five convolutional layers of the deep network encoding unit802, a first intermediate feature map (1) is obtained, and the first intermediate feature map (1) corresponds to the x-axis directional plane. For example, the first intermediate feature map (1) is a feature map obtained after 8-fold down-sampling.
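The effect of atrous (dilated) convolution on the receptive field can be illustrated with a one-dimensional sketch (illustrative Python; the "valid"-style sliding window is an assumption made for brevity):

```python
def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with holes: the kernel taps are spaced
    `dilation` apart, so a kernel of size k covers a receptive field of
    (k - 1) * dilation + 1 input samples without adding any parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return [
        sum(kernel[j] * x[start + j * dilation] for j in range(k))
        for start in range(len(x) - span + 1)
    ]

x = [float(i) for i in range(10)]
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1))  # receptive field 3
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2))  # receptive field 5, same kernel size
```

With dilation rate 2 the same 3-tap kernel sees a span of 5 input samples, which is the mechanism by which Conv4_x keeps its receptive field while using stride 1.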
In some embodiments, down-sampling is performed through pooling after Conv5_x. Because a huge difference in scale-range distribution easily occurs when a 3D image is cut into slice images, multi-scale or multi-resolution information needs to be added, and the kernel size for down-sampling is set to five values, namely, 1, 9, 19, 37, and 74. For example, the computer device then inputs the first intermediate feature map (1) into a skip transfer decoding unit803. The skip transfer decoding unit803includes two deconvolution layers. The computer device decodes the first intermediate feature map (1) in a stepwise manner through the deconvolution layers; decoding is performed twice, with a 2-fold up-sampling multiple each time. Decoding the first intermediate feature map (1) refers to performing skip connection and up-sampling processing on the first intermediate feature map (1) and a feature map outputted by a pre-determined layer in the deep network encoding unit802. In the first deconvolution layer, a skip connection and 2-fold up-sampling processing are performed on the first intermediate feature map (1) and a second intermediate feature map (2) outputted by the Conv3_x convolutional layer of the deep network encoding unit802, to obtain a 2-fold up-sampled first intermediate feature map (1′); then a skip connection and 2-fold up-sampling processing are performed on the up-sampled first intermediate feature map (1′) and a second intermediate feature map (2′) outputted by the Conv1 convolutional layer of the deep network encoding unit802, to obtain a 4-fold up-sampled feature map and, from it, the final distribution probability map. In some embodiments, the sizes of the first intermediate feature map and the second intermediate feature map that are in a skip connection are the same. The computer device obtains a distribution probability map804of the target organ on the x-axis directional plane by using the first segmentation model.
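One decoder step of the skip transfer decoding unit — 2-fold up-sampling fused with a same-sized encoder feature map through a skip connection — can be sketched as follows (illustrative Python; element-wise addition and nearest-neighbour up-sampling are assumptions, as the embodiments do not fix these operators):

```python
def upsample2x(fm):
    """Nearest-neighbour 2-fold up-sampling of a 2-D feature map
    (represented as a list of rows)."""
    out = []
    for row in fm:
        wide = [v for v in row for _ in range(2)]  # duplicate each column
        out.extend([wide, list(wide)])             # duplicate each row
    return out

def skip_decode(low_res, skip):
    """One decoder step: up-sample the lower-resolution map 2-fold and fuse
    it with the same-sized encoder feature map via the skip connection."""
    up = upsample2x(low_res)
    # Skip-connected feature maps must have the same size.
    assert len(up) == len(skip) and len(up[0]) == len(skip[0])
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(up, skip)]

low = [[1.0, 2.0], [3.0, 4.0]]          # toy low-resolution feature map
skip = [[0.0] * 4 for _ in range(4)]    # toy encoder feature map
print(skip_decode(low, skip))
```

Applying the step twice, as the two deconvolution layers do, yields the 4-fold up-sampled map from which the distribution probability map is produced.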
The distribution probability map804indicates a probability that each pixel on the two-dimensional slice image belongs to a foreground region and/or a probability that each pixel on the two-dimensional slice image belongs to a background region. The foreground region is a region in which the target organ is located, and the background region is a region without the target organ. Step505. The terminal invokes a second segmentation model to perform semantic segmentation on the two-dimensional slice images of the y axis, to obtain a distribution probability map of the target organ on a y-axis directional plane. In some embodiments, the second segmentation model and the first segmentation model have the same structure, and the only difference lies in the sample images used in the training process. Therefore, for the process of performing semantic segmentation on the two-dimensional slice images of the y axis by using the second segmentation model, reference may be made to the description of step504, and details are not described again. Step506. The terminal invokes a third segmentation model to perform semantic segmentation on the two-dimensional slice images of the z axis, to obtain a distribution probability map of the target organ on a z-axis directional plane. In some embodiments, the third segmentation model and the first segmentation model have the same structure, and the only difference lies in the sample images used in the training process. Therefore, for the process of performing semantic segmentation on the two-dimensional slice images of the z axis by using the third segmentation model, reference may be made to the description of step504, and details are not described again. Step507. The terminal invokes an adaptive fusion model to combine the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution feature map.
The computer device invokes an adaptive fusion model stored in the memory to combine the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution feature map. As shown inFIG.10, the computer device performs three-dimensional fusion on a distribution probability map1001of the target organ on the x-axis directional plane, a distribution probability map1002of the target organ on the y-axis directional plane, and a distribution probability map1003of the target organ on the z-axis directional plane that are obtained, to obtain a three-dimensional distribution feature map1004. The distribution probability maps1001to1003on the three directional planes have the same size as the three-dimensional medical image, and have probabilities corresponding to respective directional planes. The three-dimensional distribution feature map1004includes probabilities that correspond to the target organ and that respectively correspond to the three directional planes, and a size of the three-dimensional distribution feature map1004is the same as a size of the three-dimensional medical image. Step508. The terminal performs three-dimensional fusion convolution on the three-dimensional distribution feature map, to obtain a three-dimensional segmentation probability map. The computer device invokes the adaptive fusion model (e.g., a three-convolution-layer model) stored in the memory to perform three-dimensional fusion convolution on the obtained three-dimensional distribution feature map1004, to obtain a three-dimensional segmentation probability map1005. The three-dimensional segmentation probability map1005is used for indicating a probability that each pixel in the three-dimensional medical image belongs to a foreground region and/or a probability that each pixel in the three-dimensional medical image belongs to a background region. 
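The combination of the three directional probability maps into an H*W*D*C input volume for the adaptive fusion model can be sketched as follows (illustrative Python on a toy volume; the constant-valued maps are placeholders, not real predictions):

```python
H, W, D = 2, 2, 2   # toy volume size

def prob_volume(value):
    """A constant H x W x D probability volume, standing in for the
    distribution probability map of one directional plane."""
    return [[[value] * D for _ in range(W)] for _ in range(H)]

# One probability volume per directional plane (x, y, z); each has the
# same size as the three-dimensional medical image.
p_x, p_y, p_z = prob_volume(0.1), prob_volume(0.5), prob_volume(0.9)

# Combine the three maps into an H*W*D*C feature map (C = 3 channels, one
# per directional plane), the volume the adaptive fusion model convolves over.
feature_map = [
    [[[p_x[h][w][d], p_y[h][w][d], p_z[h][w][d]] for d in range(D)]
     for w in range(W)]
    for h in range(H)
]
print(len(feature_map), len(feature_map[0]), len(feature_map[0][0]),
      len(feature_map[0][0][0]))  # 2 2 2 3
```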
The foreground region is a region in which the target organ is located, and the background region is a region without the target organ. InFIG.10, H*W*D*C indicates the size of an image and its corresponding probability channels. In some embodiments, the adaptive fusion model includes three shallow 3D convolutional layers. A first 3D convolutional layer includes 64 3*3*3 3D convolution kernels with a convolution stride of 1. A second 3D convolutional layer includes 64 3*3*3 3D convolution kernels with a convolution stride of 1. A third 3D convolutional layer includes one 3*3*3 3D convolution kernel with a convolution stride of 1. In some embodiments, a size of the three-dimensional segmentation probability map1005is the same as the size of the three-dimensional medical image. Step509. The terminal obtains a three-dimensional distribution binary image of the target organ through calculation according to a maximum probability category of each pixel in the three-dimensional segmentation probability map. In some embodiments, the adaptive fusion model determines a category of each pixel in the image according to the maximum probability category of each pixel in the three-dimensional segmentation probability map. The category includes a foreground pixel belonging to the target organ and a background pixel that does not belong to the target organ. In some embodiments, the three-dimensional segmentation probability map1005includes a first probability that each pixel belongs to the foreground and a second probability that each pixel belongs to the background, and the maximum probability category is the category corresponding to the larger probability between the first probability and the second probability. For example, if a probability that a pixel belongs to the foreground is 80%, and a probability that the pixel belongs to the background is 20%, the maximum probability category of the pixel is the foreground pixel.
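The maximum-probability-category rule for binarization can be sketched as follows (illustrative Python, assuming the background probability is the complement of the foreground probability, as in the 80%/20% example):

```python
def binarize(prob_fg):
    """Maximum-probability category per pixel: 1 (foreground) if the
    foreground probability exceeds the background probability 1 - p,
    otherwise 0 (background)."""
    return [[1 if p > 1.0 - p else 0 for p in row] for row in prob_fg]

print(binarize([[0.8, 0.2], [0.4, 0.9]]))  # [[1, 0], [0, 1]]
```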
In some embodiments, in the three-dimensional distribution binary image, the foreground pixel is represented by 1, and the background pixel is represented by 0. Step510. The terminal performs filtering processing on noise pixels in the three-dimensional distribution binary image based on clinical prior knowledge. Because the distribution location of each type of target organ in the three-dimensional medical image is relatively fixed, the computer device may further filter out noise pixels in the three-dimensional distribution binary image by using clinical prior knowledge. First, the computer device filters out first noise pixels exceeding a target value range in the three-dimensional distribution binary image. The target value range is a coordinate value range in which the target organ may possibly appear, obtained according to first clinical prior knowledge. In some embodiments, the target value range is a three-dimensional cubic box region. The first clinical prior knowledge may be constructed based on a plurality of sample images. Second, the computer device filters out second noise pixels outside a three-dimensional ellipsoidal model in the three-dimensional distribution binary image. The three-dimensional ellipsoidal model is an ellipsoidal model that corresponds to the target organ and that is obtained according to second clinical prior knowledge. The second clinical prior knowledge may also be constructed based on a plurality of sample images. Because the shapes of most organs are approximately ellipsoidal, the terminal may obtain, through statistics in advance, the longest and shortest axes of the target organ on the two-dimensional slice images on the x-axis, y-axis, and z-axis directional planes, to construct the three-dimensional ellipsoidal model of the target organ. Noise pixels outside the three-dimensional ellipsoidal model are then filtered out from the candidate pixels according to the constructed three-dimensional ellipsoidal model.
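The two prior-knowledge filters — the target value range (box) and the three-dimensional ellipsoidal model — can be sketched for a single voxel as follows (illustrative Python; the box and ellipsoid values are made up for the example, whereas in practice they would come from statistics over sample images):

```python
def keep_voxel(x, y, z, box, ellipsoid):
    """True if a foreground voxel survives both prior filters: it lies
    inside the target value range (a 3-D box) AND inside the prior
    ellipsoid (x/a)^2 + (y/b)^2 + (z/c)^2 <= 1 around the organ centre."""
    (x0, x1), (y0, y1), (z0, z1) = box
    in_box = x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1
    (cx, cy, cz), (ax, ay, az) = ellipsoid
    in_ellipsoid = (((x - cx) / ax) ** 2 + ((y - cy) / ay) ** 2
                    + ((z - cz) / az) ** 2) <= 1.0
    return in_box and in_ellipsoid

box = ((1, 6), (1, 6), (1, 6))                      # illustrative value range
ellipsoid = ((3.5, 3.5, 3.5), (3.0, 3.0, 3.0))      # centre, semi-axes
print(keep_voxel(3, 3, 3, box, ellipsoid))  # True: inside both priors
print(keep_voxel(0, 0, 0, box, ellipsoid))  # False: outside the box
```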
In some embodiments, the computer device may filter out noise pixels by using either or both of the foregoing two filtering manners. In conclusion, in the method provided in some embodiments, slicing is performed on an obtained three-dimensional image according to the three directional planes in which the three-dimensional coordinate axes are located, to obtain two-dimensional slice images corresponding to the three directional planes, and two-dimensional distribution probability maps corresponding to the three directional planes are then obtained by using three segmentation models corresponding to those planes, so that a terminal implements two-dimensional semantic segmentation of a three-dimensional medical image. Then, three-dimensional fusion is performed on the three distribution probability maps by using an adaptive fusion model, to obtain a three-dimensional distribution binary image of the target object. This resolves the problem in the related art that the Pspnet network model is applicable only to semantic segmentation on 2D natural images and cannot perform semantic segmentation on a 3D medical image. Therefore, semantic segmentation can be performed on the 3D medical image by using three 2D segmentation models and one adaptive fusion model, and because the adaptive fusion model fuses two-dimensional distribution probability maps in three different dimensions, background noise is effectively suppressed during three-dimensional fusion, so that the edges of the target object are segmented smoothly and accurately. In the method provided in some embodiments, filtering processing is performed on noise pixels by using clinical prior knowledge, and the terminal obtains pixels belonging to the target organ, which provides a relatively strong noise-reduction capability and a good edge segmentation effect.
In the method provided in some embodiments, the size of a two-dimensional slice image is changed from its original size to the input size, avoiding the error that may be produced when the original size of the two-dimensional slice image is used directly, so that when semantic segmentation is performed on a three-dimensional medical image, a target organ can be accurately segmented. In actual application, automatic lesion determination can be implemented for a plurality of types of shape-related organs or tissues, thereby achieving the objective of assisting in diagnosis. In some embodiments, the first segmentation model, the second segmentation model, the third segmentation model, and the adaptive fusion model all belong to a convolutional network model. Before the convolutional network model is invoked, the computer device further needs to train the convolutional network model. As shown inFIG.11, a method for training the three two-dimensional segmentation models includes, but is not limited to, the following steps: Step1101. The terminal obtains at least one group of sample images. The computer device acquires at least one group of sample images by using a medical image acquisition device; the quantity of sample images in each group is not limited and may be set according to the requirements of a trainer. The sample images may include images having a sample organ and images having no sample organ. For a sample image having a sample organ, pixels belonging to the sample organ are labeled in the sample image. For a first segmentation model, the sample images may be two-dimensional slice images on an x-axis directional plane, and pixels belonging to the sample organ are labeled on the two-dimensional slice images on the x-axis directional plane.
For a second segmentation model, the sample images may be two-dimensional slice images on a y-axis directional plane, and pixels belonging to the sample organ are labeled on the two-dimensional slice images on the y-axis directional plane. For a third segmentation model, the sample images may be two-dimensional slice images on a z-axis directional plane, and pixels belonging to the sample organ are labeled on the two-dimensional slice images on the z-axis directional plane. Step1102. The terminal obtains a labeling result of a sample organ in a sample image, to obtain a sample image data group formed by the sample image and the sample organ corresponding to the sample image, the labeling result including a distribution location of the sample organ in the sample image. The labeling result may also be referred to as ground truth. After the computer device obtains the sample image, the trainer or the computer device sets a labeling result for the sample image, the labeling result including the pixels belonging to the sample organ. The labeling result is used for indicating at least one of: a distribution location of the sample organ in the sample image, a size of the sample organ, and an ellipsoidal shape corresponding to the sample organ. For example, a region in which a sample organ is located and the background region other than the sample organ are labeled in an image having a sample organ, and a region in which there is no sample organ is labeled in an image without a sample organ. The sample image data group is used for being compared with the training result corresponding to the sample image. Step1103. The terminal inputs the sample image into an original segmentation model, to obtain a training result.
The computer device inputs the same group of labeled sample images into an original segmentation model, performs recognition on the sample images and the sample organs in the sample images by using the original segmentation model, and outputs the recognition result as a training result. In some embodiments, the original segmentation model is a model constructed based on a ResNet model, as shown inFIG.8. An initial weight of the segmentation model may be set by the trainer according to empirical values, or may be randomly set by the computer device. In a possible embodiment, the weights of the deep network encoding unit in the segmentation model may be initialized by using ResNet parameters trained on the ImageNet dataset, and the weights of the skip transfer decoding unit are initialized by sampling from a Gaussian distribution with a mean of 0 and a variance of 2 divided by the input quantity. Step1104. The terminal compares the training result with the labeling result of the sample organ according to each sample image data group, to obtain a calculation loss, the calculation loss being used for indicating an error between the training result and the labeling result of the sample organ. The computer device compares the obtained training result with the sample image data group corresponding to the same group of sample images, to calculate the error between the training result and the labeling result. In some embodiments, the error is a weighted loss function. The calculation loss is used for indicating the error between the training result and the labeling result of the sample organ.
The weighted loss function uses a cross entropy loss function, and the weighted loss formula of the cross entropy loss function is:

Loss = −(w_fg · y · log(p) + w_bg · (1 − y) · log(1 − p)),

w_fg = (1/N) Σ_{i=1}^{N} (t_i/n_i), w_bg = 1 − w_fg,

where p represents the probability that a pixel belongs to the target pixels corresponding to the target organ, y represents the category, that is, y is 0 or 1, w_fg represents the weight of the foreground category, w_bg represents the weight of the background category, t_i represents the quantity of foreground pixels in the i-th sample image, n_i represents the quantity of pixels in the entire i-th sample image, N is the quantity of sample images in a batch, and the weight values are obtained by collecting statistics on the ratio of foreground to background in the sample images. Step1105. The terminal obtains, through training by using an error back propagation algorithm, the segmentation model according to the calculation losses respectively corresponding to the at least one sample image data group. The terminal resets the weights by using an error back propagation algorithm according to the calculation losses respectively corresponding to the at least one sample image data group, until a weighted loss obtained according to the reset weights meets a preset threshold, or the quantity of training iterations reaches a preset quantity. For example, the terminal may stop training when the quantity of training iterations reaches 20,000. In this case, training of the segmentation model used for performing two-dimensional semantic segmentation is complete. In some embodiments, the error back propagation algorithm may use a gradient descent method based on stochastic gradient descent (SGD). A convolutional template parameter w and a bias parameter b of the segmentation model are solved according to the SGD-based gradient descent method, and the training iteration parameters may be selected according to cross validation.
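The weighted cross entropy above can be sketched as follows (illustrative Python; the leading minus sign follows the usual cross-entropy convention, and the averaging over pixels is an assumption):

```python
import math

def foreground_weight(fg_counts, totals):
    """w_fg = (1/N) * sum_{i=1..N} t_i / n_i over a batch of N sample
    images, i.e. the average foreground ratio of the batch."""
    return sum(t / n for t, n in zip(fg_counts, totals)) / len(fg_counts)

def weighted_cross_entropy(p, y, w_fg):
    """Per-pixel weighted cross entropy, averaged over pixels:
    -(w_fg * y*log(p) + w_bg * (1-y)*log(1-p)), with w_bg = 1 - w_fg."""
    w_bg = 1.0 - w_fg
    losses = [
        -(w_fg * yi * math.log(pi) + w_bg * (1 - yi) * math.log(1 - pi))
        for pi, yi in zip(p, y)
    ]
    return sum(losses) / len(losses)

# Two sample images: 100/1000 and 300/1000 foreground pixels.
w_fg = foreground_weight(fg_counts=[100, 300], totals=[1000, 1000])
print(w_fg)  # 0.2: the average foreground ratio of the batch
loss = weighted_cross_entropy(p=[0.9, 0.1], y=[1.0, 0.0], w_fg=w_fg)
print(round(loss, 4))
```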
After training of the segmentation models respectively corresponding to the three coordinate axes is complete, two-dimensional distribution probability maps are obtained from the trained segmentation models according to the two-dimensional slice images of each three-dimensional sample image. The two-dimensional distribution probability maps and the labeled three-dimensional binary maps are used as another sample image data group. The adaptive fusion model is trained by using this sample image data group, and the training process of the adaptive fusion model is the same as or similar to the foregoing method. Details are not described in the present disclosure. In some embodiments, a weighted loss is obtained by calculating the probability that each pixel in a feature map belongs to a target pixel. A target pixel is a pixel corresponding to a feature of the target organ. The training process of the adaptive fusion model is the same as the training processes of the three segmentation models, and may be implemented with reference to the steps shown inFIG.11. After obtaining a training result, the adaptive fusion model uses a dice loss function as its loss function. The dice loss function is used for calculating the error between the training result (e.g., the three-dimensional segmentation probability map1005) of the adaptive fusion model and the labeling result (e.g., the ground truth1006) of the adaptive fusion model. The semantic segmentation method for a three-dimensional image provided in the present disclosure may also be applied to a semantic segmentation method for a two-dimensional image. FIG.12is a flowchart of a semantic segmentation method for a two-dimensional image according to another exemplary embodiment of the present disclosure. The method may be applied to the implementation environment shown inFIG.2.
In some embodiments, description is made by using an example in which the two-dimensional image is a two-dimensional medical image and the target object is a target organ. The method includes the following steps: Step1201. A terminal obtains a two-dimensional medical image. The computer device acquires a two-dimensional medical image by using a medical image acquisition device, and the two-dimensional medical image includes a two-dimensional target organ, and a background region other than the target organ. The computer device performs analysis after obtaining the two-dimensional medical image. In some embodiments, because a distribution location of each type of target organ in the two-dimensional medical image is relatively fixed, the computer device further reads pre-stored third clinical prior knowledge, the third clinical prior knowledge being used for indicating a target value range of a candidate appearing location of the target organ in each two-dimensional medical image. For example, a transverse coordinate range of a candidate appearing location of a target organ A in a two-dimensional medical image of an x axis is [a1, a2], and a longitudinal coordinate range of a candidate appearing location of the target organ A in a two-dimensional medical image of a y axis is [b1, b2]. The target value range is used for performing third noise filtering in a post-processing process. Step1202. The terminal performs, when an aspect ratio of the two-dimensional medical image exceeds a preset ratio range, scanning-box segmentation on the two-dimensional medical image according to a square border formed by a short side length of the two-dimensional medical image, to obtain several to-be-processed two-dimensional medical images. 
Because sizes of inputted images of segmentation models corresponding to the two coordinate axes are generally a square size, and in some implementations, the two-dimensional medical image is extremely long and narrow, the target organ is severely deformed after the long and narrow two-dimensional medical image is directly converted into an image of the square size, resulting in a failure in semantic segmentation. Therefore, the computer device may further process the two-dimensional medical image in the following image pre-processing manner. In some embodiments, when an aspect ratio of an obtained two-dimensional medical image is within a preset ratio range, the computer device converts a size of the two-dimensional medical image into an input size that meets a segmentation model. The preset ratio range may be [⅓, 3]. In some embodiments, as shown inFIG.6, when an aspect ratio of an obtained two-dimensional medical image exceeds the preset ratio range, that is, the aspect ratio of the two-dimensional medical image exceeds [⅓, 3], it is considered that the two-dimensional medical image is extremely long and narrow. If the computer device directly converts an original size of the two-dimensional medical image into an input size, and the input size is a size meeting a pixel size of a segmentation model, a target organ in the two-dimensional medical image is squeezed into a bar, resulting in an inaccurate final prediction result. In this case, as shown inFIG.7, the computer device performs scanning-box segmentation on the two-dimensional medical image according to a square border formed by a short side length of the two-dimensional medical image, to obtain several to-be-processed two-dimensional medical images. 
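The aspect-ratio check and scanning-box segmentation can be sketched as follows (illustrative Python; the non-overlapping window stepping, with the last window aligned to the far edge, is an assumption — the embodiments do not specify the step):

```python
def scanning_boxes(width, height, ratio_range=(1 / 3, 3)):
    """If the aspect ratio falls outside the preset range, slide a square
    window whose side equals the short side along the long side and return
    the (left, top, right, bottom) crops; otherwise return the whole image."""
    ratio = width / height
    if ratio_range[0] <= ratio <= ratio_range[1]:
        return [(0, 0, width, height)]        # not long/narrow: no cropping
    side = min(width, height)
    long_side = max(width, height)
    starts = list(range(0, long_side - side + 1, side))
    if starts[-1] != long_side - side:
        starts.append(long_side - side)       # cover the tail of the long side
    if width >= height:
        return [(s, 0, s + side, side) for s in starts]
    return [(0, s, side, s + side) for s in starts]

print(scanning_boxes(100, 20))  # aspect ratio 5 > 3: five 20x20 windows
print(scanning_boxes(30, 20))   # aspect ratio 1.5, within [1/3, 3]: whole image
```

Each resulting square crop can then be resized to the model input size without squeezing the target organ into a bar.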
Then, the computer device converts the sizes of the several to-be-processed two-dimensional medical images into the input size of a segmentation model, and respectively inputs the several to-be-processed two-dimensional medical images into the segmentation model for prediction. Step1203. The terminal invokes a segmentation model to perform semantic segmentation on the two-dimensional medical image, to obtain a distribution probability map of the target organ. A structure of the segmentation model is the same as the structure of the first segmentation model. Therefore, for the structure of the segmentation model, reference may be made to the structure of the model shown inFIG.8. The segmentation model includes: a deep network encoding unit and a skip transfer decoding unit, the deep network encoding unit including n convolutional layers, and the skip transfer decoding unit including m deconvolution layers, both n and m being positive integers. The deep network encoding unit is configured to perform, by the terminal, down-sampling feature extraction on a two-dimensional image through the n convolutional layers, to obtain a down-sampled third intermediate feature map. The skip transfer decoding unit is configured to perform, by the terminal, up-sampling processing on the third intermediate feature map and a fourth intermediate feature map through the m deconvolution layers, to obtain an up-sampled distribution probability map. The fourth intermediate feature map includes a feature map outputted by an i-th convolutional layer of the n convolutional layers, i being an integer less than or equal to n. In some embodiments, the segmentation model and the first segmentation model have the same structure, and the only difference lies in the sample images used in the training process.
Therefore, for the process of performing semantic segmentation on the two-dimensional medical image by using the segmentation model, reference may be made to the description of step504, and details are not described again. Step1204. The terminal obtains a two-dimensional distribution binary image of the target organ through calculation according to a maximum probability category of each pixel in the distribution probability map. In some embodiments, the segmentation model determines a category of each pixel in the image according to the maximum probability category of each pixel in the distribution probability map. The category includes a foreground pixel belonging to the target organ and a background pixel that does not belong to the target organ. In some embodiments, the distribution probability map includes a third probability that each pixel belongs to the foreground and a fourth probability that each pixel belongs to the background, and the maximum probability category is the category corresponding to the larger probability between the third probability and the fourth probability. For example, if a probability that a pixel belongs to the foreground is 80%, and a probability that the pixel belongs to the background is 20%, the maximum probability category of the pixel is the foreground pixel. In some embodiments, in the two-dimensional distribution binary image, the foreground pixel is represented by 1, and the background pixel is represented by 0. Step1205. The terminal performs filtering processing on noise pixels in the two-dimensional distribution binary image based on clinical prior knowledge. Because the distribution location of each type of target organ in the two-dimensional medical image is relatively fixed, the computer device may further filter out noise pixels in the two-dimensional distribution binary image by using clinical prior knowledge.
The computer device filters out third noise pixels exceeding a target value range in the two-dimensional distribution binary image. The target value range is a coordinate value range in which the target organ may possibly appear, obtained according to third clinical prior knowledge. In some embodiments, the target value range is a two-dimensional planar box region. The third clinical prior knowledge may be constructed based on a plurality of sample images. In conclusion, in the method provided in some embodiments, a distribution probability map of a target organ is obtained by performing semantic segmentation on an obtained two-dimensional image through a segmentation model, a two-dimensional distribution binary image of the target organ is obtained by determining the maximum probability category of each pixel in the distribution probability map, and the objective of performing semantic segmentation on the two-dimensional image is achieved by filtering out noise pixels from the obtained two-dimensional distribution binary image according to third clinical prior knowledge. In addition, by filtering out the noise pixels, the segmentation edges obtained after semantic segmentation are clear, and the edges are processed cleanly. Moreover, this demonstrates that the semantic segmentation method for a three-dimensional image is applicable not only to semantic segmentation of a three-dimensional image but also to semantic segmentation of a two-dimensional image, with relatively good segmentation effects in both cases. It is to be understood that the steps of the embodiments of the present disclosure are not necessarily performed according to the sequence indicated by the step numbers. Unless otherwise explicitly specified in the present disclosure, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the embodiments may include a plurality of sub-steps or a plurality of stages.
The sub-steps or stages are not necessarily performed at the same moment but may be performed at different moments. The sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with other steps or at least some sub-steps or stages of other steps. In an embodiment, a terminal is further provided. The terminal includes a semantic segmentation apparatus for a three-dimensional image and a semantic segmentation apparatus for a two-dimensional image. The semantic segmentation apparatus for a three-dimensional image and the semantic segmentation apparatus for a two-dimensional image each include various modules, and each module may be entirely or partially implemented by using software, hardware, or a combination thereof. The following are apparatus embodiments of the present disclosure that can be used for performing the method embodiments of the present disclosure. For details not disclosed in the apparatus embodiments of the present disclosure, refer to the method embodiments of the present disclosure.

FIG.13 is a schematic diagram of a semantic segmentation apparatus for a three-dimensional image according to an exemplary embodiment of the present disclosure.
The apparatus includes:

a first obtaining module1310, configured to obtain a three-dimensional image;

a slicing module1320, configured to slice the three-dimensional image according to three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images of an x axis, two-dimensional slice images of a y axis, and two-dimensional slice images of a z axis;

a first segmentation module1330, configured to invoke a first segmentation model to perform semantic segmentation on the two-dimensional slice images of the x axis, to obtain a distribution probability map of a target object on an x-axis directional plane; invoke a second segmentation model to perform semantic segmentation on the two-dimensional slice images of the y axis, to obtain a distribution probability map of the target object on a y-axis directional plane; invoke a third segmentation model to perform semantic segmentation on the two-dimensional slice images of the z axis, to obtain a distribution probability map of the target object on a z-axis directional plane; and

a fusion module1340, configured to invoke an adaptive fusion model to perform three-dimensional fusion on the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution binary image of the target object.

FIG.14 is a schematic diagram of a semantic segmentation apparatus for a three-dimensional image according to another exemplary embodiment of the present disclosure. The apparatus includes: a first obtaining module1410, a slicing module1420, a first scanning module1430, a first segmentation module1440, and a fusion module1450. The first obtaining module1410 is configured to obtain a three-dimensional image.
The slicing module1420 is configured to slice the three-dimensional image according to three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images of an x axis, two-dimensional slice images of a y axis, and two-dimensional slice images of a z axis. The first scanning module1430 is configured to perform, when an aspect ratio of a two-dimensional slice image exceeds a preset ratio range, scanning-box segmentation on the two-dimensional slice image according to a square border formed by a short side length of the two-dimensional slice image, to obtain several to-be-processed two-dimensional slice images. The first segmentation module1440 is configured to invoke a first segmentation model to perform semantic segmentation on the two-dimensional slice images of the x axis, to obtain a distribution probability map of a target object on an x-axis directional plane; invoke a second segmentation model to perform semantic segmentation on the two-dimensional slice images of the y axis, to obtain a distribution probability map of the target object on a y-axis directional plane; invoke a third segmentation model to perform semantic segmentation on the two-dimensional slice images of the z axis, to obtain a distribution probability map of the target object on a z-axis directional plane. In some embodiments, at least one model of the first segmentation model, the second segmentation model, and the third segmentation model includes: a deep network encoding unit and a skip transfer decoding unit, the deep network encoding unit including n convolutional layers, and the skip transfer decoding unit including m deconvolution layers, both n and m being a positive integer. The deep network encoding unit is configured to perform down-sampling feature extraction on a two-dimensional slice image through the n convolutional layers, to obtain a down-sampled first intermediate feature map.
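The scanning-box step above can be sketched as follows. The disclosure specifies only that the square border is formed by the short side length; the stride of one side length and the final box placed flush with the far edge are choices made here for the sketch, and the ratio threshold is an assumed stand-in for the preset ratio range:

```python
def scanning_boxes(height, width, max_ratio=1.5):
    # Returns (row, col) origins of square crops whose side equals the
    # short side of the slice image.
    side = min(height, width)
    if max(height, width) / side <= max_ratio:
        return [(0, 0)]                 # aspect ratio within range: one box
    long_len = max(height, width)
    offsets, pos = [], 0
    while pos + side < long_len:        # slide by one full side length
        offsets.append(pos)
        pos += side
    offsets.append(long_len - side)     # final box flush with the far edge
    if width >= height:                 # scan along the long axis
        return [(0, c) for c in offsets]
    return [(r, 0) for r in offsets]

print(scanning_boxes(100, 250))  # [(0, 0), (0, 100), (0, 150)]
```

A 100x250 slice (aspect ratio 2.5) is covered by three overlapping 100x100 to-be-processed crops, while a slice within the ratio range is left as a single box.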
The skip transfer decoding unit is configured to perform up-sampling processing on the first intermediate feature map and a second intermediate feature map through the m deconvolution layers, to obtain an up-sampled distribution probability map. The second intermediate feature map includes a feature map outputted by an i-th convolutional layer of the n convolutional layers, i being an integer less than or equal to n. The fusion module1450 is configured to invoke an adaptive fusion model to perform three-dimensional fusion on the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution binary image of the target object. In some embodiments, as shown in FIG.15, the fusion module1450 includes:

a combination unit1451, configured to invoke the adaptive fusion model to combine the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution feature map;

a fusion unit1452, configured to perform three-dimensional fusion convolution on the three-dimensional distribution feature map, to obtain a three-dimensional segmentation probability map; and

a calculation unit1453, configured to obtain the three-dimensional distribution binary image of the target object through calculation according to a maximum probability category of each pixel in the three-dimensional segmentation probability map.

In some embodiments, the three-dimensional image is a three-dimensional medical image, and the apparatus further includes a first filtering module1460, configured to filter out noise pixels in the three-dimensional distribution binary image based on clinical prior knowledge.
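The combine / fusion-convolution / maximum-probability pipeline of the fusion module can be illustrated with a deliberately simplified sketch. The learned three-dimensional fusion convolution is replaced here by a fixed 1x1x1 weighted sum (an assumption; the actual adaptive fusion model learns its weights), and with only foreground and background categories the maximum probability category reduces to a 0.5 threshold:

```python
import numpy as np

def fuse_probability_maps(p_x, p_y, p_z, weights=(1/3, 1/3, 1/3)):
    # p_x, p_y, p_z: per-plane foreground probability volumes of shape
    # (D, H, W), reassembled from the per-axis slice predictions.
    feature = np.stack([p_x, p_y, p_z], axis=0)   # 3D distribution feature map
    w = np.asarray(weights).reshape(3, 1, 1, 1)
    fused = (w * feature).sum(axis=0)             # stand-in fusion convolution
    # Maximum probability category per voxel: foreground iff the fused
    # foreground probability exceeds the background probability (1 - fused).
    return (fused >= 0.5).astype(np.uint8)

a = np.full((2, 2, 2), 0.9)
b = np.full((2, 2, 2), 0.8)
c = np.full((2, 2, 2), 0.1)
print(fuse_probability_maps(a, b, c)[0, 0, 0])  # 1 (mean 0.6 >= 0.5)
```

Two confident planes outvote one uncertain plane here, which is the intuition behind fusing the three directional predictions rather than trusting any single plane.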
The clinical prior knowledge is knowledge obtained by collecting statistics on a distribution location of the target object in the three-dimensional medical image. In some embodiments, the first filtering module1460 is configured to filter out first noise pixels exceeding a target value range in the three-dimensional distribution binary image. The target value range is a coordinate value range corresponding to appearance locations of the target object obtained according to first clinical prior knowledge. In some embodiments, the first filtering module1460 is configured to filter out second noise pixels outside a three-dimensional ellipsoidal model in the three-dimensional distribution binary image. The three-dimensional ellipsoidal model is an ellipsoidal model that corresponds to the target object and that is obtained according to second clinical prior knowledge. For related details, reference may be made to the method embodiments shown in FIG.3 to FIG.5. The first obtaining module1410 is further configured to implement any other function that is related to the obtaining step and that is implied or disclosed in the foregoing method embodiments. The slicing module1420 is further configured to implement any other function that is related to a slicing step and that is implied or disclosed in the foregoing method embodiments. The first scanning module1430 is further configured to implement any other function that is related to the scanning step and that is implied or disclosed in the foregoing method embodiments. The first segmentation module1440 is further configured to implement any other function that is related to the segmentation step and that is implied or disclosed in the foregoing method embodiments. The fusion module1450 is further configured to implement any other function that is related to a fusion step and that is implied or disclosed in the foregoing method embodiments.
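The ellipsoidal filtering described above (discarding voxels outside a three-dimensional ellipsoidal model) can be sketched as below. The center and radii would come from the second clinical prior knowledge; the names and the voxel-index coordinate frame are illustrative assumptions:

```python
import numpy as np

def filter_outside_ellipsoid(binary_vol, center, radii):
    # Keep only voxels satisfying the standard ellipsoid inequality
    # ((z-cz)/rz)^2 + ((y-cy)/ry)^2 + ((x-cx)/rx)^2 <= 1.
    z, y, x = np.indices(binary_vol.shape)
    cz, cy, cx = center
    rz, ry, rx = radii
    inside = (((z - cz) / rz) ** 2 +
              ((y - cy) / ry) ** 2 +
              ((x - cx) / rx) ** 2) <= 1.0
    # Voxels outside the ellipsoidal model are second noise pixels: clear them.
    return np.where(inside, binary_vol, 0).astype(binary_vol.dtype)

vol = np.ones((9, 9, 9), dtype=np.uint8)
kept = filter_outside_ellipsoid(vol, center=(4, 4, 4), radii=(3, 3, 3))
print(kept[4, 4, 4], kept[0, 0, 0])  # 1 0
```

The center voxel survives while the corner voxel, far outside the ellipsoid, is filtered out.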
The first filtering module1460 is further configured to implement any other function that is related to the filtering step and that is implied or disclosed in the foregoing method embodiments. The semantic segmentation apparatus for a three-dimensional image provided in the foregoing embodiment is described only by using an example of division of the functional modules. In actual application, the functions may be allocated to different functional modules as required, which means that the internal structure of the apparatus is divided into different functional modules to complete all or some of the above-described functions. In addition, the semantic segmentation apparatus for a three-dimensional image provided in the foregoing embodiment belongs to the same idea as the method embodiment of the semantic segmentation method for a three-dimensional image. For a specific implementation process, refer to the method embodiment. Details are not described herein again.

FIG.16 is a schematic diagram of a semantic segmentation apparatus for a two-dimensional image according to an exemplary embodiment of the present disclosure.
The apparatus includes:

a second obtaining module1610, configured to obtain a two-dimensional image;

a second scanning module1620, configured to perform, when an aspect ratio of the two-dimensional image exceeds a preset ratio range, scanning-box segmentation on the two-dimensional image according to a square border formed by a short side length of the two-dimensional image, to obtain several to-be-processed two-dimensional images;

a second segmentation module1630, configured to invoke a segmentation model to perform semantic segmentation on the two-dimensional image, to obtain a distribution probability map of a target object, the segmentation model including a deep network encoding unit and a skip transfer decoding unit, the deep network encoding unit including n convolutional layers, and the skip transfer decoding unit including m deconvolution layers, both n and m being a positive integer; the deep network encoding unit being configured to perform down-sampling feature extraction on the two-dimensional image through the n convolutional layers, to obtain a down-sampled third intermediate feature map; and the skip transfer decoding unit being configured to perform up-sampling processing on the third intermediate feature map and a fourth intermediate feature map through the m deconvolution layers, to obtain an up-sampled distribution probability map, the fourth intermediate feature map including a feature map outputted by an i-th convolutional layer of the n convolutional layers, i being an integer less than or equal to n; and

a calculation module1640, configured to obtain a two-dimensional distribution binary image of the target object through calculation according to a maximum probability category of each pixel in the distribution probability map.
In some embodiments, the two-dimensional image is a two-dimensional medical image, and the apparatus further includes a second filtering module1650, configured to filter out noise pixels in the two-dimensional distribution binary image based on clinical prior knowledge. The clinical prior knowledge is knowledge obtained by collecting statistics on a distribution location of the target object in the two-dimensional medical image. In some embodiments, the second filtering module1650 is configured to filter out third noise pixels exceeding a target value range in the two-dimensional distribution binary image. The target value range is a coordinate value range corresponding to appearance locations of the target object obtained according to third clinical prior knowledge. For related details, refer to the method embodiment shown in FIG.12. The second obtaining module1610 is further configured to implement any other function that is related to the obtaining step and that is implied or disclosed in the foregoing method embodiment. The second scanning module1620 is further configured to implement any other function that is related to the scanning step and that is implied or disclosed in the foregoing method embodiment. The second segmentation module1630 is further configured to implement any other function that is related to the segmentation step and that is implied or disclosed in the foregoing method embodiment. The calculation module1640 is further configured to implement any other function that is related to the calculation step and that is implied or disclosed in the foregoing method embodiment. The second filtering module1650 is further configured to implement any other function that is related to the filtering step and that is implied or disclosed in the foregoing method embodiment. The semantic segmentation apparatus for a two-dimensional image provided in the foregoing embodiment is described only by using an example of division of the functional modules.
In actual application, the functions may be allocated to different functional modules as required, which means that the internal structure of the apparatus is divided into different functional modules to complete all or some of the above-described functions. In addition, the semantic segmentation apparatus for a two-dimensional image provided in the foregoing embodiment belongs to the same idea as the method embodiment of the semantic segmentation method for a two-dimensional image. For a specific implementation process, refer to the method embodiment. Details are not described herein again.

FIG.17 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure. The computer device is configured to implement the semantic segmentation method for a three-dimensional image and the semantic segmentation method for a two-dimensional image provided in the foregoing embodiments. Specifically, the computer device1700 includes a central processing unit (CPU)1701, a system memory1704 including a random access memory (RAM)1702 and a read-only memory (ROM)1703, and a system bus1705 connecting the system memory1704 and the CPU1701. The computer device1700 further includes a basic input/output system (I/O system)1706 used for facilitating information transmission between components in the computer, and a large-capacity storage device1707 used for storing an operating system1713, an application program1714, and another program module1715. The basic I/O system1706 includes a display1708 configured to display information, and an input device1709, such as a mouse or a keyboard, used by a user to input information. The display1708 and the input device1709 are both connected to the CPU1701 by using an input/output controller1710 connected to the system bus1705.
The basic I/O system1706 may further include an input/output controller1710 configured to receive and process inputs from a plurality of other devices such as a keyboard, a mouse, and an electronic stylus. Similarly, the input/output controller1710 further provides an output to a display screen, a printer, or another type of output device. The large-capacity storage device1707 is connected to the CPU1701 by using a large-capacity storage controller (not shown) connected to the system bus1705. The large-capacity storage device1707 and an associated computer-readable medium thereof provide non-volatile storage for the computer device1700. In other words, the large-capacity storage device1707 may include the computer-readable medium (not shown) such as a hard disk or a CD-ROM drive. In general, the computer-readable medium may include a computer storage medium and a communication medium. The computer storage medium includes volatile and non-volatile, removable and non-removable media that store information such as computer-readable instructions, data structures, program modules, or other data and that are implemented by using any method or technology. The computer storage medium includes a RAM, a ROM, an EPROM, an EEPROM, a flash memory, or another solid-state storage technology, a CD-ROM, a DVD, or another optical storage, a magnetic cassette, a magnetic tape, a magnetic disk storage, or another magnetic storage device. Certainly, a person skilled in the art will appreciate that the computer storage medium is not limited to the foregoing several types. The system memory1704 and the large-capacity storage device1707 may be generally referred to as a memory. According to the embodiments of the present disclosure, the computer device1700 may further be connected, through a network such as the Internet, to a remote computer on the network for operation.
That is, the computer device1700 may be connected to a network1712 through a network interface unit1711 connected to the system bus1705, or may be connected to another type of network or a remote computer system (not shown) by using the network interface unit1711. The memory further includes one or more programs. The one or more programs are stored in the memory and configured to be executed by one or more processors. The one or more programs include instructions for performing the following operations: obtaining a three-dimensional image; performing slicing on the three-dimensional image according to three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images of an x axis, two-dimensional slice images of a y axis, and two-dimensional slice images of a z axis; invoking a first segmentation model to perform semantic segmentation on the two-dimensional slice images of the x axis, to obtain a distribution probability map of a target object on an x-axis directional plane; invoking a second segmentation model to perform semantic segmentation on the two-dimensional slice images of the y axis, to obtain a distribution probability map of the target object on the y-axis directional plane; invoking a third segmentation model to perform semantic segmentation on the two-dimensional slice images of the z axis, to obtain a distribution probability map of the target object on the z-axis directional plane; and invoking an adaptive fusion model to perform three-dimensional fusion on the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution binary image of the target object.
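The slicing operation named above (cutting the volume along the three directional planes) amounts to indexing the volume along each coordinate axis in turn; a minimal sketch, assuming the volume is stored as an (X, Y, Z) array:

```python
import numpy as np

def slice_three_planes(volume):
    # One 2D slice per index position along each coordinate axis.
    x_slices = [volume[i, :, :] for i in range(volume.shape[0])]
    y_slices = [volume[:, j, :] for j in range(volume.shape[1])]
    z_slices = [volume[:, :, k] for k in range(volume.shape[2])]
    return x_slices, y_slices, z_slices

vol = np.zeros((4, 5, 6))
xs, ys, zs = slice_three_planes(vol)
print(len(xs), xs[0].shape)  # 4 (5, 6)
print(len(ys), ys[0].shape)  # 5 (4, 6)
print(len(zs), zs[0].shape)  # 6 (4, 5)
```

Each of the three slice stacks is then fed to its own segmentation model, and the per-plane predictions are later reassembled for the adaptive fusion step.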
Assuming that the foregoing is a first possible implementation, on the basis of the first possible implementation, in a second possible implementation, the memory of the computer device may further include an instruction for performing the following operations: obtaining a two-dimensional image; invoking a segmentation model to perform semantic segmentation on the two-dimensional image, to obtain a distribution probability map of a target object; and obtaining a two-dimensional distribution binary image of the target object through calculation according to a maximum probability category of each pixel in the distribution probability map.

FIG.18 is a diagram of an internal structure of a terminal according to an embodiment. As shown in FIG.18, the terminal includes a processor, a memory, a network interface, a display screen, and an input apparatus that are connected by using a system bus. The processor of the terminal is configured to provide computing and control capabilities. The memory of the terminal includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running of the operating system and the computer program in the non-volatile storage medium. The non-volatile storage medium of the terminal stores an operating system, and may further store computer-readable instructions. The computer-readable instructions, when executed by the processor, may cause the processor to perform the semantic segmentation method for a three-dimensional image and the semantic segmentation method for a two-dimensional image.
The internal memory may also store computer-readable instructions, and the computer-readable instructions, when executed by the processor, may cause the processor to perform the semantic segmentation method for a three-dimensional image and the semantic segmentation method for a two-dimensional image. The network interface of the terminal is configured to communicate with an external terminal through a network connection. The display screen of the terminal may be a liquid crystal display screen or an electronic ink display screen. The input apparatus of the terminal may be a touchscreen covering the display screen, or may be a key, a trackball, or a touchpad disposed on a housing of the terminal, or may be an external keyboard, a touchpad, a mouse, or the like. A person skilled in the art may understand that, in the structure shown in FIG.18, only a block diagram of a partial structure related to a solution in the present disclosure is shown, which does not constitute a limitation to the terminal to which the solution in the present disclosure is applied. Specifically, the terminal may include more or fewer components than those shown in the figure, or some components may be combined, or a different component deployment may be used. In an embodiment, the semantic segmentation apparatus for a three-dimensional image and the semantic segmentation apparatus for a two-dimensional image provided in the present disclosure may be implemented in a form of computer-readable instructions, and the computer-readable instructions may be run on the terminal shown in FIG.18. The memory of the terminal may store program modules forming the semantic segmentation apparatus for a three-dimensional image and the semantic segmentation apparatus for a two-dimensional image, for example, the first obtaining module1410, the slicing module1420, the first scanning module1430, the first segmentation module1440, and the fusion module1450.
Computer-readable instructions formed by the program modules cause the processor to perform the steps in the semantic segmentation method for a three-dimensional image and the semantic segmentation method for a two-dimensional image in the embodiments of the present disclosure described in this specification. A person skilled in the art may understand that, in the structure shown in FIG.18, only a block diagram of a partial structure related to a solution in the present disclosure is shown, which does not constitute a limitation to the terminal to which the solution in the present disclosure is applied. Specifically, the terminal may include more or fewer components than those shown in the figure, or some components may be combined, or a different component deployment may be used. An embodiment of the present disclosure provides a computer-readable storage medium, storing computer-readable instructions, the computer-readable instructions being loaded and executed by a processor to perform operations performed in the semantic segmentation method for a three-dimensional image and the semantic segmentation method for a two-dimensional image according to the foregoing embodiments. A person of ordinary skill in the art may understand that all or some of the procedures of the method in the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium. When the program is executed, the procedures of the foregoing method embodiments may be implemented. Any reference to a memory, a storage, a database, or another medium used in the various embodiments provided in the present disclosure may include a non-volatile and/or volatile memory. The non-volatile memory may include a ROM, a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM) or a flash memory.
The volatile memory may include a RAM or an external high-speed cache. For the purpose of description instead of limitation, the RAM is available in a plurality of forms, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDRSDRAM), an enhanced SDRAM (ESDRAM), a synchronous link (Synchlink) DRAM (SLDRAM), a rambus direct RAM (RDRAM), a direct rambus dynamic RAM (DRDRAM), and a rambus dynamic RAM (RDRAM). It is to be understood that “a plurality of” described in this specification refers to two or more. “And/or” describes an association relationship for describing associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists. The character “/” in this specification generally indicates an “or” relationship between the associated objects. The sequence numbers of the foregoing embodiments of the present disclosure are merely for description purpose, and do not indicate the preference among the embodiments. A person skilled in the art can easily figure out other implementation solutions of the present disclosure after considering the specification and practicing the present disclosure disclosed herein. The present disclosure is intended to cover any variation, use, or adaptive change of the present disclosure. The variations, uses, or adaptive changes follow the general principles of the present disclosure and include common general knowledge or common technical means in the art that are not disclosed in the present disclosure. The specification and the embodiments are merely considered as examples, and the real scope and spirit of the present disclosure are pointed out in the following claims. 
It is to be understood that the present disclosure is not limited to the accurate structures that are described above and that are shown in the accompanying drawings, and modifications and changes may be made without departing from the scope of the present disclosure. The scope of the present disclosure is subject only to the appended claims.
11861502

Corresponding reference characters indicate corresponding components throughout the several views of the drawings. Skilled artisans will appreciate that elements in the FIGURES are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments.

DETAILED DESCRIPTION

Disclosed below are representative embodiments of methods, computer-readable media, and systems having particular applicability to systems and methods for building neural networks that describe physical structures. Described embodiments implement one or more of the described technologies. Various alternatives to the implementations described herein are possible. For example, embodiments described with reference to flowchart diagrams can be altered, such as, for example, by changing the ordering of stages shown in the flowcharts, or by repeating or omitting certain stages. "Optimize" means to improve, not necessarily to perfect. For example, it may be possible to make further improvements in a value or an algorithm which has been optimized. "Determine" means to get a good idea of, not necessarily to achieve the exact value. For example, it may be possible to make further improvements in a value or algorithm which has already been determined. A "cost function," generally, compares the output of a simulation model with the ground truth—a time curve that represents the answer the model is attempting to match. This gives us the cost—the difference between simulated truth curve values and the expected values (the ground truth).
The cost function may use a least squares function, a Mean Error (ME), Mean Squared Error (MSE), Mean Absolute Error (MAE), a Categorical Cross Entropy Cost Function, a Binary Cross Entropy Cost Function, and so on, to arrive at the answer. In some implementations, the cost function is a loss function. In some implementations, the cost function is a threshold, which may be a single number that indicates the simulated truth curve is close enough to the ground truth. In other implementations, the cost function may be a slope. The slope may also indicate that the simulated truth curve and the ground truth are of sufficient closeness. When a cost function is used, it may be time variant. It also may be linked to factors such as user preference, or changes in the physical model. The cost function applied to the simulation engine may comprise models of any one or more of the following: energy use, primary energy use, energy monetary cost, human comfort, the safety of building or building contents, the durability of building or building contents, microorganism growth potential, system equipment durability, system equipment longevity, environmental impact, and/or energy use CO2 potential. The cost function may utilize a discount function based on discounted future value of a cost. In some embodiments, the discount function may devalue future energy as compared to current energy such that future uncertainty is accounted for, to ensure optimized operation over time. The discount function may devalue the future cost function of the control regimes, based on the accuracy or probability of the predicted weather data and/or on the value of the energy source on a utility pricing schedule, or the like. A “goal state” may read in a cost (a value from a cost function) and determine if that cost meets criteria such that a goal has been reached. Such criteria may be the cost reaching a certain value, being higher or lower than a certain value, being between two values, etc. 
A goal state may also look at the time spent running the simulation model overall, if a running time has been reached, the neural network running a specific number of iterations, and so on. A machine learning process is one of a variety of computer algorithms that improve automatically through experience. Common machine learning processes are Linear Regression, Logistic Regression, Decision Tree, Support Vector Machine (SVM), Naive Bayes, K-Nearest Neighbors (kNN), K-Means Clustering, Random Forest, Backpropagation with optimization, etc. An "optimization method" is a method that takes a reverse gradient of a cost function with respect to an input of a neural network, and determines an input that more fully satisfies the cost function; that is, the new input leads to a lower cost, etc. Such optimization methods may include gradient descent, stochastic gradient descent, mini-batch gradient descent, methods based on Newton's method, inversions of the Hessian using conjugate gradient techniques, and evolutionary computation such as Swarm Intelligence, Bee Colony optimization, SOMA, and Particle Swarm. Non-linear optimization techniques and other methods known by those of skill in the art may also be used. In some machine learning techniques, backpropagation may be performed by automatic differentiation, or by a different method to determine partial derivatives of the node values within a neural network. A "state" as used herein may be Air Temperature, Radiant Temperature, Atmospheric Pressure, Sound Pressure, Occupancy Amount, Indoor Air Quality, CO2 concentration, Light Intensity, or another state that can be measured and controlled.

I. Overview

Artificial neural networks are powerful tools that have changed the nature of the world around us, leading to breakthroughs in classification problems, such as image and object recognition, voice generation and recognition, autonomous vehicle creation and new medical technologies, to name just a few.
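The optimization-method idea above (follow the reverse gradient of the cost with respect to the input to find an input with lower cost) can be sketched with plain gradient descent. The quadratic toy cost is purely illustrative, standing in for a cost computed from a building simulation:

```python
def optimize_input(cost_fn, grad_fn, x0, lr=0.1, goal=1e-6, max_iters=1000):
    # Gradient descent on the input itself: step against the gradient of
    # the cost until the goal state (cost below a value) or the iteration
    # budget is reached.
    x = x0
    for _ in range(max_iters):
        if cost_fn(x) <= goal:
            break
        x -= lr * grad_fn(x)
    return x

# Toy cost with its minimum at x = 3 (illustrative, not a building model).
cost = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
x_best = optimize_input(cost, grad, x0=0.0)
print(round(x_best, 2))  # 3.0
```

The same loop shape applies whether the gradient comes from an analytic formula, as here, or from automatic differentiation through a trained network.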
However, neural networks start from ground zero with no training. Training itself can be very onerous, both in that an appropriate training set must be assembled, and in that the training often takes a very long time. For example, a neural network can be trained for human faces, but if the training set is not perfectly balanced between the many types of faces that exist, even after extensive training, it may still fail for a specific subset; at best, the answer is probabilistic, with the highest-probability output being considered the answer. Existing approaches offer three steps to develop a deep learning AI model. The first step builds the structure of a neural network by defining the number of layers and the number of nodes in each layer, and determines the activation function that will be used for the neural network. The second step determines what training data will work for the given problem, and locates such training data. The third step attempts to optimize the structure of the model, using the training data, by checking the difference between the output of the neural network and the desired output. The network then uses an iterative procedure to determine how to adjust the weights to more closely approach the desired output. Exploiting this methodology is cumbersome, at least because training the model is laborious. Once the neural network is trained, it is basically a black box, composed of input, output, and hidden layers. The hidden layers are well and truly hidden, with no information that can be gleaned from them outside of the neural network itself. Thus, to answer a slightly different question, a new neural network, with a new training set, must be developed, and all the computing power and time that is required to train a neural network must be employed.
We describe herein a way to use a neural network to determine optimal control states for equipment (on, off, running at some intermediate value) within a physical space when given the energy input amounts needed for various zones within the space (zone energy inputs, or demand curves). "Physical space" should be understood broadly: it can be a building, several buildings, buildings and grounds around it, a defined outside space, such as a garden or an irrigated field, etc. A portion of a building may be used as well. For example, a floor of a building may be used, a random section of a building, a room in a building, etc. This may be a space that currently exists, or may be a space that exists only as a design. Other choices are possible as well. The physical space may be divided into zones. Different zones may have different sets of requirements for the amount of state needed in the zone to achieve the desired values. For example, for the state "temperature," a user Chris may like their office at 72° from 8 am-5 pm, while a user Avery may prefer their office at 77° from 6 am-4 pm. These preferences can be turned into comfort curves, which are chronological (time-based) state curves. Chris's office comfort curve may be 68° from midnight to 8 am, 72° from 8 am to 5 pm, then 68° from 5 pm to midnight. The comfort curves (for a designated space, such as Chris's office) are then used to calculate demand curves, which are the amount of state that may be input into the associated zones to achieve the state desired over time. For Chris's office, that is the amount of heat (or cold) that may be pumped into their office for the 24-hour time period covered by the comfort curve, that is, a zone energy input. These zones are controlled by one or more equipment pieces, allowing state in the space to be changed. Such zones may be referred to as controlled building zones.
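The comfort-curve example above can be sketched in code; the hourly list representation and the proportional heat-loss conversion below are simplifying assumptions chosen purely for illustration, not part of any embodiment:

```python
# Hypothetical sketch of Chris's office comfort curve (24 hourly setpoints)
# and a toy conversion to a demand curve (a zone energy input). The
# heat-loss model (demand proportional to the indoor-outdoor temperature
# difference) is an invented assumption for illustration only.

def chris_comfort_curve():
    """68 deg from midnight to 8 am, 72 deg from 8 am to 5 pm, then 68 deg."""
    return [68.0] * 8 + [72.0] * 9 + [68.0] * 7   # 24 hourly setpoints

def demand_curve(comfort, outdoor_temp=50.0, loss_coefficient=0.5):
    """Toy demand: heat input per hour needed to hold each setpoint."""
    return [loss_coefficient * (setpoint - outdoor_temp) for setpoint in comfort]

curve = chris_comfort_curve()
demand = demand_curve(curve)
```

The resulting list plays the role of the zone energy input described above: the amount of state that may be pumped into the zone over the 24-hour period.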
Once we have one or more demand curves, we then run an equipment neural network model forward with a control sequence as input to determine the demand output for that control sequence. That is, when we run equipment at various times in a structure, the structure, in turn, has some amount of state over time. This can be thought of as running a furnace from time T to time T+20, and from time T+120 to time T+145 in a structure with two zones. The heat propagates through both zones in the neural network, which includes the walls, the air, etc., and diffuses. The model outputs the amount of heat, in this case, from time T to time T+240 in both zones, giving us two demand curves. We then check the demand curve output against the desired "ground truth" demand curve using a cost function, and then machine learning techniques are used to tune the input values to create a new control sequence (or sequences). In some embodiments, a gradient of the cost function is calculated through backpropagation to the input (e.g., a resource), and then optimized by, e.g., a type of gradient descent, etc., giving us a new control curve to try. This is repeated until a goal state is reached. The last control sequence run is the control sequence that is then used to determine optimal equipment operation. The neural networks disclosed herein have potentially different activation functions that may be equations that model different resources. For example, a pump will be described by different equations than a motor, a boiler, a heating coil, etc. II. Computing Environment FIG.1illustrates a generalized example of a suitable computing environment100in which described embodiments may be implemented. The computing environment100is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in diverse general-purpose or special-purpose computing environments.
With reference toFIG.1, the core processing is indicated by the core processing130box. The core processing130includes at least one central processing unit110and memory120. The central processing unit110executes computer-executable instructions and may be a real or a virtual processor. It may also comprise a vector processor112, which allows same-length node strings to be processed rapidly. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power and as such the vector processor112, GPU115, and CPU can be running simultaneously. The memory120may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory120stores software185implementing the described methods of generating control sequences. A computing environment may have additional features. For example, the computing environment100includes storage140, one or more input devices150, one or more output devices155, one or more network connections (e.g., wired, wireless, etc.)160as well as other communication connections170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment100, and coordinates activities of the components of the computing environment100. The computing system may also be distributed; running portions of the control sequence generation software185on different CPUs. Embodiments may also be implemented in cloud computing environments. 
In this description and the following claims, "cloud computing" may be defined as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned via virtualization and released with minimal management effort or service provider interaction, and then scaled accordingly. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service ("SaaS"), Platform as a Service ("PaaS"), Infrastructure as a Service ("IaaS")), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.). The storage140may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, flash drives, or any other medium which can be used to store information and which can be accessed within the computing environment100. The storage140stores instructions for the software, such as control sequence generation software185to implement the described methods of control sequence generation. The input device(s)150may be a device that allows a user or another device to communicate with the computing environment100, such as a keyboard, video camera, microphone, mouse, pen, or trackball, a scanning device, a touchscreen, or another device that provides input to the computing environment100. For audio, the input device(s)150may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s)155may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment100.
The communication connection(s)170enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, or other data in a modulated data signal. Communication connections170may comprise input devices150, output devices155, and input/output devices that allow a client device to communicate with another device over network160. A communication device may include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication. These connections may include network connections, which may be a wired or wireless network such as the Internet, an intranet, a LAN, a WAN, a cellular network, or another type of network. It will be understood that network160may be a combination of multiple different kinds of wired or wireless networks. The network160may be a distributed network, with multiple computers, which might be building controllers, acting in tandem. A communication connection170may be a portable communications device such as a wireless handheld device, a cell phone device, and so on. Computer-readable media are any available non-transient tangible media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment100, computer-readable media include memory120, storage140, communication media, and combinations of any of the above. Computer readable storage media165, which may be used to store computer readable media, comprise instructions175and data180. Data Sources may be computing devices, such as general hardware platform servers configured to receive and transmit information over the communications connections170.
The computing environment100may be an electrical controller that is directly connected to various resources, such as HVAC resources, and which has a CPU110, a GPU115, memory120, input devices150, communication connections170, and/or other features shown in the computing environment100. The computing environment100may be a series of distributed computers. These distributed computers may comprise a series of connected electrical controllers. Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially can be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods, apparatus, and systems can be used in conjunction with other methods, apparatus, and systems. Additionally, the description sometimes uses terms like "determine," "build," and "identify" to describe the disclosed technology. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms will vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art. Further, data produced from any of the disclosed methods can be created, updated, or stored on tangible computer-readable media (e.g., one or more CDs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives)) using a variety of different data structures or formats. Such data can be created or updated at a local computer or over a network (e.g., by a server computer), or stored and accessed in a cloud computing environment.
FIG.2depicts a distributed computing system with which embodiments disclosed herein may be implemented. Two or more computerized controllers205may comprise all or part of a computing environment100,210. These computerized controllers205may be connected215to each other using wired or wireless connections215. These computerized controllers may comprise a distributed system that can run without using connections (such as internet connections) outside of the computing system200itself. This allows the system to run with low latency, and with other benefits of edge computing systems. FIG.3discloses a system300that determines control sequence curves from demand curves. In an exemplary environment, a control sequence generation system comprises inputting a demand curve305(that is, state needs, such as desired temperature over time) into a neural network equipment model315. Using machine learning techniques, the equipment model315produces a control state sequence310that can then be used by controllable equipment320. This may produce the desired amount of state (as represented by the original demand curve) in a given space. III. Exemplary Method Embodiment FIG.4depicts one method400for control sequence generation. The operations of method400presented below are intended to be illustrative. In some embodiments, method400may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method400are illustrated inFIG.4and described below is not intended to be limiting. In some embodiments, method400may be implemented in one or more processing devices (e.g., a distributed system, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
The one or more processing devices may include one or more devices executing some or all of the operations of method400in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method400. At operation405, a neural network of a plurality of controlled devices is received. The neural network model may have been stored in memory, and so may be received from the processing device that the model is being run on. In some implementations, the neural network model may be stored within a distributed system, and received from more than one processor within the distributed system, etc. A controlled device is a device that has controls, such as on-off switches, motors, variable controls, etc. such that a computer can modify its behavior. These controls may be wired, wireless, etc. In some embodiments described herein, in a neural network, the fundamentals of physics are utilized to model single components or pieces of equipment on a one-to-one basis with neural network nodes. Some nodes use physics equations as activation functions. Different types of nodes may have different equations for their activation functions, such that a neural network may have multiple activation functions within its nodes. When multiple components are linked to each other in a schematic diagram, a neural network is created that models the components as nodes. The values between the objects flow between the nodes as weights of connected edges. These neural networks may model not only the real complexities of systems but also their emergent behavior and the system semantics. Therefore, they may bypass two major steps of the conventional AI modeling approaches: determining the shape of the neural net, and training the neural network from scratch. 
As the nodes are arranged in order of an actual system (or set of equations), because the nodes themselves comprise an equation or a series of equations that describe the function of their associated object, and because certain relationships between them are determined by their location in the neural net, a huge portion of training is no longer necessary, as the neural network itself comprises location information, behavior information, and interaction information between the different objects represented by the nodes. Further, the values held by nodes in the neural network at given times represent real-world behavior of the objects so represented. The neural network is no longer a black box but itself contains important information. This neural network structure also provides much deeper information about the systems and objects being described. Since the neural network is physics- and location-based, unlike the conventional AI structures, it is not limited to a specific model, but can run multiple models for the system that the neural network represents without requiring separate creation or training. In some embodiments, the neural network that is described herein chooses the location of the nodes to convey information about the physical nature of the system. The nodes are arranged in a way that references the locations of actual objects in the real world. The neural network also may incorporate actual equations that determine object behavior into the activation functions of the nodes. The weights that move between nodes are equation variables. Different nodes may have unrelated activation functions, depending on the nature of the model being represented. In an exemplary embodiment, each activation function in a neural network may be different. As an exemplary embodiment, a pump could be represented in a neural network as a network node with multiple variables (weights on edges), some variables that represent efficiency, energy consumption, pressure, etc.
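As a hedged sketch of such a node, a pump might be modeled as follows; the specific formulas, class structure, and parameter values are invented for illustration and do not reflect any particular pump or embodiment:

```python
# Hypothetical pump node: its "activation function" is a physics-style
# relation, and the values it returns act as the variables (edge weights)
# passed to downstream nodes. Formulas and numbers are illustrative only.

class PumpNode:
    def __init__(self, max_flow=10.0, efficiency=0.8):
        self.max_flow = max_flow        # maximum volume flow rate parameter
        self.efficiency = efficiency    # pump efficiency parameter

    def activate(self, shaft_speed_ratio):
        """Toy physics: flow scales with shaft speed; hydraulic power
        follows from flow and efficiency."""
        flow = self.max_flow * shaft_speed_ratio
        hydraulic_power = flow / self.efficiency
        return {"flow": flow, "hydraulic_power": hydraulic_power}

pump = PumpNode()
outputs = pump.activate(shaft_speed_ratio=0.5)   # values passed along edges
```

Here the node's internal parameters (maximum flow, efficiency) correspond to the parameter values described above, and the returned dictionary stands in for the multiple variables a node may send along its edges.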
The nodes will be placed such that one set of weights (variables) feeds into the next node (e.g., with equation(s) as its activation function) that uses those variables. Unlike other types of neural networks, two required steps in earlier neural network versions (shaping the neural net and training the model) may already be performed. Using embodiments discussed herein, the neural network model need not be trained on some subset of information that is already known. In some embodiments, the individual nodes represent physical representations. Individual nodes may hold parameter values that help define the physical representation. As such, when the neural network is run, the parameters helping define the physical representation can be tweaked to more accurately represent the given physical representation. This has the effect of pre-training the model with a qualitative set of guarantees, as the physics equations that describe objects being modeled are true, which saves having to find training sets and using huge amounts of computational time to run the training sets through the models to train them. A model does not need to be trained with information about the world that is already known. With objects connected in the neural network similarly to how they are connected in the real world, emergent behavior arises in the model that, in certain cases, maps to the real world. This model behavior that is uncovered is often otherwise too computationally complex to determine. Further, the nodes represent actual objects, not just black boxes. The behavior of the nodes themselves can be examined to determine behavior of the object, and can also be used to refine the understanding of the object behavior. One example of such heterogenous models is described in U.S. patent application Ser. No. 17/143,796, filed on Jan. 7, 2021, which is incorporated herein in its entirety by reference. At operation410, a simulated control sequence is received.
Initially, the values of the control sequence curve may be random, may be a control sequence from another similar model run, etc. The control curve comprises instructions to control one or more controllable devices over time for at least one of the plurality of controlled building zones that are modeled by the neural network. As a brief overview, in an illustrative embodiment, we have the demand curves we want zones (e.g., areas) to conform to, such as Chris's office, as described above, and we wish to find the control sequences (i.e., how to run the equipment over time) to meet the state amount indicated by the demand curve. To do so, we use simulated control curves that control (simulated) equipment by turning it on or off, setting it to intermediate values, etc., as input to the model, and run the model, which outputs the simulated demand curve for the given control sequence. FIG.5is a diagram500showing a high-level exemplary embodiment of the input and output of a neural network model. With reference toFIG.5, this entails using a control sequence (e.g., equipment behavior over time t0to t24) as input into an equipment model515. The equipment model runs forward and produces a simulated demand curve510for the same time period (t0to t24) as the control sequence. A new control sequence is determined and then fed back520into the model515, until a suitable simulated demand curve is created. This entails the simulated demand curve being sufficiently close to the desired demand curve. At operation420, a machine learning process is performed to run the neural network using a simulated control sequence as input and receiving the simulated demand curve425as output. Running the model may entail feedforward: running the control sequence through the model to the outputs over time T(0)-T(n), capturing state output values (within neurons that represent resources that modify state) over the same time T(0)-T(n). At operation425, simulated demand curve(s) are output.
In some embodiments, the demand curve is output425successively in timesteps during the model run, or other methods are used. The first time the neural network is run, a control sequence410may be supplied. This initial control sequence may be determined randomly, or another method may be used, such as a control sequence stored previously that was used as the solution to a similar demand curve problem. At operation415, the desired demand curve(s) are received. These are the curves that describe the amount of state that is needed over time. These may also be called ground truth demand curves. Ground truth is the information provided by direct evidence or is the desired output from the neural network. At operation430, a cost function is computed using the time series of desired demand curve(s) and the model output (a simulated demand curve). The cost function measures the difference between the time series of desired demand curve(s)415and the simulated demand curve(s) output425from the neural network420. Details of the cost function are described elsewhere. At operation435, a goal state is checked to determine if a stopping state has been reached. The goal state may be that the cost from the cost function is within a certain value, that the program has run for a given time, that the model has run for a given number of iterations, that the model has run for a given time, that a threshold value has been reached, such as the cost function being equal to or lower than the threshold value, or a different criterion may be used. If the goal state has not been reached, then a new set of inputs needs to be determined that is incrementally closer to an eventual answer: a lowest (or highest or otherwise determined) value for the cost function, as described elsewhere.
At operation445, if the goal state435has determined that a stopping state has been reached, then the control sequence that was used for the last heterogenous model run is set as the solved control sequence; that is, the control sequence that will meet the requirements for the desired demand curve, within some range. This method can save as much as 30% of energy costs over adjusting the state when the need arises. If the goal state has not been reached, then the determine new control sequence step440, the run neural network step420, the output simulation demand curve step425, and the compute cost function step430are iteratively performed, which incrementally optimizes the control sequence until the goal state435is reached. In some implementations, once the control sequence has been determined445, it can then be used to run the equipment that is associated with the equipment modeled in the neural network. Controlling equipment in such a predetermined fashion can save greatly on energy costs, as well as more accurately control state in a defined space for the people and objects therein. At operation440new simulated control sequences are determined for the next run of the neural network. This may be performed using the cost function, by using machine learning algorithms, etc. In some embodiments, backpropagation is used to determine a gradient of the cost function in relation to the various values in the neural network. This gradient may then be used to optimize the control sequence for the next model run. FIG.6is a functional block diagram600showing different machine learning functions. At operation440, control sequences are determined. These control sequences may be determined by using machine learning605. Machine learning techniques605may comprise determining gradients of the various variables within the neural network with respect to the cost function. Once the gradients are determined, gradient methods may be used to incrementally optimize the control sequences.
The gradient at a location shows which way to move to minimize the cost function with respect to the inputs (i.e., the control sequences). In some embodiments, gradients of the internal variables with respect to the cost function are determined610. In some embodiments, internal parameters of the nodes have their partial derivatives calculated, which gives the gradient. Different nodes may have different parameters. For example, a node modeling a pump may have parameters such as density, shaft speed, volume flow ratio, hydraulic power, etc. If the functions involved are differentiable, then backpropagation615can be used to determine the partial derivatives. Backpropagation finds the derivative of the error (given by the cost function) for the parameters in the neural network; that is, backpropagation computes the gradient of the cost function with respect to the parameters within the network. More specifically, backpropagation615calculates the derivative between the cost function and parameters by using the chain rule, working from the last neurons calculated during the feedforward propagation, through the internal neurons, to the first neurons calculated (a backward pass); that is, taking the gradient of the thermodynamic model backward in relation to the cost. In some embodiments, backpropagation will be performed by automatic differentiation620. According to Wikipedia, "automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra." Other methods may be used to determine the parameter partial derivatives. These include Particle Swarm and SOMA (Self-Organizing Migrating Algorithm), etc. The backpropagation may work on a negative gradient of the cost function, as the negative gradient points in the direction of smaller values.
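The augmented-arithmetic description of automatic differentiation quoted above can be sketched with dual numbers (forward mode). This minimal class is illustrative only and omits subtraction, division, and the other operators a full implementation would extend:

```python
# Minimal dual-number sketch of forward-mode automatic differentiation:
# every number carries an extra component holding its derivative, and the
# arithmetic operators are extended to propagate that component.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value   # the ordinary real value
        self.deriv = deriv   # the derivative component

    def __add__(self, other):
        # sum rule: (u + v)' = u' + v'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

# Differentiate f(x) = x*x + x at x = 3; analytically f'(x) = 2x + 1, so f'(3) = 7.
x = Dual(3.0, 1.0)   # seed the derivative of x with respect to itself: 1
f = x * x + x
```

Evaluating the expression once yields both the function value and its derivative, with no separate symbolic or numeric differentiation step.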
After the partial derivatives are determined, the control sequence is optimized625to lower the value of the cost function with respect to the inputs. This process is repeated incrementally. Many different optimizers may be used, which can be roughly grouped into 1) gradient descent methods630and 2) other methods635. Among the gradient descent methods630are standard gradient descent, stochastic gradient descent, and mini-batch gradient descent. Among the other methods635are Momentum, Adagrad, AdaDelta, ADAM (adaptive moment estimation), and so on. Once a new sequence is determined, the neural network is run again420. FIG.7depicts a physical system700whose behavior can be determined by using a neural network. This physical system700comprises a simple heating system with a pump725, a boiler740, and a heating coil750that produces hot air. The pump itself comprises a control705that sends a turn-on signal to a relay710, which then sends power to a motor720that drives a pump725. The pump sends water to a boiler740, which is likewise turned on by a control730-relay735power745system. The boiler then sends hot water to a heating coil, which transforms the hot water into hot air. FIG.8depicts a heterogenous neural network800that may be used to model behaviors of the physical system ofFIG.7. Nodes are placed in locations with reference to the physical equipment behavior, such that the control node805is connected to relay node810, and the relay node is connected to Power node815. Relay node810is also connected to motor node820and pump node825. When the control node805receives an input to turn on, that information is relayed through the relay node, which signals the power node to turn on and the motor node to turn on. These, in turn, signal the pump node to turn on. The power node may, for example, send a voltage signal to the relay node, which may pass it on to the motor node.
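The propagation just described (control to relay to motor to pump, with values flowing along the edges) might be sketched as follows; the conversion factors and function names are invented for illustration and are not taken from any embodiment:

```python
# Illustrative value propagation through a control-relay-motor-pump chain
# of nodes, with each function's return value acting as the variable passed
# along an edge. All conversion factors are invented for illustration.

def control_node(on):
    return 1.0 if on else 0.0                 # on/off signal sent to the relay

def relay_node(signal, supply_voltage=230.0):
    return supply_voltage * signal            # passes voltage only when signaled

def motor_node(voltage, volts_per_rpm=0.1):
    return voltage / volts_per_rpm            # toy electrical-to-mechanical step

def pump_node(rpm, flow_per_rpm=0.002):
    return rpm * flow_per_rpm                 # mechanical rotation produces flow

signal = control_node(on=True)
voltage = relay_node(signal)
rpm = motor_node(voltage)
flow = pump_node(rpm)
```

Turning the control node off drives every downstream value to zero, mirroring how an "off" input propagates through the node chain described above.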
An activation function of the motor node may have associated with it a series of equations that take the signal from the relay node and turn it into mechanical rotation for the pump node825to use. The pump node825may also have a water input885with its own properties. Similarly, the control node830, when input with an "on," or some other method to indicate an on action, will turn on the boiler node840through passing on an "on"855to a relay node835, which then turns on the power node845through variables sent through edge860. Power node845then passes electricity along edge865through the relay node835edge875to the boiler node840, which then, e.g., uses variables from the pump node825and its own activation function equations that model its physics properties to do the model equivalent of heating water. This, in turn, passes variables that heat up the heating coil node850. Heating coil node850intakes air values along edge870and produces hot air values880. The values880may be the simulated demand curve440for this model. In some embodiments, this system would produce a neural network that used two control sequences, one for control node805, and one for control node830. It would produce one demand curve, the output from the heating coil node850. In some implementations, some nodes within a neural network have many variables that are passed among the nodes, and have different (heterogenous) activation functions. For example, an exemplary boiler activation function may describe, using equations, the activation of a boiler, e.g., boiler node840.
This may be, in whole or in part: inputPower=inputVoltage*inputCurrent; PLR=inputPower/Nominal power; Resistance=f(Nominal pressure drop, Nominal flow rate); Efficiency=f(Efficiency coefficients, PLR, nominal temperature); Power=f(PLR, Efficiency, Full load efficiency, Capacity); specificEnthalpy=f(input specificEnthalpy, Power, fluid flow rate); Pressure drop=f(Flow, resistance); Pressure=Pressure−Pressure drop, and so forth. Exemplary weight values in a neural network that might be used as variables in an activation node for a boiler may be: Nominal temperature; Nominal power; Full load efficiency; Nominal pressure drop; Nominal flow rate. These variables may arrive at the node through an edge from another node, or as an input. One node may send multiple variables to another node. Exemplary equations to describe a pump that are used as an activation function in a node, e.g., pump node825, may be: Volume flow rate=f(qFlow, density); Volume flow rate ratio=Volume flow rate/Max volume flow rate; Shaft speed ratio=qAngularVelocity/Max shaft speed; Pressure head=pressure curve(Volume flow rate, Shaft speed ratio); and so forth. Exemplary weight values in a neural network that might be used as variables in an activation node for, e.g., a pump may be: Pressure curve points; Power curve points; Efficiency curve points; Max volume flow rate; Max pressure head; Max shaft speed, and so forth.

IV. Exemplary System Embodiment

FIG.9depicts one system900for control sequence generation. The system may include a computer environment100. The system includes a time series control sequence905that comprises instructions to control a controllable device over time. Initially, the values of the control sequence may be random, may be a time series control sequence from another similar model run, etc. A target demand curve910that comprises an amount of state in a location over time is also included.
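As an illustration of the exemplary boiler equations above, the following is a minimal Python sketch of a boiler node's activation function. The function name, the linear part-load efficiency curve, and all coefficient values are assumptions made for illustration; they are not taken from the disclosure.

```python
# Hypothetical sketch of a boiler node's activation function, loosely
# following the equations listed above. A real node would use
# manufacturer-supplied efficiency coefficients and curves.

def boiler_activation(input_voltage, input_current, nominal_power,
                      full_load_efficiency, inlet_enthalpy, flow_rate):
    input_power = input_voltage * input_current
    plr = input_power / nominal_power            # part-load ratio (PLR)
    # Assume efficiency falls off linearly at part load (illustrative only).
    efficiency = full_load_efficiency * (0.5 + 0.5 * plr)
    power = plr * efficiency * nominal_power     # heat delivered to the fluid
    # Specific enthalpy rise of the fluid: q = m_dot * delta_h.
    outlet_enthalpy = inlet_enthalpy + power / flow_rate
    return {"plr": plr, "efficiency": efficiency,
            "power": power, "outlet_enthalpy": outlet_enthalpy}

out = boiler_activation(input_voltage=230.0, input_current=10.0,
                        nominal_power=4600.0, full_load_efficiency=0.9,
                        inlet_enthalpy=125_000.0, flow_rate=0.2)
print(out["plr"], out["outlet_enthalpy"])
```

The returned variables would travel along edges to downstream nodes (e.g., the heating coil node), just as the edge variables do inFIG.8.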
As described with reference toFIG.5, a time series control sequence505is used as input. The pictured control sequence describes a piece of equipment turning on525and off530over time. Different types of equipment may have intermediate values and other sorts of values that are also indicated in a control sequence. A target demand curve910is also included; it is the ground truth value that the model is trying to match. This demand curve is the amount of state over time that should be in a space associated with the model, a space associated with a portion of the model (sometimes called zones), etc. A thermodynamic model915is used that models the equipment that is to be used (or is being used) in a space. Examples of such models are shown with reference toFIG.7andFIG.8. The model is a thermodynamic model because it incorporates thermodynamic qualities of objects in the model by way of the activation functions. The thermodynamic model may be a heterogenous physics network; heterogenous because the activation functions of the nodes may be different, as different resources behave differently. This may include nodes that are thermodynamic representations of equipment or portions of equipment. They may be thermodynamic representations by being associated with equations that model thermodynamic behavior of the equipment/portions of equipment. A computing engine920runs the thermodynamic model using the time series control sequence as input, and outputs a simulated demand curve. With reference toFIGS.5and8, in some embodiments, a computing engine920runs the thermodynamic model by inputting a control sequence over time505, and running the values through the model, e.g., according to the arrows shown inFIG.8. The model then outputs a demand curve880. A cost function determiner925compares the difference between the simulated demand curve and the target demand curve, producing a cost.
This cost is a performance metric that describes how close the simulated demand curve is to the target demand curve. A new model input determiner930determines a new time series control sequence that should give an output demand curve with a lower cost, i.e., one that is closer to the target demand curve. The new model input determiner930may use a backpropagator935to help determine the new time series control sequence. This backpropagator may use an automatic differentiator940to determine gradients of the various variables (weights, values, etc.) within the thermodynamic model. This may produce the negative gradient of the cost function for the weights and/or variables used in the cost functions, examples of which can be seen above. Once the negative gradients are determined, then an optimizer945can use the negative gradients to incrementally change the time series control sequence such that the thermodynamic model's output more closely approaches the target demand curve910. As described with reference toFIG.6, the optimizer may optimize inputs using gradient descent methods630or other methods635. An iterator950runs the thermodynamic model915using the computing engine920. It then uses the cost function determiner to determine how close the simulated demand curve computed by the computing engine is to the target demand curve. If a goal function has been met, e.g., if the two curves are close enough, if a certain number of iterations have been run, if the model has run for a certain time, etc., then the iterator quits, and the last determined time series control sequence is considered the control sequence shown inFIG.3at310that is output from the larger model. If a goal function has not been met, then the iterator runs the thermodynamic model915using the computing engine920until the cost function determiner determines that the goal state has been met. Once a control sequence has been generated, it may be used to control the equipment that was modeled.
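The run-score-update cycle carried out by the computing engine920, cost function determiner925, optimizer945, and iterator950might be sketched as follows. The toy "model" (a simple smoothing of the control values) and the finite-difference gradients stand in for the thermodynamic network and the automatic differentiator940; all names, constants, and the quadratic tracking cost are illustrative assumptions, and a practical cost could add further terms (e.g., a short-cycling penalty).

```python
# Minimal sketch of iterate-until-goal: run the model, score the simulated
# demand curve against the target, and nudge the control sequence downhill.

def run_model(controls):
    # Toy simulated demand curve: each output is the mean of a control value
    # and its predecessor (a crude stand-in for thermal inertia).
    return [(controls[i] + controls[max(i - 1, 0)]) / 2
            for i in range(len(controls))]

def cost(simulated, target):
    # Sum of squared differences between simulated and target curves.
    return sum((s - t) ** 2 for s, t in zip(simulated, target))

def iterate(controls, target, lr=0.5, max_iters=500, tol=1e-6):
    iteration = 0
    for iteration in range(max_iters):
        c = cost(run_model(controls), target)
        if c < tol:                      # goal function met
            break
        # Finite-difference partial derivatives of the cost w.r.t. inputs.
        eps = 1e-6
        grads = []
        for i in range(len(controls)):
            bumped = list(controls)
            bumped[i] += eps
            grads.append((cost(run_model(bumped), target) - c) / eps)
        # Move each control value against its gradient (gradient descent).
        controls = [v - lr * g for v, g in zip(controls, grads)]
    return controls, iteration

controls, iters = iterate([0.0, 0.0, 0.0, 0.0],
                          target=[1.0, 1.0, 1.0, 1.0])
print([round(c, 3) for c in controls], iters)
```

An automatic differentiator would replace the finite-difference loop with a single backward pass, which is what makes the approach tractable for networks with many nodes and variables.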
Cost functions can minimize many different aspects of a thermodynamic model, depending on which variables are viewed when calculating the cost. For example, a cost function determiner may comprise a cost function which looks at variables within the neural network to minimize short cycling frequency by calculating a cost based on, e.g., the length of cycles. Other cost functions may minimize a next time series control sequence of operation error, a next time series output path error, an energy cost, a state cost, a frequency of sequence changes, an equipment life cost, a comfort value, etc. FIG.10discloses a node system1000that has multiple inputs. A node may be affected by other forces not directly built into the thermodynamic model. For example, the value of a solar panel node1015may depend, at least partly, on the weather1005, the orientation, the time of year, etc. These forces may be included in the thermodynamic model as a time series curve input1010that models the outside influence as a state curve with values over a series of time steps. These values may then be incorporated into a node1015as a direct input1020, as a feed-forward value from one node to another1025, or incorporated in a different manner. In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.
11861503

DETAILED DESCRIPTION The figures and the following description describe certain embodiments by way of illustration only. Wherever practicable, similar or like reference numbers are used in the figures to indicate similar or like functionality. One skilled in the art will readily recognize that alternative embodiments of the structures and methods may be employed without departing from the principles described. FIG.1illustrates one embodiment of a system environment100suitable for analyzing claims. In the embodiment shown, the system environment100includes a client device105, a network110, a claim database115, and a claim analysis system125. In other embodiments, the system environment100includes different and/or additional elements. In addition, the functions may be distributed among the elements in a different manner than described. The client device105is one or more computing devices capable of receiving user input as well as transmitting and/or receiving data via a network110. In one embodiment, a client device105is a computer system, such as a desktop or a laptop computer. Alternatively, a client device105may be a device having computer functionality, such as a personal digital assistant (PDA), a mobile telephone, a smartphone, or another suitable device. A client device105is configured to communicate via the network110. The client device105may execute an application allowing a user of the client device105to interact with the claim analysis system125via a user interface. For example, a web browser application may enable interaction between the client device105and the claim analysis system125via the network110or a graphical user interface may be provided as part of a software application published by the claim analysis system125and installed on the client device105.
Alternatively, a client device105interacts with the claim analysis system125through an application programming interface (API) running on a native operating system of the client device105, such as IOS® or ANDROID™. The claim database115is one or more machine-readable media that stores claims120. Claims120may be based on standard forms for outpatient and provider services and billing forms for inpatient services. For example, claims may be based on the Centers for Medicare and Medicaid Services (CMS)1500form. Claims include patient information such as patient demographics (e.g., name, address, birth date, gender, and marital status), employment and insurance status, occupational limitations, dates of service, diagnoses and procedures, service provider information, and charges for services. In some embodiments, claim data is temporally bound such that the claim primarily reflects the diagnoses and services that occurred on the date when the claim was submitted. As such, claim data may be configured to not convey information that occurred during previous appointments. The claim database115may store the claims as raw claim data and/or as claim sequences generated by the claim analysis system125that include multiple elements (“features”) representing the claim. The claim database115may also include training data used to train one or more models of the claim analysis system125. Training data may include claim response information, such as whether the claim was denied, a response date for the claim, and reasons for claim denial. In one embodiment, a module with similar or identical functionality to the claim database115is integrated into the claim analysis system125. The claim analysis system125analyzes claims to predict a payer response. The claim analysis system125predicts the likelihood the claim will be denied, a response date for the claim, and/or reasons for claim denial using a claim sequence of features representing the claim.
The claim analysis system125provides the prediction for display (e.g., on a user interface on a client device105or a display of the claim analysis system). The claim analysis system125may also determine which aspects of the claim contributed most significantly to a claim's denial prediction. In one embodiment, the claim analysis system125does this by predicting a suspiciousness score for a portion of the claim features in a corresponding claim sequence. Further, the claim analysis system125provides users with a user interface to view suspiciousness scores and modify claim data accordingly. In this way, the claim analysis system125allows users to identify and rectify data that may have been entered incorrectly (e.g., due to human error). The claim analysis system125may also compare multiple claims across one or more health systems to identify patterns in claim data. The claim analysis system125may then determine correlations between claim data and denial probabilities, claim data and claim denial reason codes, patterns in response dates, and the like. The user device105, claim database115, and claim analysis system125are configured to communicate via a network110, which may include any combination of local area and/or wide area networks, using both wired and/or wireless communication systems. In one embodiment, a network110uses standard communications technologies and/or protocols. For example, a network110includes communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, code division multiple access (CDMA), digital subscriber line (DSL), etc. Examples of networking protocols used for communicating via the network110include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP). 
Data exchanged over a network110may be represented using any suitable format, such as hypertext markup language (HTML) or extensible markup language (XML). In some embodiments, all or some of the communication links of a network110may be encrypted using any suitable technique or techniques. FIG.2shows one embodiment of the claim analysis system125ofFIG.1. In the embodiment shown, the claim analysis system125includes a claim store205, a claim feature store210, a model store215, a claim feature module220, a model training module225, an interpretability module230, and a user interface235. In other embodiments, the claim analysis system125includes different and/or additional elements. In addition, the functions may be distributed among the elements in a different manner than described. The claim analysis system125maintains claims in the claim store205. The claim store may include local copies of some or all of the claims120stored in the claim database115. Claims120may be based on standard forms for outpatient and provider services and billing forms for inpatient services. As previously discussed, claim data includes, but is not limited to, a combination of patient information such as patient demographics (e.g., name, address, birth date, gender, and marital status), employment and insurance status, occupational limitations, dates of service, diagnoses and procedures, service provider information, and charges for services. The claim store205may also include the duration of the corresponding service, total charges, time between the services and claim submission date, and the like. Claim data may also include metadata, such as claim creation date, edit date, and claim author. Further, the claim store205may store a log of changes made to claims for auditing and troubleshooting purposes. The claim data store205may be encrypted to protect the privacy of patients and subscribers corresponding to the claims. 
The claim store205may also store claim response predictions of analyzed claims. The predictions stored by the claim store205may include a likelihood the claim will be denied, claim-level reason code classifications, service-level reason code classifications, a response date estimation, or a combination thereof. Reason codes may include Remittance Advice Remark Codes (RARCs), Claim Adjustment Reason Codes (CARCs), or any other suitable code. The claim store205may store claim-level reason codes and service-level reason codes as a vector, where each element of the vector corresponds to a reason code, and the value of each element represents the likelihood the corresponding reason code contributed to the denial of the claim. In addition, the claim store205may store training data used to train and validate one or more models of the claim analysis system125. Training data may be extracted from historical claims, and may include claim data and corresponding claim response information, such as whether the claim was denied, a response date for the claim, and reasons for claim denial. The claim feature store210maintains claim sequences of claims stored in the claim store205and generated by the claim feature module220. Claim sequences are an unordered collection of medical events and aggregations of diverse code types that have implicit interrelations (e.g., between demographics and diagnoses). As such, each claim sequence is composed of multiple features that describe the claim data of a corresponding claim. Features may include, but are not limited to, patient gender, an individual relationship code, a payer state, the duration of the corresponding service, the subscriber's age, the patient's age, a payer identifier, the total charges, the service date, and the claim submission and/or transmission date. The claim sequence also includes an indication of the procedures performed and the diagnoses received. 
Each feature value is assigned either a single unique token, for singular elements, or a subsequence of tokens. For example, demographic information may be assigned single unique tokens, and procedures and diagnoses may be assigned subsequences of tokens. Further, the values of some features may be binary, and the values of other features may be normalized counts between zero and unity. Accordingly, claim sequences may comprise two or more sub-vectors. In addition, the claim feature store210may store suspiciousness scores for claim sequence features. Suspiciousness scores reflect the impact individual features have on the claim's denial prediction. The claim analysis system125maintains the model parameters and hyper-parameters generated by the model training module225and/or the interpretability module230in the model store215. Examples may include network layers, embedding representations, weight matrices, and the like. The model store215may also include optimization techniques, such as optimizer types, loss functions, etc. In addition, the model store215may include baseline models that are used to evaluate the performance of the claim analysis system125. In some embodiments, models maintained by the model store215are system-specific. For example, the model store215may maintain health care system-specific models for the analysis of claims associated with different health care systems, claim form-specific models for the analysis of claims provided on different claim forms (e.g.,837forms and835forms), and the like. The claim feature module220generates claim sequences that represent claims stored in the claim store205. The claim feature module220does this by tokenizing claim data of corresponding claims. When the feature is a singular element (e.g., demographic information, insurance information, etc.), the claim feature module220assigns the feature a single unique token.
When the feature includes multiple elements, e.g., when the feature is a procedure or diagnosis, the claim feature module220assigns the feature a subsequence of tokens. In some embodiments, the claim feature module220maps less frequent tokens to an out-of-vocabulary token. For example, procedure tokens that appear less than a threshold number of times in a dataset may be mapped to an out-of-vocabulary token. In these embodiments, the context of features mapped to out-of-vocabulary tokens within the corresponding claim may be identified by the machine learning model configured to predict payer response. Based on the context of out-of-vocabulary tokens, users can identify response patterns for less frequent procedures and diagnoses. For example, users may be able to determine claims associated with infrequently performed procedures are more likely to be denied or have an increased response date. The use of out-of-vocabulary tokens may reduce sparsity. Further, the claim feature module220may assign tokens with binary values to some features and tokens with numeric values to other features based on characteristics of the feature. For example, a patient's age may be discretized in years, while date features may be mapped to tokens in years, months, and days. Similarly, charge amount features may be mapped to tokens quantized to thousands, hundreds, tens, and ones. In some embodiments, the claim feature module220normalizes tokens with numeric values. In some embodiments, features representing demographic information may be assigned binary tokens, and features representing procedures and diagnoses may be assigned numeric tokens. In these embodiments, the claim feature module220expresses procedure and diagnosis tokens as normalized count sequences, xCand xD. The length of xCand xDmay correspond to the number of possible procedure and diagnosis tokens, respectively.
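The token-assignment scheme described above (single unique tokens, out-of-vocabulary mapping for rare codes, and discretized numeric fields) might be sketched as follows. The vocabulary threshold, the example codes, and all function names are assumptions made for illustration.

```python
# Hypothetical sketch of claim-data tokenization: frequent codes keep their
# own tokens, rare or unseen codes collapse to an out-of-vocabulary token,
# and a numeric field (age) is discretized into a single token.

from collections import Counter

OOV = "<OOV>"

def build_vocab(code_lists, min_count=2):
    # Keep only codes that appear at least min_count times in the dataset.
    counts = Counter(code for codes in code_lists for code in codes)
    return {code for code, n in counts.items() if n >= min_count}

def tokenize_codes(codes, vocab):
    # Map rare/unseen codes to the shared out-of-vocabulary token.
    return [code if code in vocab else OOV for code in codes]

def tokenize_age(age_years):
    # Discretize a patient's age in whole years, as a single token.
    return f"AGE_{int(age_years)}"

claims = [["99213", "J1100"], ["99213", "Q9967"], ["99213"]]
vocab = build_vocab(claims)
tokens = tokenize_codes(["99213", "J1100", "XXXXX"], vocab)
print(tokens, tokenize_age(42.7))
```

Collapsing rare codes this way is one concrete mechanism behind the sparsity reduction the passage describes.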
Similarly, in these embodiments, the claim feature module220expresses demographic tokens as a binary sequence, xO. The length of xOmay be the total number of single unique tokens. In this way, the claim feature module220may express a claim sequence, x, as a combination of three subsequences, xC, xD, and xO(Equation 1). A claim sequence, x, may have a length in thousands and include both numeric and binary tokens. x=(xC,xD,xO) (1) From the claim sequences, the claim feature module220generates input vectors that are applied to a trained machine learning model. The input vector may include all of the features included in a claim sequence. Alternatively, the input vector may include a portion of the features included in a claim sequence. The claim feature module220may select a subset of features to include in an input vector based on the requested output of the trained machine learning model, size requirements, user preferences, features of the claim data, and the like. The model training module225trains a machine learning model to predict a payer's response to a claim. In some embodiments, the machine learning model is a trained neural network. In these embodiments, the model training module225trains a first portion of the neural network to generate an embedding from an input vector. The embedding may be a fixed-sized vector with a lower dimensionality than the input vector. For example, the input vector may include thousands of dimensions (features) and the embedding may have 94 dimensions, 128 dimensions, 200 dimensions, etc. The model training module225also trains a second portion of the neural network to predict the payer's response from the embedding. The prediction includes a likelihood the claim will be denied, a response date estimation, and/or one or more sets of reason codes delineating reasons why the claim may be denied. The model training module225does this by training task-specific and task-agnostic neural network layers.
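A toy illustration of assembling a claim sequence per Equation (1): procedure and diagnosis tokens become normalized count sub-vectors, and demographic tokens become a binary presence sub-vector. The miniature vocabularies and helper names are invented for illustration; real vocabularies run to thousands of tokens.

```python
# Hypothetical sketch of building x = (xC, xD, xO) from tokenized claim data.

PROC_VOCAB = ["99213", "99214", "J1100"]            # toy procedure tokens
DIAG_VOCAB = ["E11.9", "I10"]                       # toy diagnosis tokens
DEMO_VOCAB = ["GENDER_F", "GENDER_M", "STATE_CA", "STATE_NY"]

def normalized_counts(tokens, vocab):
    # Count each vocabulary token, then normalize counts to sum to one.
    counts = [tokens.count(v) for v in vocab]
    total = sum(counts)
    return [c / total if total else 0.0 for c in counts]

def binary_presence(tokens, vocab):
    # 1 if the single unique token is present in the claim, else 0.
    return [1 if v in tokens else 0 for v in vocab]

def claim_sequence(procs, diags, demos):
    x_c = normalized_counts(procs, PROC_VOCAB)
    x_d = normalized_counts(diags, DIAG_VOCAB)
    x_o = binary_presence(demos, DEMO_VOCAB)
    return x_c + x_d + x_o          # concatenated input vector x

x = claim_sequence(["99213", "99213", "J1100"], ["I10"],
                   ["GENDER_F", "STATE_NY"])
print([round(v, 3) for v in x])
```

The concatenated vector `x` is what the first portion of the network would compress into the fixed-size embedding f.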
The layers of the neural network are discussed in detail below, with reference toFIGS.3-4. In other embodiments, other generalized linear models are trained by the model training module225to predict a payer's response to a claim, such as logistic regression models and support vector machine models. The interpretability module230identifies which aspects of the claim should be reviewed. The interpretability module230does this by computing a gradient magnitude of the prediction score for each feature of the input vector. The gradient magnitude of the prediction score (referred to as a "suspiciousness score") represents the contribution of an input feature to the denial prediction of a corresponding claim. In some embodiments, the interpretability module230calculates suspiciousness scores using a single back-propagation pass through the neural network. In other embodiments, the interpretability module230calculates suspiciousness scores by taking the gradients of the outputs with respect to the input and multiplying the gradient by the input feature values. Additionally, or alternatively, the interpretability module230may calculate suspiciousness scores by replacing each input feature with a reference value and computing the difference in the output. Input features may be grouped and ablated together. The interpretability module230may flag input features with suspiciousness scores above a threshold suspiciousness score such that users may review and modify claim data. Threshold suspiciousness scores may be determined by the claim analysis system125, a user, and the like. In some embodiments, the interpretability module230calculates suspiciousness scores when the denial prediction has a denial probability greater than a threshold probability (e.g., over 45%, 50%, 55%, 75%). In other embodiments, the interpretability module230calculates suspiciousness scores for all claims, when explicitly requested by a user of the claim analysis system125, and the like.
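The ablation variant of the suspiciousness score (replace a feature with a reference value and measure the change in output) might be sketched as follows. The linear-plus-sigmoid toy model stands in for the trained network; all names, weights, and the reference value of zero are assumptions for illustration.

```python
# Sketch of ablation-style suspiciousness: replace each input feature with
# a reference value and record how much the denial score moves.

import math

def denial_score(features, weights, bias=0.0):
    # Toy "model": a weighted sum squashed to (0, 1) by a sigmoid.
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def suspiciousness(features, weights, reference=0.0):
    base = denial_score(features, weights)
    scores = []
    for i in range(len(features)):
        ablated = list(features)
        ablated[i] = reference           # swap in the reference value
        scores.append(abs(base - denial_score(ablated, weights)))
    return scores

features = [1.0, 0.0, 2.0]
weights = [0.5, 3.0, -0.1]
scores = suspiciousness(features, weights)
# A feature already at the reference value cannot move the score.
print([round(s, 3) for s in scores])
```

The gradient-based variants in the passage compute the same kind of per-feature attribution in a single backward pass rather than one forward pass per feature.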
The claim analysis system125includes a user interface235that enables users to interact with the claim analysis system125. Through the user interface235, the user may request claims to be analyzed, view prediction results and suspiciousness scores, modify claim features and/or claim data, and the like. Users may also use the user interface235to aggregate and analyze data across multiple claims and/or across multiple health systems. This allows users to identify which claim features contribute to claim denials most frequently, which data fields are most susceptible to data entry errors, and the like. The user interface235may also include additional elements that allow the user to generate training data, select model parameters and/or training schema, and the like. FIG.3is a high-level block diagram illustrating a method300of analyzing a claim, according to one embodiment. In the method shown, the claim feature module220tokenizes claim data305to generate a claim sequence. A portion of the claim sequence is included in an input vector, x, which includes three sub-vectors, namely sub-vectors xC310, xD315, and xO320. As previously discussed, xC310includes a sequence of procedure tokens with numeric values, xD315includes a sequence of diagnoses tokens with numeric values, and xOincludes a sequence of single unique feature tokens with binary values. The sub-vectors xC310, xD315, and xO320are applied to a first portion of the neural network that includes a first set of neural network layers325. The first set of neural network layers generates an embedding, f330, from the input vector, x. The generation of the embedding f330is discussed in detail below with reference toFIG.4. The embedding f330is applied to a second portion of the neural network that includes a second set of neural network layers335to generate a prediction of whether the claim will be denied, y340, which is a vector defined by Equation (2). 
Accordingly, the second set of neural network layers335includes one or more task-specific output layers configured to generate a prediction for a corresponding element of y340. y=(y0,y1,y2,y3) (2) In Equation 2, y340includes four output elements. The first output element, y0, is a claim denial variable representing the likelihood the claim will be denied. For example, a claim denial variable with a value of 0.54 indicates there is a 54% chance the corresponding claim will be denied. The second and third output elements, y1and y2, are vectors of reason codes for claim-level reasons and service-level reasons, respectively. Each vector element represents a reason for the claim denial, and the value of each element indicates the contribution the reason code had on the claim denial prediction. The element values in y1and y2may be normalized counts in frequency. The fourth output element, y3, is a response date variable. In some embodiments, y3is a day interval between a remittance date and the corresponding claim submission date. Therefore, the prediction y340indicates how likely the claim is to be denied, under which possible denial reason codes, and in how many days a response is expected. The model training module225applies a multi-task learning approach to train the neural network. This approach helps ensure the neural network properly captures each claim by sharing the embedding while keeping task-specific output layers. To optimize the parameters of the neural network, the model training module225may minimize the loss, ℒ, according to Equation (3). In some embodiments, the loss is minimized using an ADAM optimizer. In other embodiments, other suitable optimizers are used.
ℒ=λ0ℒ0+λ1ℒsvc+λ2ℒclaim+λ3ℒdate (3)

In Equation (3), ℒ0is a binary cross-entropy loss for the denial probability prediction, ℒsvcis a categorical cross-entropy loss for the set of service-level denial reason code classifications, ℒclaimis a categorical cross-entropy loss for the set of claim-level denial reason code classifications, ℒdateis a distance for the first response days prediction for the response date estimation, and λ0, λ1, λ2, λ3are hyper-parameters. In some embodiments, the model training module225uses a sigmoid function for predicting the claim denial variable, y0, softmax functions for predicting denial reason codes, y1and y2, and a linear function for predicting the response date variable, y3. In these embodiments, Equation (3) may be rewritten according to Equations (4)-(7).

argmin(f,H,W)ℒBCE(y0,σ(W0f+b0)) (4)

+λℒCCE(y1,softmax(W1f+b1)) (5)

+λ1ℒCCE(y2,softmax(W2f+b2)) (6)

+λ2ℓ2(y3,W3f+b3) (7)

In Equations (4)-(7), ℒBCEis the binary cross-entropy loss, ℒCCEis the categorical cross-entropy loss, σ is a sigmoid function, W0, W1, W2, and W3are embedding matrices, and b0, b1, b2, and b3are bias terms. The constraints as defined in Equations (5)-(7) act as barrier functions to guide the convergence of the embedding, f. FIG.4is a high-level block diagram illustrating a method of generating an embedding, according to one embodiment. The embedding is a low-dimension representation of the claim that removes high redundancies in the input layer and reduces memory requirements. The machine learning model does this by compressing claim sequences with thousands of dimensions into a fixed-sized latent vector, f ("embedding"). The embedding, f, may include any suitable number of dimensions, such as 64 dimensions, 94 dimensions, or 200 dimensions. As shown inFIG.3, the first portion of the neural network generates the embedding for a claim. However, in some embodiments, the steps of generating an output vector from an input vector (i.e., a claim sequence or portion thereof) are not delineated as shown.
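A plain-Python illustration of the weighted multi-task loss of Equation (3): a binary cross-entropy for the denial flag, categorical cross-entropies for the two reason-code heads, and a squared error for the response date. The λ values and the example predictions are assumptions made for illustration.

```python
# Hypothetical computation of the Equation (3) loss for one claim.

import math

def bce(y_true, y_pred, eps=1e-12):
    # Binary cross-entropy for the denial probability head.
    return -(y_true * math.log(y_pred + eps)
             + (1 - y_true) * math.log(1 - y_pred + eps))

def cce(true_dist, pred_dist, eps=1e-12):
    # Categorical cross-entropy for a reason-code head.
    return -sum(t * math.log(p + eps) for t, p in zip(true_dist, pred_dist))

def multitask_loss(y_true, y_pred, lambdas=(1.0, 0.5, 0.5, 0.1)):
    l0, l1, l2, l3 = lambdas          # illustrative hyper-parameters
    return (l0 * bce(y_true["denied"], y_pred["denied"])
            + l1 * cce(y_true["svc_codes"], y_pred["svc_codes"])
            + l2 * cce(y_true["claim_codes"], y_pred["claim_codes"])
            + l3 * (y_true["days"] - y_pred["days"]) ** 2)

truth = {"denied": 1.0, "svc_codes": [1.0, 0.0],
         "claim_codes": [0.0, 1.0], "days": 14.0}
pred = {"denied": 0.9, "svc_codes": [0.8, 0.2],
        "claim_codes": [0.3, 0.7], "days": 12.0}
loss = multitask_loss(truth, pred)
print(round(loss, 4))
```

In training, the four terms are minimized jointly over the shared embedding, which is what couples the task-specific heads together.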
In these embodiments, the neural network may include fewer or additional portions that are collectively configured to generate an output vector from an input vector. Further, in some embodiments, the neural network generates the input vector from the claim data of a corresponding claim. In other embodiments, the neural network is applied to a previously-generated input vector. In the illustration400shown, claim data405corresponding to a claim is tokenized to generate an input vector x407, which includes three sub-vectors, xC410, xD415, and xO420. As previously discussed, the elements in each of the sub-vectors may include numeric or binary values based on the data they represent. The sub-vectors are applied to a set of sub-vector-specific layers. As shown, xC410is applied to xClayers1425, xD415is applied to xDlayers1430, and xO420is applied to xOlayers1435. In some embodiments, each set of sub-vector-specific layers includes one or more of a weighting function, a batch normalization function, and an activation function. The batch normalization and activation functions raise embedding expressivity over baseline embeddings. The layers in each set may include the same or similar configuration of layers, different configurations of layers, etc. The outputs of the sub-vector-specific layers are applied to multiplicative layers440. The multiplicative layers440increase the representational power of the embedding by capturing pairwise interactions between sub-vectors more effectively. In some embodiments, the multiplicative layers440include element-wise multiplication operations. The outputs of the multiplicative layers440are applied to additional layers of the neural network to further increase the representation of the embedding. As shown, the outputs of the multiplicative layers440are applied to a second set of sub-vector-specific layers, namely xClayers2445, xDlayers2450, and xOlayers2455. 
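The element-wise multiplicative interaction between projected sub-vectors, as performed by the multiplicative layers440, might be sketched as follows. The two-dimensional toy vectors and projection weights are invented for illustration; a real model would learn these weights and operate on much wider vectors.

```python
# Sketch of the multiplicative interaction step: project each sub-vector to
# a common width, then take element-wise products to capture pairwise
# interactions between sub-vectors (e.g., procedures x diagnoses).

def project(vec, weights):
    # A dense layer without bias: one output per weight row.
    return [sum(w * v for w, v in zip(row, vec)) for row in weights]

def elementwise_product(a, b):
    return [x * y for x, y in zip(a, b)]

xc = [0.2, 0.8]                  # toy procedure counts
xd = [1.0, 0.0]                  # toy diagnosis counts
wc = [[1.0, 0.5], [0.0, 1.0]]    # projection to a 2-wide hidden space
wd = [[0.5, 0.5], [1.0, 0.0]]

hc = project(xc, wc)
hd = project(xd, wd)
interaction = elementwise_product(hc, hd)   # pairwise xC-xD interaction
print(hc, hd, interaction)
```

A purely additive combination could not represent such products directly, which is why the element-wise multiplication raises the representational power of the embedding.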
In some embodiments, the second sets of sub-vector-specific layers include a weighting function and a batch normalization function. The outputs of the second set of sub-vector-specific layers are applied to one or more sets of sub-vector-agnostic layers. In the illustration shown, the outputs are applied to a first set of layers, x layers 1 460. In some embodiments, the x layers 1 460 include an addition function and an activation function. The output of x layers 1 460 is applied to a second set of layers, x layers 2 465, and/or a third set of layers, x layers 3 470. In some embodiments, the x layers 2 465 include one or more of a weighting function, a batch normalization function, and an activation function, and the x layers 3 470 include an addition function. The output of the x layers 2 465 is applied to the x layers 3 470, and the output of the x layers 3 470 is applied to a fourth set of layers, x layers 4 475. In some embodiments, the x layers 4 475 include an activation function. The process of applying model output to one or more of the x layers 2 465, x layers 3 470, and x layers 4 475 may be repeated 480 (e.g., 2 times, 3 times, 5 times, 10 times, etc.) to generate an enriched embedding f 485. FIG. 5 is a high-level block diagram illustrating an exemplary user interface 500 of the claim analysis system 125, according to one embodiment. The user interface 500 shown includes the prediction results of a claim 505. An area of the user interface 500 includes a suspiciousness graph 510. The suspiciousness graph 510 is generated by the interpretability module 230. The suspiciousness graph 510 represents suspiciousness scores on an ordinate axis 515 and an input vector on the abscissa axis 520. Each feature in the input vector is represented with a circle, e.g., circle 525, such that a user may visually determine the suspiciousness scores of claim features and identify which features have higher suspiciousness scores.
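The suspiciousness scores plotted in the graph 510 are described later in this disclosure as gradient-based. One way such a per-feature score could be computed is sketched below, using a stand-in linear-sigmoid model and central finite differences to approximate the gradient of the denial probability with respect to each input feature; the model, its weights, and the function names are hypothetical, not the disclosed implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def denial_probability(x, w, b):
    # stand-in scalar model: probability the claim is denied given features x
    return sigmoid(w @ x + b)

def suspiciousness_scores(x, w, b, eps=1e-5):
    """Gradient-based score per feature: how strongly each input feature
    moves the denial probability (central finite differences)."""
    scores = np.zeros_like(x)
    for i in range(len(x)):
        hi, lo = x.copy(), x.copy()
        hi[i] += eps
        lo[i] -= eps
        scores[i] = (denial_probability(hi, w, b)
                     - denial_probability(lo, w, b)) / (2 * eps)
    return np.abs(scores)
```

Features with larger scores move the predicted denial probability more, which is the visual pattern the graph 510 is meant to surface.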
Another area of the user interface 500 displays the features included in the input vector, the values of the features, and their corresponding suspiciousness scores. For example, the tenth feature in the input vector, x10 530, has a value 535 of 0.32 and a suspiciousness score 540 of 0.39. This indicates that the probability that the value of x10 530 contributes to the denial of the claim 505 is 39%. In some embodiments, the value of each feature shown is the value of the assigned token. In other embodiments, the value of each feature shown is the raw data value. A third area of the user interface 500 may display the response prediction 545 of the claim 505, as determined by the claim analysis system 125. The response prediction 545 includes the claim denial variable, y0 550, the claim-level reasons the claim will be denied, y1 555, the service-level reasons the claim 505 will be denied, y2 560, and a response date estimation 565. In the example shown, there is a 65% chance the claim 505 will be denied because of the reason codes delineated in y1 555 and y2 560, and a response is likely to arrive within 14 days of the claim submission date. Based on the response prediction 545 and the suspiciousness scores of individual features, the user may edit the claim 505. The user may do this by modifying the values of particular features using a user interface element 570 of the user interface 500. In some embodiments, the features the user may edit may be restricted. For example, the user may only be able to edit features that are likely to have data-entry errors, have suspiciousness scores above a threshold suspiciousness score, correspond to certain data fields, etc. The user may edit feature values by modifying the values of the assigned tokens and/or the values in the claim data. The claim analysis system updates the response prediction 545 of the claim 505 and the suspiciousness scores of the sequence features based on the modified values.
This allows the user to determine the impact modifications have on the denial probability of the claim, reasons for claim denial, and/or response date estimation. The user interface 500 shown includes an additional user interface element 575 that allows the user to select an additional claim to analyze. In some embodiments, the user interface 500 includes interface elements that allow the user to compare multiple claims across one or more health care systems to identify patterns in claim data. Users may then determine correlations between claim data and denial probabilities, correlations between claim data and claim denial reason codes, patterns in response date estimations, and the like. FIG. 6 is a flowchart illustrating an exemplary process 600 for analyzing a claim, according to one embodiment. In the process 600 shown, claim data associated with a claim is received 605. A set of claim features of the claim data is identified 610 to generate a claim sequence. An input vector is generated 615 with at least a portion of the set of claim features. The set may include demographic information, procedure information, and diagnosis information. The input vector is applied 620 to a trained neural network. A first portion of the neural network is configured to generate an embedding representing the input vector with a lower dimensionality than the input vector. A second portion of the neural network is configured to generate a prediction of whether the claim will be denied based on the embedding. The prediction may include a probability the claim will be denied. In some embodiments, the prediction further includes a first reason code sequence that includes likelihood scores for claim-level reason codes in a set of claim-level reason codes. In these embodiments, the neural network includes a first set of task-specific output layers configured to generate the first reason code sequence.
The prediction may further include a second reason code sequence that includes likelihood scores for service-level reason codes in a set of service-level reason codes. In these embodiments, the neural network includes a second set of task-specific output layers configured to generate the second reason code sequence. The prediction may further include a response date estimation that represents a day interval between a remittance date and the corresponding claim submission date. In these embodiments, the neural network includes a third set of task-specific output layers configured to generate the response date estimation. The prediction is provided for display 625 on a user interface 235 of a user device. In some embodiments, the prediction further includes a gradient-based score for each feature in the input vector and a probability the claim will be denied. Each gradient-based score indicates the extent to which the corresponding feature contributes to the probability of the claim being denied. In these embodiments, the gradient-based scores for a portion of the features in the input vector are provided for display on the user interface. The user interface may also include an interface element that allows the user to modify one or more values of the claim data. Responsive to determining that the user modifies the one or more values, an updated input vector is generated that includes the one or more modified values. The updated vector is inputted into the neural network to generate an updated prediction. The updated prediction is provided for display on the user interface 235. This allows the user to determine the impact of the modification on the prediction. FIG. 7 is a flowchart illustrating an exemplary process 700 for training the claim analysis system 125, according to one embodiment. In the process 700 shown, claim data associated with a set of claims is accessed 705. Each claim in the set of claims includes a label representing a payer response.
For example, the label may include a claim denial variable representing whether the claim was denied. The label may also include a first reason code sequence including claim-level reasons the claim was denied, a second reason code sequence including service-level reasons the claim was denied, and/or a response date representing a day interval between a remittance date of the claim and a submission date of the claim. For each claim in the set of claims, claim features of the claim data are identified 710 and an input vector with at least a portion of the claim features is generated 715. The weights of a neural network are initialized 720. The input vectors of the set of claims are applied 725 to the neural network to generate predictions of payer responses to the claims. The neural network may be configured to generate a prediction that further includes a gradient-based score for each feature of the corresponding input vector that indicates the extent to which the corresponding feature contributes to the prediction of the payer response. The weights of the neural network are updated 730 based on the predictions and corresponding labels for the set of claims. FIG. 8 is a block diagram illustrating an example of a computer suitable for use as the claim analysis system of FIG. 1, according to one embodiment. The example computer 800 includes a processor 802 coupled to a chipset 804. For convenience and readability, this disclosure refers to a processor 802 performing various functions, but all such references should be understood to also include multiple processors working together to perform such functions. The chipset 804 includes a memory controller hub 820 and an input/output (I/O) controller hub 822. A memory 806 and a graphics adapter 812 are coupled to the memory controller hub 820, and a display 818 is coupled to the graphics adapter 812. A storage device 808, keyboard 810, pointing device 814, and network adapter 816 are coupled to the I/O controller hub 822.
Other embodiments of the computer 800 have different architectures. In the embodiment shown in FIG. 8, the storage device 808 is a non-transitory computer-readable storage medium such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 806 holds instructions and data used by the processor 802. The pointing device 814 is a mouse, track ball, touch-screen, or other type of pointing device, and is used in combination with the keyboard 810 (which may be an on-screen keyboard) to input data into the computer system 800. The graphics adapter 812 displays images and other information on the display 818. The network adapter 816 couples the computer system 800 to one or more computer networks. The types of computers used can vary depending upon the embodiment and the processing power required by the entity. Furthermore, the computers can lack some of the components described above, such as keyboards 810, graphics adapters 812, and displays 818.

Additional Considerations

Some portions of the above description describe the embodiments in terms of algorithmic processes or operations. These algorithmic descriptions and representations are commonly used by those skilled in the computing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs comprising instructions for execution by a processor or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of functional operations as modules, without loss of generality. As used herein, any reference to "one embodiment" or "an embodiment" means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Similarly, use of "a" or "an" preceding an element or component is done merely for convenience. This description should be understood to mean that one or more of the element or component is present unless it is obvious that it is meant otherwise. Where values are described as "approximate" or "substantially" (or their derivatives), such values should be construed as accurate +/−10% unless another meaning is apparent from the context. For example, "approximately ten" should be understood to mean "in a range from nine to eleven." As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having," or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for providing the disclosed functionality. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the described subject matter is not limited to the precise construction and components disclosed. The scope of protection should be limited only by any claims that issue.
11861504

Throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.

DETAILED DESCRIPTION

The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known in the art may be omitted for increased clarity and conciseness. The features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application. The use of the term 'may' herein with respect to an example or embodiment, e.g., as to what an example or embodiment may include or implement, means that at least one example or embodiment exists where such a feature is included or implemented while all examples and embodiments are not limited thereto.
It should be understood that, when a part “comprises” or “includes” an element in the specification, unless otherwise defined, other elements are not excluded from the part and the part may further include other elements. Throughout the specification, when an element, such as a layer, region, or substrate, is described as being “on,” “connected to,” or “coupled to” another element, it may be directly “on,” “connected to,” or “coupled to” the other element, or there may be one or more other elements intervening therebetween. In contrast, when an element is described as being “directly on,” “directly connected to,” or “directly coupled to” another element, there can be no other elements intervening therebetween. Although terms such as “first,” “second,” and “third” may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Rather, these terms are only used to distinguish one member, component, region, layer, or section from another member, component, region, layer, or section. Thus, a first member, component, region, layer, or section referred to in examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples. The terminology used herein is for describing various examples only, and is not to be used to limit the disclosure. The articles “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “includes,” and “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof. 
Reference will now be made in detail to the following embodiments, examples of which are illustrated in the accompanying drawings. The embodiments may, however, be embodied in many different forms and should not be construed as being limited to the following description. FIG. 1 is a diagram illustrating an example of a relationship between an input feature map and an output feature map in a neural network 100. The neural network 100 may be trained to perform a desired operation by mapping input data and output data that have a nonlinear relationship therebetween through deep learning to perform tasks such as, for example, object classification, object recognition, audio or speech recognition, and image recognition. Deep learning is a machine learning method used to solve a problem given a big dataset. Deep learning may also be construed as a problem-solving process for optimization to find a point where energy is minimized while training the neural network using provided training data. Through deep learning, for example, supervised or unsupervised learning, a weight corresponding to an architecture or a model of the neural network may be obtained, and the input data and the output data may be mapped to each other based on the obtained weight. In an example, the neural network 100 may be implemented as an architecture having a plurality of layers including an input image, feature maps, and an output. In the neural network 100, a convolution operation between the input image and a filter, referred to as a kernel, is performed, and as a result of the convolution operation, the feature maps are output. Here, the feature maps that are output serve as input feature maps, and a convolution operation between these feature maps and a kernel is performed again, and as a result, new feature maps are output. Based on such repeatedly performed convolution operations, results of recognition of characteristics of the input image via the neural network may be output.
The term "recognition" is used as a concept including verification and identification. The verification is an operation of determining whether input data is true or false. The identification is an operation of determining a label indicated by input data from among a plurality of labels. For example, the neural network is a model that receives a sequence and performs operations such as, for example, translation, interpretation, and speech recognition. In another example, the neural network 100 may include an input source sentence (e.g., a voice entry) instead of an input image. In such an example, a convolution operation is performed on the input source sentence with a kernel, and as a result, the feature maps are output. The convolution operation is performed again on the output feature maps as input feature maps, with a kernel, and new feature maps are output. When the convolution operation is repeatedly performed as such, a recognition result with respect to features of the input source sentence may be finally output through the neural network. Input data for the neural network 100 may include image data, voice data, and text data. However, they are provided as examples only, and other types of data are considered to be well within the scope of the present disclosure. Referring to FIG. 1, a first feature map FM1 may correspond to an input feature map and a second feature map FM2 may correspond to an output feature map of the neural network 100. A feature map may denote a data set in which various features of input data are expressed. The first and second feature maps FM1 and FM2 may have elements of a two-dimensional matrix or elements of a three-dimensional matrix, and a pixel value may be defined in each of the elements. The first and second feature maps FM1 and FM2 may have a width W (or a column), a height H (or a row), and a depth D.
In an example, the depth D may correspond to the number of channels. In an example, a convolution operation with respect to the first feature map FM1 and a kernel may be performed, and as a result, the second feature map FM2 may be generated. In an example, a weight is defined in each element of the kernel, and the kernel filters features of the first feature map FM1 by performing a convolution operation with the first feature map FM1. In an example, the kernel performs a convolution operation with windows (also referred to as tiles) of the first feature map FM1 while shifting over the first feature map FM1 in a sliding window manner. During each shift, each pixel value included in the kernel may be multiplied with each of the pixel values of the overlapped window in the first feature map FM1, and the products may be added together. As the first feature map FM1 and the kernel are convolved, one channel of the second feature map FM2 may be generated. In FIG. 1, although one kernel is depicted, in practice, each of a plurality of kernels may be convolved with the first feature map FM1, and thus, the second feature map FM2 of a plurality of channels may be generated. Meanwhile, the second feature map FM2 may correspond to an input feature map of the next layer. For example, the second feature map FM2 may be an input feature map of a pooling (or subsampling) layer. In FIG. 1, for convenience of explanation, only a schematic architecture of a neural network is depicted. However, it should be understood that the neural network 100 may be realized with a larger or smaller number of layers, feature maps, and kernels, and the sizes thereof may also be variously modified. The neural network 100 includes a plurality of layers, each including a plurality of nodes. Also, the neural network includes connection weights that connect the plurality of nodes included in the plurality of layers to a node included in another layer. The neural network 100 may include, for example, an input layer, at least one hidden layer, and an output layer.
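The sliding-window convolution between the first feature map FM1 and a kernel described above can be sketched as follows (a single channel, unit stride, and no padding are simplifying assumptions of this sketch):

```python
import numpy as np

def conv2d(fm1, kernel):
    """Slide the kernel over FM1; at each shift, multiply the overlapped
    window element-wise by the kernel and sum the products, producing one
    pixel value of the output feature map FM2."""
    H, W = fm1.shape
    kH, kW = kernel.shape
    fm2 = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(fm2.shape[0]):
        for j in range(fm2.shape[1]):
            fm2[i, j] = np.sum(fm1[i:i + kH, j:j + kW] * kernel)
    return fm2
```

Repeating this with several kernels would produce the multiple channels of FM2 described in the text.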
The input layer receives an input for performing training or recognition and transfers the input to the hidden layer. The output layer generates an output of the neural network based on a signal received from the hidden layer. The hidden layer is interposed between the input layer and the output layer, and changes data transferred through the input layer to a value that is easy to predict. Input nodes included in the input layer and hidden nodes included in the hidden layer are connected through edges having connection weights. The hidden nodes included in the hidden layer and output nodes included in the output layer are connected through edges having connection weights. In an example, the neural network 100 may correspond to a recurrent neural network (RNN) or a convolutional neural network (CNN). In an example, the CNN may be a deep neural network (DNN). In an example, the DNN may include a region proposal network (RPN), a classification network, a reinforcement learning network, a fully-connected network (FCN), a deep convolutional network (DCN), a long short-term memory (LSTM) network, and gated recurrent units (GRUs). The DNN may include a plurality of layers. The plurality of layers may include an input layer, at least one hidden layer, and an output layer. In an example, the neural network may include a sub-sampling layer, a pooling layer, a fully connected layer, etc., in addition to a convolution layer. FIG. 2 is a diagram illustrating an example of an operation performed by an autoencoder 200. Referring to FIG. 2, in an example the autoencoder 200 includes an input layer, an encoder, a decoder, and an output layer. The encoder may also be referred to as a recognition network, and the decoder may also be referred to as a generative network. In the input layer of the autoencoder 200, high-dimensional data, such as, for example, an image stored in a database, is used as input data.
In an example, in the encoder of the autoencoder 200, encoding is performed in which the high-dimensional input data is converted to a latent variable Ẑ of a lower dimension. In an example, the latent variable Ẑ generally may be data of 2 to 50 dimensions. In an example, in the decoder, the latent variable Ẑ of a low dimension is decoded, and thus, reconstructed data (high-dimensional data) may be output in the output layer. For example, when an image of a human shape is used as input data, the latent variable may be information in which a shape of a subject, camera coordinates (view point), and a light source are nonlinearly mixed. In an example, when a numeric image is used as input data, the latent variable may be information in which an angle of a line and an aspect ratio are non-linearly mixed. A difference between the input data of the input layer and the reconstruction data of the output layer is referred to as a loss function. In other words, as the input data and the restored data coincide with each other, the value of a loss function of the autoencoder 200 is reduced. The autoencoder 200 may be taught to minimize the loss function. In an example, the autoencoder 200 may be taught to minimize the loss function by using a back-propagation technique, and a mean squared error (MSE) may be used as the loss function. FIG. 3 is a diagram illustrating an example of generating an input embedding. Referring to FIG. 3, input images 310 may be input to a converter 320. The converter 320 may convert each of the input images 310 into input embeddings 330, which are vector values. Each of the input embeddings 330 may be displayed in a vector space. In an example, the converter 320 may be a convolutional feature extractor. In this case, the converter 320 may convert the input images 310 into the input embeddings 330 by performing a convolution operation between the input images 310 and kernels.
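The reconstruction objective described above (an MSE loss that shrinks as the input data and the reconstructed data coincide) can be sketched with a linear encoder and decoder; the dimensions and random weights below are illustrative only, not the disclosed architecture:

```python
import numpy as np

def encode(x, We):
    # high-dimensional input -> low-dimensional latent variable
    return We @ x

def decode(z, Wd):
    # latent variable -> reconstruction in the input dimension
    return Wd @ z

def mse_loss(x, x_hat):
    # loss shrinks as the input and the reconstruction coincide
    return np.mean((x - x_hat) ** 2)

rng = np.random.default_rng(0)
in_dim, z_dim = 64, 8   # illustrative; the text puts latents at 2 to 50 dims
We = rng.normal(scale=0.1, size=(z_dim, in_dim))
Wd = rng.normal(scale=0.1, size=(in_dim, z_dim))
x = rng.normal(size=in_dim)
loss = mse_loss(x, decode(encode(x, We), Wd))
```

Training by back-propagation would adjust We and Wd to drive this loss down, as the text describes.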
The size of the input embeddings 330 may be determined based on the size of the input images 310 and the size of the kernel, and, for example, each of the input embeddings 330 may be an 8192-dimensional vector. In FIG. 3, it is described that the input images 310 are input to the converter 320, but various data sets besides the input images 310 may be input to the converter 320. As described below with reference to FIGS. 4A, 4B, and 5, the input embeddings 330 may be utilized as input data that are later used for learning and inference of an autoencoder. FIGS. 4A and 4B are diagrams illustrating examples of training and retraining an autoencoder 420. Referring to FIG. 4A, a process of training the autoencoder 420 is depicted. An input embedding 410 may be input to an encoder 421 of the autoencoder 420 as input data. As described with reference to FIG. 3, in an example, the input embedding 410 is a high-dimensional vector and may be converted from an image. In the encoder 421, encoding is performed where the high-dimensional input embedding 410 is converted into a latent variable 423 of a lower dimension. A reconstruction embedding 430 of a high dimension may be output by decoding the latent variable of a lower dimension in a decoder 422. In an example, the input embedding 410 and the reconstruction embedding 430 are vectors of the same dimension. A difference between the input embedding 410 and the reconstruction embedding 430 may be referred to as a first loss function. A neural network apparatus may train the autoencoder 420 to minimize the first loss function. In an operation of training the autoencoder 420, the input embeddings 410 with respect to a first class group may be input to the encoder 421. The first class group includes a plurality of classes, and each of the input embeddings 410 may belong to any one of the classes. The first loss function Lbase in an operation of training the autoencoder 420 may be expressed as Equation 1 below.
The neural network apparatus may train the autoencoder 420 so that the first loss function Lbase is minimized by applying a back-propagation technique to the autoencoder 420.

Lbase = λMSE LMSE + λcos Lcos + λL1 LL1 [Equation 1]

The neural network apparatus may train the autoencoder 420 so that the first loss function Lbase is minimized, and, for this purpose, LMSE, Lcos, and LL1, which are each of the terms of the first loss function Lbase, should be minimized. In Equation 1, LMSE of the first term indicates a difference between the input embedding 410 and the reconstruction embedding 430. LMSE may be calculated such that differences between the input embedding 410 and the reconstruction embedding 430 are squared, summed, and then averaged by using a mean squared error (MSE) technique. In Equation 1, Lcos of the second term relates to cosine similarity with respect to the paired latent variables 423. In an example, the neural network apparatus may convert a number of input embeddings 410 input to the encoder 421 into a number of latent variables 423. In an example, the neural network apparatus may pair each of the number of latent variables 423 with each other and calculate cosine similarity between the paired latent variables 423. Since each of the input embeddings 410 belongs to any one of a plurality of classes included in the first class group, the latent variables 423 converted from the input embeddings 410 may also belong to any one of the classes. When the paired latent variables 423 belong to the same class, the neural network apparatus may calculate a value related to the cosine similarity as '1-cosine similarity', and when the paired latent variables 423 belong to different classes, the neural network apparatus may calculate a value related to the cosine similarity as 'cosine similarity'. The neural network apparatus may finally calculate the Lcos after calculating values related to cosine similarity of the paired latent variables 423 and summing all the calculated values based on class equality.
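A sketch of the pairwise Lcos term described above, summing '1 - cosine similarity' over same-class pairs and 'cosine similarity' over different-class pairs; the latent variables used in testing are toy values:

```python
import numpy as np
from itertools import combinations

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def l_cos(latents, labels):
    """Sum over all latent-variable pairs: '1 - cosine similarity' when the
    pair shares a class, plain 'cosine similarity' otherwise."""
    total = 0.0
    for i, j in combinations(range(len(latents)), 2):
        s = cos_sim(latents[i], latents[j])
        total += (1.0 - s) if labels[i] == labels[j] else s
    return total
```

Minimizing this total pushes same-class latents toward cosine similarity 1 and different-class latents toward 0, which is the clustering effect the text attributes to Lcos.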
In other words, in order to minimize the first loss function Lbase, Lcos may be minimized. Since the cosine similarity is close to 1 when paired latent variables 423 belong to the same class, '1-cosine similarity' is used to bring the value closer to zero, and since the cosine similarity is close to 0 when paired latent variables 423 belong to different classes, the cosine similarity itself may be used to minimize the Lcos. On the other hand, when the number of paired latent variables 423 increases, the time required for calculation may increase. In an example, the neural network apparatus may calculate the cosine similarity with respect to only a number of the paired latent variables 423. In an example, if the number of latent variables 423 (corresponding to the size of a batch of input embeddings 410) is four, a total of 6 pairs (4C2) of paired latent variables 423 are generated, but the neural network apparatus may arbitrarily select only four of the pairs (the same number as the size of the batch) and may calculate the cosine similarity. Since Lcos is included in the first loss function Lbase, the latent variables 423 that belong to different classes may be clearly distinguished in a vector space. In another example, LL2 may be included in Equation 1 instead of Lcos. In this case, when the paired latent variables 423 belong to the same class, LL2 may be minimized by controlling the L2 distance to be smaller, and when the paired latent variables 423 belong to different classes, LL2 may be minimized by controlling the L2 distance to be larger. In this way, instead of using Lcos, the first loss function Lbase may be configured by changing the distance measurement according to whether the paired latent variables 423 belong to the same class or not.
As a result, latent variables 423 that belong to different classes are located in different quadrants of the vector space, and thus may be more clearly distinguished there. In other words, as the latent variables 423 that belong to different classes are located in different quadrants of the vector space, the cosine similarity between latent variables 423 that belong to different classes is reduced. In Equation 1, λ_MSE, λ_cos, and λ_L1 are constants that determine the importance of each term. Meanwhile, the first loss function L_base of Equation 1 is described as including terms related to L_MSE, L_cos, and L_L1. In another example, the first loss function L_base may include only the terms related to L_MSE and L_cos. In an operation of training the autoencoder 420, a contribution value of each of the parameters included in the autoencoder 420 may be calculated. The contribution value of each of the parameters may be stored in a memory 450. In order to minimize the first loss function L_base, the parameters of the autoencoder 420 may have an optimal value. In other words, when the autoencoder 420 is trained to minimize the first loss function L_base, the parameters of the autoencoder 420 have an optimal value. When the optimal value of a parameter is changed, the first loss function L_base increases, and the quantified degree to which each parameter contributes to this variation of the first loss function L_base is the contribution value of that parameter. For example, when a first parameter and a second parameter are changed to the same degree, if L_base increases more because of the change of the first parameter than because of the change of the second parameter, it may be stated that the contribution value of the first parameter is greater than that of the second parameter. In an operation of training the autoencoder 420, a representative value for each of at least one class included in the first class group may be calculated.
In detail, the neural network apparatus may convert input embeddings with respect to a specific class included in the first class group into latent variables and calculate a representative value representing the latent variables. For example, the neural network apparatus may calculate an average value of the latent variables as the representative value. Since the representative value is also a vector value and has the same dimension as the latent variables, the representative value may be displayed in a vector space of the same dimension as the latent variables. When a plurality of classes are included in the first class group, the neural network apparatus may calculate a representative value with respect to each of the plurality of classes and may display the calculated representative values in the vector space. When the training of the autoencoder 420 is completed, the neural network apparatus acquires a test latent variable by inputting a test embedding to the autoencoder 420, and may calculate the similarity (for example, cosine similarity) between the test latent variable and the calculated representative values in the vector space. The neural network apparatus may determine the representative value having the highest similarity with the test embedding and may classify the test embedding into the class corresponding to the determined representative value. The process of classifying the class of the test embedding is expressed as Equation 2 below. In Equation 2, y indicates the predicted class, cos(h(x), μ_i) indicates the cosine similarity between the two input vectors, h(x) indicates the latent variable for the test embedding, and μ_i indicates the representative value calculated with respect to the i-th class. y = argmax_{i ∈ {0, 1, …, N_c − 1}} cos(h(x), μ_i) [Equation 2] Meanwhile, the neural network apparatus may train the autoencoder 420 by using first input embeddings with respect to the first class group, and then retrain the autoencoder 420 by using second input embeddings with respect to the second class group.
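The prediction of Equation 2, choosing the class whose representative value is most similar to the test latent variable, can be sketched as follows. This is a hypothetical helper, not the patent's implementation.

```python
import numpy as np

def predict_class(h_x, reps):
    # reps: array of shape (num_classes, dim), one representative per class.
    h = h_x / np.linalg.norm(h_x)
    r = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sims = r @ h                 # cosine similarity to each representative
    return int(np.argmax(sims))  # class with the highest similarity
```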
Here, since at least one class included in the first class group and at least one class included in the second class group are different, class incremental learning is performed through the process of retraining the autoencoder 420. Referring to FIG. 4B, a process of retraining the autoencoder 420 is shown. Hereinafter, descriptions previously given with reference to FIG. 4A are omitted for convenience. The second loss function L_inc used in retraining the autoencoder 420 may be expressed as Equation 3 below. The neural network apparatus may train the autoencoder 420 to minimize the second loss function L_inc by applying a back-propagation technique to the autoencoder 420. L_inc = λ_MSE·L_MSE + λ_reg·L_reg + δ·λ_cos·L_cos + λ_L1·L_L1 [Equation 3] The neural network apparatus may train the autoencoder 420 to minimize the second loss function L_inc, and for this purpose, L_MSE, L_reg, L_cos, and L_L1, each being a term of the second loss function L_inc, should be minimized. The descriptions of L_MSE, L_cos, and L_L1 are omitted because they were given previously with reference to FIG. 4A. In Equation 3, L_reg, the second term, is related to regularization. In order to prevent catastrophic forgetting from occurring during the retraining process of the autoencoder 420, the term L_reg indicating regularization may further be included in the second loss function L_inc that is used in the retraining process. In an example, L_reg may be calculated through a synaptic intelligence (SI) method. When the SI method is used to calculate L_reg, the degree of contribution of the k-th parameter to the change of the loss function used in the previous training of the autoencoder 420 is expressed as a value in which the gradient and the change of the parameter are multiplied. The above description may be expressed as Equation 4 below. In Equation 4, grad_k(θ(t)) represents the gradient and dθ_k(t)/dt represents the rate of change of the parameter.
w_k^n = ∫_{t_{n−1}}^{t_n} grad_k(θ(t)) · (dθ_k(t)/dt) dt [Equation 4] The neural network apparatus continues to accumulate the degree of contribution w_k^n until the retraining of the autoencoder 420 ends, and the contribution value Ω_k^n of the k-th parameter may be calculated by regularizing the accumulated value by the total amount of change of the parameter. The above description may be expressed as Equation 5 below. Ω_k^n = Σ_{n′<n} w_k^{n′} / ((Δ_k^{n′})² + ξ) [Equation 5] L_reg may be defined as shown in Equation 6 by using the contribution values of each of the plurality of parameters included in the autoencoder 420, where θ̃_k denotes the value of the k-th parameter at the end of the previous training. L_reg = Σ_k Ω_k^n (θ̃_k − θ_k)² [Equation 6] In another embodiment, L_reg may be calculated through a memory aware synapses (MAS) method. The MAS method is similar to the SI method, but the method of calculating the contribution for each parameter is different. When the MAS method is used, the neural network apparatus may calculate the contribution value for each parameter by averaging the L2-norm value of the gradient over the number N of all data observed in the retraining process. The above description may be expressed as Equation 7 below. Ω_k^n = (1/N) Σ_{i=1}^{N} ‖grad_k(x_i)‖ [Equation 7] Even when the MAS method is used, L_reg may be defined as in Equation 6. Meanwhile, in Equation 3, λ_MSE, λ_reg, and λ_L1 are constants that determine the importance of each term. Also, δ has a value of 0 when the number of classes used for retraining the autoencoder is 1, and has a value of 1 when the number of classes used for retraining the autoencoder is 2 or more. The second loss function L_inc of Equation 3 is described as including terms related to L_MSE, L_cos, L_reg, and L_L1, but in another embodiment, the second loss function L_inc may include only the terms related to L_MSE and L_reg. In an operation of retraining the autoencoder 420, the contribution value of each of the parameters included in the autoencoder 420 may be updated. The contribution value of each of the updated parameters may be stored in the memory 450.
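As a rough sketch of the SI bookkeeping of Equations 4 and 5, under the usual discrete-step approximation of the path integral (the class name, step interface, and damping constant ξ below are illustrative assumptions, not taken from the source):

```python
import numpy as np

class SIContribution:
    def __init__(self, theta_init, xi=1e-3):
        self.theta_prev = np.array(theta_init, dtype=float)
        self.theta_start = self.theta_prev.copy()
        self.w = np.zeros_like(self.theta_prev)  # running path integral w_k
        self.xi = xi                             # damping constant

    def step(self, theta_new, grad):
        # Accumulate grad_k * dtheta_k for this parameter update step,
        # the discrete analogue of the integrand in Equation 4.
        theta_new = np.asarray(theta_new, dtype=float)
        self.w += np.asarray(grad) * (theta_new - self.theta_prev)
        self.theta_prev = theta_new.copy()

    def contribution(self):
        # Normalize by the squared total parameter change plus xi (Equation 5).
        delta = self.theta_prev - self.theta_start
        return self.w / (delta ** 2 + self.xi)
```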
In detail, in the training operation of the autoencoder 420, input embeddings with respect to at least one class included in the first class group are used, and in the retraining operation, input embeddings with respect to at least one class included in the second class group are used. Since at least some of the classes included in the first class group and the second class group are different from each other, the contribution value of each of the parameters calculated in the operation of training the autoencoder 420 and the contribution value of each of the parameters calculated in the operation of retraining the autoencoder 420 may be different. Accordingly, in the operation of retraining the autoencoder 420, the neural network apparatus may update the contribution value of each of the parameters included in the autoencoder 420. The second loss function L_inc used in retraining the autoencoder 420 includes the term L_reg related to regularization in order to prevent catastrophic forgetting from occurring during the retraining process. That is, by retraining the autoencoder 420 based on the second loss function L_inc including the regularization term L_reg, the neural network apparatus may prevent catastrophic forgetting with respect to the contribution value of each of the parameters calculated in the training operation of the autoencoder 420. When the retraining of the autoencoder 420 is completed, the neural network apparatus acquires a test latent variable by inputting a test embedding to the autoencoder 420 and may calculate the similarity (for example, cosine similarity) between the test latent variable and the calculated representative values in the vector space. The neural network apparatus may determine the representative value having the highest similarity with the test embedding and may classify the test embedding into the class corresponding to the determined representative value.
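The regularization term of Equation 6, which penalizes moving high-contribution parameters away from their values after the previous training phase, might be sketched as follows (theta_old and the weights omega are assumed to be stored from the earlier phase; the function name is illustrative):

```python
import numpy as np

def l_reg(theta, theta_old, omega):
    # Parameters with a large contribution value omega_k are pulled back
    # toward theta_old, limiting catastrophic forgetting during retraining.
    theta, theta_old, omega = map(np.asarray, (theta, theta_old, omega))
    return float(np.sum(omega * (theta_old - theta) ** 2))
```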
The process of classifying the class of the test embedding is expressed as Equation 2. In an example, class incremental learning is performed by combining an autoencoder and a regularization technique; thus, the operation speed may be increased because the amount of computation is reduced, and the memory capacity required for data storage may also be reduced. In an example, it is unnecessary to store all of the input embeddings with respect to the first class group, and it is unnecessary to separately generate similar embeddings (for example, pseudo-samples), so the memory capacity may be reduced. Also, since no additional computation is needed for operations of sorting and writing input embeddings, the amount of calculation may be reduced and, accordingly, the calculation speed may be increased. FIG. 5 is a diagram illustrating an example of a process of calculating a representative value of latent variables. Referring to FIG. 5, a neural network apparatus may convert input embeddings 510 with respect to a specific class into latent variables 530 by using an autoencoder 520. Also, the neural network apparatus may calculate an average value of the latent variables 530 as a representative value. The calculated representative values may be stored in a memory 540. For example, in the process of training and retraining the autoencoder 520, the neural network apparatus may convert the input embeddings with respect to a first class into latent variables and may calculate a first representative value representing those latent variables. In a similar manner, the neural network apparatus may convert the input embeddings with respect to a second class through a fourth class into latent variables and may calculate second through fourth representative values respectively representing those latent variables.
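Computing one representative value per class as the mean of that class's latent variables, as described for FIG. 5, can be sketched as follows (the helper name is illustrative):

```python
import numpy as np

def class_representatives(latents, labels):
    # Return a dict mapping each class label to the mean of its latents.
    latents = np.asarray(latents, dtype=float)
    reps = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        reps[c] = latents[mask].mean(axis=0)
    return reps
```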
Since the first through fourth representative values are also vector values and have the same dimension as the latent variables 530, the first through fourth representative values may be displayed in a vector space of the same dimension as the latent variables 530. When the training and retraining of the autoencoder 520 are completed, the neural network apparatus acquires a test latent variable by inputting a test embedding to the autoencoder 520 and may calculate the similarity (for example, cosine similarity) between the test latent variable and the calculated representative values in the vector space. The neural network apparatus may determine the representative value having the highest similarity with the test embedding and classify the test embedding into the class corresponding to the determined representative value. In the example described above, when the representative value having the highest similarity with the test embedding is determined to be the first representative value, the neural network apparatus may classify the test embedding into the first class. In an embodiment, the neural network apparatus may consider latent variables whose difference from the representative value of their class exceeds a threshold value to be outliers and may remove those latent variables. For example, in order to remove the latent variables considered to be outliers, a local outlier factor technique may be used, but the present embodiment is not limited thereto. In other words, the neural network apparatus may select the latent variables whose difference from the representative value of their class is less than or equal to the threshold value. Such latent variables are those located close to the representative value in the vector space.
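A minimal sketch of the threshold-based outlier filtering described above; note that this uses a simple distance-to-mean rule for illustration rather than the local outlier factor technique mentioned in the text, and the names are hypothetical:

```python
import numpy as np

def filter_outliers(latents, threshold):
    # Keep only latents within `threshold` of the class representative (mean).
    latents = np.asarray(latents, dtype=float)
    rep = latents.mean(axis=0)                      # representative value
    dists = np.linalg.norm(latents - rep, axis=1)   # distance to the mean
    return latents[dists <= threshold], rep
```

After filtering, the representative value would typically be re-computed from the kept latents, as in Equation 9's μ_new.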
The neural network apparatus may increase learning performance by training the autoencoder 520 based on the input embeddings corresponding to the selected latent variables. For example, if a specific latent variable 550 among the latent variables for the fourth class is considered an outlier, the neural network apparatus may remove the specific latent variable 550 and may train the autoencoder 520 based on the input embeddings corresponding to the remaining latent variables. The process of removing the latent variables considered to be outliers may be performed in both the training and retraining operations of the autoencoder 520. In another example, the process of removing the latent variables considered to be outliers may be performed only in the training operation of the autoencoder 520. In an example, when the autoencoder 520 is trained based on the input embeddings corresponding to the selected latent variables, the neural network apparatus may use a third loss function L_add. The third loss function L_add may be expressed as Equation 8 below. L_add = λ_center·L_center + λ_cos·L_cos [Equation 8] Since L_cos of Equation 8 has been described above with reference to FIG. 4A, its description is omitted. L_center of Equation 8 is a term that makes the latent variable of each class approach the representative value of that class, and may be expressed as Equation 9 below. In Equation 9, h(x) represents a latent variable and μ_new,i represents the newly calculated representative value for the i-th class after the outliers are removed. L_center = Σ_i ‖h(x) − μ_new,i‖² [Equation 9] FIG. 6 is a diagram illustrating an example of a method of classifying a class of test embeddings. The neural network apparatus may convert input embeddings for a specific class into latent variables using an autoencoder. Also, the neural network apparatus may calculate an average value of the latent variables as a representative value.
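The L_center term of Equation 9 can be sketched as follows, where `reps` maps each class index to its re-computed representative value μ_new,i; the names are illustrative:

```python
import numpy as np

def l_center(latents, labels, reps):
    # Sum of squared distances between each latent variable and the
    # representative value of its own class (Equation 9).
    total = 0.0
    for z, c in zip(latents, labels):
        total += float(np.sum((np.asarray(z) - reps[c]) ** 2))
    return total
```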
Since the representative value is also a vector value and has the same dimension as the latent variable, the representative value may be displayed in a vector space of the same dimension as the latent variable. Referring to FIG. 6, latent variables and representative values of the latent variables for a plurality of classes may be displayed in a two-dimensional vector space. In FIG. 6, a two-dimensional vector space is assumed for convenience of explanation, but the latent variables and their representative values may equally be displayed in a higher-dimensional vector space. For example, latent variables for a first class may be displayed in a first region 610. Also, latent variables for a second class may be displayed in a second region 620. The neural network apparatus may calculate an average value of the latent variables displayed in the first region 610 and may determine the average value to be a first representative value 611 for the first class. Also, the neural network apparatus may calculate an average value of the latent variables displayed in the second region 620 and may determine the average value to be a second representative value 621 for the second class. In an example, the neural network apparatus may remove an outlier 622, which is not displayed in the second region 620, from among the latent variables for the second class. The neural network apparatus may calculate the second representative value 621 using the latent variables displayed in the second region 620 after removing the outlier 622. After the training and retraining of the autoencoder are completed, the neural network apparatus may obtain a test latent variable 630 by inputting a test embedding to the autoencoder and may display the test latent variable 630 in the vector space. The neural network apparatus may calculate the cosine similarity between the test latent variable 630 and each of the first representative value 611 and the second representative value 621.
The neural network apparatus may determine that the test latent variable 630 and the first representative value 611 have a high similarity and classify the test embedding corresponding to the test latent variable 630 into the first class. FIG. 7 is a diagram illustrating an example of a hardware configuration of a neural network apparatus 700. The neural network apparatus 700 may be implemented by various types of devices, such as, for example, a smartphone, a mobile phone, a personal computer (PC), a server, a mobile device, an embedded device, a wearable smart device (such as a ring, a watch, a pair of glasses, a glasses-type device, a bracelet, an ankle bracelet, a belt, a necklace, an earring, a headband, a helmet, a device embedded in clothes, or an eye glass display (EGD)), a computing device (for example, a server, a laptop, a notebook, a subnotebook, a netbook, an ultra-mobile PC (UMPC), a tablet personal computer (tablet), a phablet, a mobile internet device (MID), a personal digital assistant (PDA), an enterprise digital assistant (EDA), or a portable lap-top PC), an electronic product (for example, a robot, a digital camera, a digital video camera, a portable game console, an MP3 player, a portable/personal multimedia player (PMP), a handheld e-book, a global positioning system (GPS) navigation, a personal navigation device or portable navigation device (PND), a handheld game console, an e-book, a television (TV), a high definition television (HDTV), a smart TV, a smart appliance, a smart home device, or a security device for gate control), a walking assistance device, a smart speaker, an Augmented Reality (AR) device, a medical device, various Internet of Things (IoT) devices, a smart car, an autonomous vehicle, an automatic or autonomous driving system, an intelligent vehicle, an advanced driver assistance system (ADAS), a head-up display (HUD), or an augmented reality head-up display (AR HUD), and may be performed by an
application, middleware, or an operating system installed on a user device, or by a program of a server interoperating with the corresponding application. In another example, the neural network apparatus 700 may correspond to a smartphone that performs functions such as, for example, voice recognition, image recognition, and image classification. Furthermore, the neural network apparatus 700 may correspond to a dedicated hardware accelerator (HW accelerator) mounted on the devices described above, such as, for example, a neural processing unit (NPU), a tensor processing unit (TPU), or a neural engine, which are dedicated modules for driving a neural network. Referring to FIG. 7, the neural network apparatus 700 includes a processor 710, a memory 720, and an input/output interface (not shown). In FIG. 7, although only constituent elements related to the neural network apparatus 700 are illustrated, other general constituent elements may be included without departing from the spirit and scope of the illustrative examples described. The processor 710 controls the overall functions for executing the neural network apparatus 700. For example, the processor 710 generally controls the neural network apparatus 700 by executing programs stored in the memory 720 in the neural network apparatus 700. The processor 710 may be a data processing device implemented by hardware including a circuit having a physical structure to perform desired operations. For example, the desired operations include instructions or codes included in a program.
For example, the hardware-implemented data processing device includes a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), an application processor (AP), a multi-core processor, a reconfigurable processor, a multiprocessor, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or any other type of multi- or single-processor configuration. Further details regarding the processor 710 are provided below. The memory 720 is hardware for storing various data processed in the neural network apparatus 700; for example, the memory 720 may store data processed and data to be processed in the neural network apparatus 700. Also, the memory 720 may store applications, drivers, etc. to be driven by the neural network apparatus 700. The memory 720 may include random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), CD-ROM, Blu-ray or other optical disk storage, a hard disk drive (HDD), a solid state drive (SSD), or flash memory. The memory 720 may include a large capacity storage medium such as a hard disk to store a variety of data. The memory 720 stores at least a portion of the information processed at a terminal of a user, or stores a program implementing an operating method of the neural network apparatus 700. The memory 720 is a volatile memory or a non-volatile memory. Further details regarding the memory 720 are provided below. The processor 710 executes the program and controls the neural network apparatus 700. Program codes executed by the processor 710 are stored in the memory 720.
The neural network apparatus 700 is connected to an external device (for example, a personal computer or a network) through the input/output interface (not shown), and exchanges data therewith. In an example, the neural network apparatus 700 interacts with the user through the input/output interface (not shown). In an operation of training an autoencoder, the memory 720 may store the contribution values of each of the parameters included in the autoencoder. Also, the memory 720 may store a representative value for each of at least one class used in the training operation. In an operation of retraining the autoencoder, the memory 720 may update the previously stored contribution value of each of the parameters. Also, the memory 720 may store a representative value for each of the at least one class used in the retraining operation. The processor 710 reads/writes neural network data, for example, an input data set, parameter data, a contribution value for each parameter, a representative value for each class, etc., from/to the memory 720, and executes the neural network by using the read/written data. When the neural network is executed, the processor 710 may repeatedly perform a convolution operation. In an example, the input/output interface (not shown) may be a display that receives an input from a user or provides an output. In an example, the input/output interface (not shown) may function as an input device and receive an input from a user through a traditional input method, for example, a keyboard and a mouse, or a new input method, for example, a touch input, a voice input, and an image input. In an example, the input/output interface (not shown) may function as an output device and provide an output of the neural network apparatus 700 to a user through a visual, auditory, or tactile channel. The input/output interface (not shown) may include, for example, a display, a touchscreen, a speaker, a vibration generator, and other devices that may provide an output to a user.
However, the input/output interface (not shown) is not limited to the example described above, and any other display, such as, for example, a computer monitor or an eye glass display (EGD), that is operatively connected to the neural network apparatus 700 may be used without departing from the spirit and scope of the illustrative examples described. FIG. 8 is a diagram illustrating an example of a method of performing class incremental learning in a neural network apparatus. The operations in FIG. 8 may be performed in the sequence and manner as shown, although the order of some operations may be changed or some of the operations omitted without departing from the spirit and scope of the illustrative examples described. Many of the operations shown in FIG. 8 may be performed in parallel or concurrently. One or more blocks of FIG. 8, and combinations of the blocks, can be implemented by a special purpose hardware-based computer, such as a processor, that performs the specified functions, or by combinations of special purpose hardware and computer instructions. In addition to the description of FIG. 8 below, the descriptions of FIGS. 1-7 are also applicable to FIG. 8 and are incorporated herein by reference. Thus, the above description may not be repeated here. Referring to FIG. 8, in operation 810, a neural network apparatus may train an autoencoder by using first input embeddings with respect to a first class group. In an example, the neural network apparatus may convert an input data set into input embeddings that are vector values. For example, the neural network apparatus may convert an input data set into input embeddings using a convolutional feature extractor. The autoencoder includes an encoder and a decoder. The neural network apparatus may convert the first input embeddings into low-dimensional latent variables by using the encoder. Also, the neural network apparatus may generate first reconstruction embeddings from the latent variables by using the decoder.
The neural network apparatus may train the autoencoder by minimizing a first loss function with respect to the autoencoder such that the first reconstruction embeddings coincide with the first input embeddings. In an example, the first loss function may be expressed as Equation 1 above. In detail, the first loss function may include a term L_MSE indicating a difference between an input embedding and a reconstruction embedding, a term related to the cosine similarity of paired latent variables, and a term related to the L1-norm. In operation 820, the neural network apparatus may calculate a contribution value of each of the parameters of the autoencoder and calculate a representative value for each class included in the first class group in the process of training the autoencoder. In order to minimize the first loss function, the parameters of the autoencoder may have an optimal value. When the optimal value of a parameter is changed, the first loss function increases, and the numerical degree to which each parameter contributes to this change of the first loss function is the contribution value of that parameter. Also, the neural network apparatus may convert input embeddings for a specific class into latent variables and may calculate a representative value representing those latent variables. Since the representative value is also a vector value and has the same dimension as the latent variable, the representative value may be displayed in a vector space of the same dimension as the latent variable. In operation 830, the neural network apparatus may retrain the autoencoder by using second input embeddings with respect to the second class group. The neural network apparatus may convert the second input embeddings into low-dimensional latent variables by using the encoder. Also, the neural network apparatus may generate second reconstruction embeddings from the latent variables by using the decoder.
The neural network apparatus may train the autoencoder by minimizing a second loss function for the autoencoder such that the second reconstruction embeddings coincide with the second input embeddings. Compared with the first loss function of operation 810, the second loss function may further include a term related to regularization based on an updated contribution value of each of the parameters. In order to prevent catastrophic forgetting from occurring in the retraining process of the autoencoder, the term L_reg indicating regularization may further be included in the second loss function used in the retraining process. In an example, L_reg may be calculated through a synaptic intelligence (SI) method or a memory aware synapses (MAS) method, but is not limited thereto. In operation 840, the neural network apparatus may update the contribution value of each parameter in the course of retraining the autoencoder and may calculate a representative value with respect to each of at least one class included in the second class group. In detail, the at least one class used in the training operation of the autoencoder and the at least one class used in the retraining operation of the autoencoder may be different from each other. Accordingly, the contribution value of each of the parameters calculated in the training of the autoencoder and the contribution value of each of the parameters calculated in the retraining may be different. In the operation of retraining the autoencoder, the neural network apparatus may update the contribution value when the contribution value of any of the parameters included in the autoencoder has changed. Also, the neural network apparatus may convert input embeddings with respect to a particular class into latent variables and may calculate a representative value representing those latent variables.
Since the representative value is also a vector value and has the same dimension as the latent variables, the representative value may be displayed in a vector space of the same dimension as the latent variables. When the retraining of the autoencoder is completed, the neural network apparatus acquires a test latent variable by inputting a test embedding to the autoencoder and may calculate the similarity (for example, cosine similarity) between the test latent variable and the calculated representative values in the vector space. The neural network apparatus may determine the representative value having the highest similarity with the test embedding and may classify the test embedding into the class corresponding to the determined representative value. The process of classifying the class of the test embedding is expressed as Equation 2 above. The methods described above may be implemented as a computer-readable program and may be realized in general computers that execute the program by using computer-readable recording media. Also, the structure of the data used in the methods described above may be recorded on a computer-readable recording medium through various means. The computer-readable medium may include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs or DVDs), and transmission media such as Internet transmission media. Since class incremental learning is performed by combining an autoencoder and a regularization technique, the amount of computation is reduced, and, as a result, the computation speed is increased and the amount of memory required for storing data is reduced. The autoencoder, encoder, decoder, converter 320, autoencoder 420, encoder 421, decoder 422, autoencoder 520, and other apparatuses, units, modules, devices, and components described herein are implemented by hardware components.
Examples of hardware components that may be used to perform the operations described in this application where appropriate include controllers, sensors, generators, drivers, memories, comparators, arithmetic logic units, adders, subtractors, multipliers, dividers, integrators, and any other electronic components configured to perform the operations described in this application. In other examples, one or more of the hardware components that perform the operations described in this application are implemented by computing hardware, for example, by one or more processors or computers. A processor or computer may be implemented by one or more processing elements, such as an array of logic gates, a controller and an arithmetic logic unit, a digital signal processor, a microcomputer, a programmable logic controller, a field-programmable gate array, a programmable logic array, a microprocessor, or any other device or combination of devices that is configured to respond to and execute instructions in a defined manner to achieve a desired result. In one example, a processor or computer includes, or is connected to, one or more memories storing instructions or software that are executed by the processor or computer. Hardware components implemented by a processor or computer may execute instructions or software, such as an operating system (OS) and one or more software applications that run on the OS, to perform the operations described in this application. The hardware components may also access, manipulate, process, create, and store data in response to execution of the instructions or software. For simplicity, the singular term “processor” or “computer” may be used in the description of the examples described in this application, but in other examples multiple processors or computers may be used, or a processor or computer may include multiple processing elements, or multiple types of processing elements, or both. 
For example, a single hardware component or two or more hardware components may be implemented by a single processor, or two or more processors, or a processor and a controller. One or more hardware components may be implemented by one or more processors, or a processor and a controller, and one or more other hardware components may be implemented by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may implement a single hardware component, or two or more hardware components. A hardware component may have any one or more of different processing configurations, examples of which include a single processor, independent processors, parallel processors, single-instruction single-data (SISD) multiprocessing, single-instruction multiple-data (SIMD) multiprocessing, multiple-instruction single-data (MISD) multiprocessing, and multiple-instruction multiple-data (MIMD) multiprocessing. The methods that perform the operations described in this application are performed by computing hardware, for example, by one or more processors or computers, implemented as described above executing instructions or software to perform the operations described in this application that are performed by the methods. For example, a single operation or two or more operations may be performed by a single processor, or two or more processors, or a processor and a controller. One or more operations may be performed by one or more processors, or a processor and a controller, and one or more other operations may be performed by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may perform a single operation, or two or more operations. 
Instructions or software to control a processor or computer to implement the hardware components and perform the methods as described above are written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the processor or computer to operate as a machine or special-purpose computer to perform the operations performed by the hardware components and the methods as described above. In an example, the instructions or software includes at least one of an applet, a dynamic link library (DLL), middleware, firmware, a device driver, or an application program storing the method of performing class incremental learning in a neural network apparatus. In one example, the instructions or software include machine code that is directly executed by the processor or computer, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the processor or computer using an interpreter. Programmers of ordinary skill in the art can readily write the instructions or software based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions in the specification, which disclose algorithms for performing the operations performed by the hardware components and the methods as described above. The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media.
Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, Blu-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), card type memory such as multimedia card, secure digital (SD) card, or extreme digital (XD) card, magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to a processor or computer so that the processor or computer can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers. While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation.
Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure. | 59,941 |
11861505 | DETAILED DESCRIPTION In order to make the purposes, technical solutions, and advantages of the present invention clearer, the disclosure will be further illustrated in detail below in connection with the drawings and embodiments. However, it should be understood that the specific embodiments described herein are merely used for explaining the disclosure and are not intended to limit its scope. In addition, in the following illustration, descriptions of well-known structures and techniques are omitted so as to avoid unnecessary confusion of the concepts of the present invention. An embodiment of the disclosure designs an execution engine. The execution engine mainly involves the program running phase in the working process of an operating system; because it operates during the running process, the execution engine in the embodiment is named a virtual machine. As shown inFIG.1,FIG.1is an architecture diagram of dynamic graph execution for neural network computation. An interpreter takes charge of the interaction between the user mode and the instruction mode, and is configured to parse a user code into an instruction. A virtual machine takes charge of scheduling instructions, and is configured to utilize the instructions to construct a dynamic directed acyclic graph consisting of instruction nodes. The process of scheduling an instruction by the virtual machine is actually a process of adding instruction nodes to the dynamic directed acyclic graph, issuing the instruction nodes to a device stream to execute computation of a kernel function of an operator, and deleting the instruction nodes which have completed computation.
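The dynamic directed acyclic graph of instruction nodes described above can be sketched minimally as follows. All class and method names are hypothetical; the sketch only shows the three operations named in the text: adding nodes, finding nodes ready to issue, and deleting completed nodes.

```python
# Minimal sketch of the dynamic DAG the virtual machine maintains:
# nodes are instructions; scheduling adds nodes, issues ready ones to a
# device stream, and removes the ones that have completed computation.
class InstructionNode:
    def __init__(self, name):
        self.name = name
        self.in_edges = set()    # instructions this one depends on
        self.done = False

class DynamicDAG:
    def __init__(self):
        self.nodes = []

    def add(self, node, deps=()):
        node.in_edges = set(deps)
        self.nodes.append(node)

    def ready(self):
        # an instruction can be issued once every predecessor has completed
        return [n for n in self.nodes if not n.done
                and all(d.done for d in n.in_edges)]

    def release_completed(self):
        # deleting the instruction nodes which have completed computation
        self.nodes = [n for n in self.nodes if not n.done]
```

In this picture, the interpreter's job is to create `InstructionNode` objects from user code, and the virtual machine's job is to call `ready` and `release_completed` in its scheduling loop.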
The embodiment provides a method of executing dynamic graph for neural network computation, including the following steps:
S1: constructing and distributing an operator and a tensor;
S2: deducing an operator executing process by an operator interpreter;
S3: constructing an instruction of a virtual machine at runtime by the operator interpreter;
S4: sending the instruction to the virtual machine at runtime by the operator interpreter;
S5: scheduling the instruction by the virtual machine; and
S6: releasing an executed instruction by the virtual machine.
The step S1 includes the following specific sub-steps:
S11: creating an input tensor on a specified hardware device;
S12: constructing metainformation of an operator, where the metainformation includes a name of the operator, names of the input tensor and an output tensor, and device information; and
S13: distributing the metainformation of the operator and the input tensor to the operator interpreter.
The step S2 includes the following specific sub-steps:
S21: receiving a distributed operator object, the input and output tensors, and a context object by the operator interpreter, traversing a list of the input tensor, and adding a data pointer of a current input tensor into a pointer list of the data of the input tensor;
S22: deducing the metainformation of the operator, the metainformation including the device information and a type and a shape of the output tensor, where the step S22 includes the following specific sub-steps:
S221: declaring a pointer object of tensor data and an object which stores metainformation of the output tensor;
S222: traversing a list of the output tensor, applying for a tensor type pointer, initializing the tensor type pointer to be a tensor object, and updating a value of the object which stores the metainformation of the output tensor in the step S221 at a corresponding position to be metainformation of the tensor object; and
S223: deducing the device information, the shape, and the type of the operator, which includes the following specific sub-steps:
(i) deducing the device information of the operator: calling a device deducing function defined when the operator is registered, so as to acquire the deduced device information;
(ii) deducing shape and type information of the operator: calling shape deducing and type deducing functions defined when the operator is registered, so as to deduce the shape and the type of the operator at present; and
(iii) traversing a data pointer of the output tensor and updating the data pointer of the output tensor based on the deduced metainformation of the output tensor; and
S23: constructing a kernel function of an operator for execution: constructing a kernel function of an operator for execution according to the operator object and the deduced device information.
In the step S3, the runtime of a deep learning operating system is abstracted to be the virtual machine. The minimum unit executed by the virtual machine is an instruction, and each type of instruction is bound with two attributes: one is a distributive description attribute, which represents the devices on which a current instruction is executed; the other is a type of a device stream, which represents the type of hardware device on which the current instruction is executed; if the hardware device is a GPU, the type of the device stream is CudaStream. The device stream is an abstraction of the hardware device in the virtual machine, and each type of device stream corresponds to one type of hardware device.
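The metainformation deduction of step S22 can be illustrated as follows. This is a sketch under stated assumptions: the registry layout, the operator name "relu", and the function names are all hypothetical; the only point carried over from the text is that device, shape, and type deducing functions are registered with the operator and called by the interpreter before any kernel runs.

```python
# Hypothetical sketch of S223: each registered operator carries deduction
# functions; the interpreter calls them to fill in the output tensor's
# device, shape, and dtype from the input tensor's metainformation.
REGISTRY = {
    "relu": {
        "infer_device": lambda in_meta: in_meta["device"],  # same device as input
        "infer_shape":  lambda in_meta: in_meta["shape"],   # elementwise op: same shape
        "infer_dtype":  lambda in_meta: in_meta["dtype"],   # same dtype as input
    },
}

def deduce_output_meta(op_name, in_meta):
    fns = REGISTRY[op_name]
    return {
        "device": fns["infer_device"](in_meta),
        "shape":  fns["infer_shape"](in_meta),
        "dtype":  fns["infer_dtype"](in_meta),
    }
```

The deduced metainformation is then used both to update the output tensor's data pointer (S223(iii)) and to specialize the kernel function for the deduced device (S23).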
The process of constructing the instruction of the virtual machine at runtime by the operator interpreter includes the following steps:
S31: constructing an instruction for reading and modifying a value of a tensor data pointer;
S32: constructing an instruction for running an operator;
S33: constructing an instruction for releasing a tensor memory of which the life cycle is ended; and
S34: constructing a dependency relationship between the instructions, which includes the following specific sub-steps:
S341: defining operand types of the instructions, and constructing the dependency relationship between the instructions by the operand types of the instructions. The main operand types include const, mut, and mut2: const is a constant type, mut is a compile-time variable type, and mut2 is a runtime variable type; const corresponds to an input and represents a data reading operation, while mut and mut2 correspond to an output and represent a data writing operation. If a user issues two instructions a and b, where the instruction a requests to modify a value of an operand c and the instruction b requests to read the value of the operand c, then the instruction a has to be executed prior to the instruction b; the instruction a has an operand c with a mut type, and the instruction b has an operand c with a const type. By checking the type of the operands in the instructions a and b, the dependency relationship can be established between the instructions a and b: deduction of the instruction b needs to be carried out after deduction of the instruction a, and computation of the instruction b needs to be carried out after computation of the instruction a. The operand type mut2 is used to process some operators of which the shapes of the output tensors need to be determined at runtime; if the instruction a possesses the operand c in a form of the operand type mut2, both deduction and computation of the instruction b need to be carried out after computation of the instruction a;
S342: consuming a readable data pointer object: traversing const operands in a kernel function of the current operator, fetching a corresponding data pointer object of a current operand in an input tensor tuple, and executing a computing task by the kernel function of the current operator by utilizing a value of the obtained data pointer object;
S343: consuming a writable data pointer object, which includes the following specific processes:
(1) traversing mut operands in the kernel function of the current operator and fetching a corresponding data pointer object of a current operand in an output tensor tuple;
(2) when the data pointer object of the output tensor corresponds to a plurality of instruction accesses, maintaining an instruction list for all instructions which access the data pointer object of a current output tensor, and adding the current instruction into the instruction list for the instructions which access the data pointer object; and
(3) carrying out a lock on a plurality of read-write accesses of the data pointer: fetching an access object of the current data pointer from an instruction access pool according to the access type of the current data pointer, the instruction, and the data pointer object; adding the access object into an access list of the current instruction, i.e., updating the access list of the current instruction by utilizing the access object; and adding the access object of the instruction into an access list of the writing operation maintained by the current data pointer object, i.e., updating the access list of the writing operation of the current data pointer by utilizing the access object of the instruction, in other words, writing data into the current data pointer; and
S344: constructing an edge of the instruction: analyzing the relationship between two instructions, for example, a one-read one-write relationship, a two-read relationship, or a two-write relationship, to construct the corresponding edges of the instructions and connect the two instructions.
The step S5 of scheduling the instruction by the virtual machine is as follows: when executing the current instruction, the virtual machine at runtime applies for a video memory for the output tensor, calls a kernel function of the operator on the current device to carry out computation, and writes the computation result into the output tensor. During the program running period, the virtual machine continuously carries out polling in a scheduler thread; it executes a new executable instruction if one exists, and otherwise continues polling. Scheduling the instruction by the virtual machine includes the following process:
S51: initializing a preparation list, judging whether the current instruction can be issued, and if yes, adding the current instruction into the preparation list;
S52: receiving the preparation list by a temporary list, traversing the temporary list, and fetching each instruction; and
S53: issuing the instruction by the virtual machine.
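The operand-type dependency rules of step S341 can be sketched as follows. This is an illustration, not the patented implementation: the function names are hypothetical, and only the basic rule is modeled (two instructions touching the same operand need an edge unless both only read it, i.e., unless both accesses are const).

```python
# Hypothetical sketch of S341/S344: derive edges between instructions from
# how each accesses a shared operand (const = read, mut/mut2 = write).
def needs_edge(first_access, second_access):
    # an edge is required unless both instructions only read the operand
    return not (first_access == "const" and second_access == "const")

def edges(instrs):
    """instrs: list of (name, {operand: access_type}) in issue order.
    Returns ordered (earlier, later) pairs that must be connected."""
    out = []
    for i, (a, ops_a) in enumerate(instrs):
        for b, ops_b in instrs[i + 1:]:
            if any(o in ops_b and needs_edge(ops_a[o], ops_b[o]) for o in ops_a):
                out.append((a, b))
    return out
```

The mut2 refinement from the text, where the later instruction's deduction must also wait for the earlier instruction's computation, would change what the edge enforces, not whether it exists, so it is omitted here.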
The step S53 includes the following process:
S531: adding the instruction to a corresponding device stream, which includes the following specific steps:
(i) fetching a device stream of the current instruction according to the metainformation of the instruction;
(ii) adding the current instruction into a runtime instruction list of the current device stream; and
(iii) adding the current device stream into an active device stream list;
S532: preparing the instruction by the device stream, which includes the following specific steps:
(i) fetching the type of the current device stream according to metainformation of the device stream; and
(ii) judging whether the type of the current device stream is located on the current scheduler thread; if yes, running the current instruction according to the type of the current device stream; otherwise, adding the current instruction into a preparation instruction list corresponding to the current device stream, and then running the instruction in the preparation list on a separate thread;
S533: running the instruction by the device stream, which includes the following specific steps:
(i) receiving the current instruction by a computing function of the type of the device stream;
(ii) setting a device ID according to the device ID of the device stream of the current instruction;
(iii) calling a computing function of the instruction according to the metainformation of the instruction, wherein the computing process of the instruction includes the following steps:
(1) acquiring device context information;
(2) acquiring a current operand;
(3) according to the device context information and the operand, allocating a memory to the output tensor and allocating a memory to a temporary tensor;
(4) according to the device context information and the operand, initializing a state and a cache of the kernel function of the current operator; and
(5) according to the device context information, the operand, and the state and the cache of the kernel function of the current operator, carrying out computation by the kernel function of the current operator; and
(iv) after the current instruction is executed, setting the state of the current instruction to be a "completed state"; and
S534: pre-scheduling the instruction: each time the preparation instruction list works, firstly placing the preparation instruction list into a temporary list and continuously carrying out scheduling on the temporary list; after the temporary list receives the instructions in the preparation list, the virtual machine directly executes the instructions in the temporary list; at the same time, in the working process, once an instruction capable of being pre-scheduled is found, the instruction is added into the preparation instruction list, such that the preparation instruction list can continue to receive the instructions sent to the virtual machine end by the interpreter.
The step S6 includes the following specific processes:
S61: actively querying the state of the completed current instruction by the virtual machine: traversing the current active device stream list, calling a query interface of the type of the device stream to query the state of the current instruction, judging whether each instruction in the runtime instruction list of each device stream is in the "completed state", and if yes, releasing the current instruction; and
S62: deleting the current instruction from the runtime instruction list of the current device stream.
The embodiment further discloses an apparatus of executing dynamic graph for neural network model computation in a deep learning training system. The apparatus includes an interpreter and a virtual machine. The apparatus mainly executes the following two processes: a process of parsing an instruction by the interpreter, and a process of scheduling the instruction and issuing the instruction to a device stream to execute computation by the virtual machine.
The process of parsing an instruction by the interpreter includes: deducing metainformation of an operator by utilizing the deducing function of the metainformation defined when the operator is registered, specializing a kernel function of the operator into a kernel function on a specified device, and constructing an instruction of the virtual machine. Finally, the interpreter takes charge of sending the current instruction to the virtual machine. The virtual machine is defined as follows: the runtime is abstracted to be the virtual machine. When a Python service code is executed, the interpreter sends an instruction for calling the kernel function of the operator to the virtual machine. When executing the instruction, the virtual machine applies for a video memory for an output tensor, the kernel function of the operator on the specified device is called to carry out computation, and the computation result is written into the output tensor. The specified device is decided by the setting of the user, most commonly an NVIDIA GPU device. The most important responsibility of the virtual machine is scheduling; it does not perform the specific work per se, and the specific work is done by the specific instruction of the operator. The virtual machine constructs a big directed acyclic graph from a Python service sequence, and the main responsibility of the virtual machine is constructing the dynamic directed acyclic graph. The process of scheduling the instruction by the virtual machine is actually a process of adding and deleting nodes on the dynamic directed acyclic graph: the addition of nodes refers to the continuous addition of instruction nodes to the dynamic directed acyclic graph, and the deletion of nodes refers to the deletion of the instruction nodes that are completed.
The scheduling process of the virtual machine is as follows: during the program running period, the virtual machine continuously carries out polling in a scheduling thread; if a new executable instruction exists, the new executable instruction is executed, and otherwise polling continues. A major cycle driver is designed on the outermost layer to drive the scheduler continuously; in the major cycle, once a processed node is present, the processed node is deleted from the directed acyclic graph, and when a newly added Python service code is present, a new instruction node is added into the directed acyclic graph. The specific polling process of the virtual machine is as follows: the virtual machine iterates over a preparation instruction list and an adding instruction list, and when the scheduler carries out polling, only these two lists need to be continuously processed; a dynamically processed instruction list that is ready for use is present in the directed acyclic graph; each time the preparation instruction list works, the preparation instruction list is first placed into a temporary list, and scheduling is continuously carried out on the temporary list; and in the working process, once an instruction capable of being pre-scheduled is found, the instruction is added into the preparation list. Through the above steps, the whole process of the implementation of dynamic graph execution for neural network model computation is completed, andFIG.2shows the instruction flowing process in the virtual machine scheduling phase. The dynamic graph executing process of ResNet-50 model computation is shown below by utilizing the NVIDIA Nsight Systems tool.
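The polling loop described above, with its swap between the preparation list and a temporary list, can be sketched minimally as follows. The function names and the callback-based shape are assumptions for illustration; the sketch keeps only the mechanism from the text: drain a temporary copy of the preparation list, and feed instructions found pre-schedulable during the round back into the preparation list.

```python
# Hypothetical sketch of the virtual machine's polling round: each round,
# the preparation list is swapped into a temporary list, which is drained;
# instructions found pre-schedulable along the way go back into the
# preparation list for the next round.
def schedule(preparation, run, find_preschedulable):
    executed = []
    while preparation:
        # swap the preparation list into a temporary list
        temporary, preparation[:] = preparation[:], []
        for instr in temporary:
            run(instr)
            executed.append(instr)
            # pre-schedulable instructions discovered during the round
            preparation.extend(find_preschedulable(instr))
        # completed instruction nodes would be deleted from the DAG here
    return executed
```

Because `preparation` is cleared in place before the temporary list is drained, new instructions arriving from the interpreter can keep accumulating in it without disturbing the round in progress, which is the point of the swap.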
An executing result includes the following runtime information:
D represents that the virtual machine issues an instruction;
S represents that the virtual machine issues the instruction to a device stream;
P represents that the virtual machine pre-schedules a prepared instruction; and
R represents an operation of releasing an executed instruction.
The process of the partial schematic diagram of the shown dynamic graph executing process for the ResNet-50 model computation is as follows: firstly, the virtual machine issues an instruction of a normalization operator; at this moment, an instruction of a conv2d operator is prepared, and the virtual machine pre-schedules one instruction of the conv2d operator; once the input of the normalization operator is used up, the input memory of the normalization operator may start to be released; and then the virtual machine starts to pre-schedule the release of the memory of the completed instruction of the normalization operator. With reference toFIG.3, an embodiment of the disclosure further provides a dynamic graph executing apparatus for neural network computation, including a memory and one or more processors. An executable code is stored in the memory, and the one or more processors, when executing the executable code, are configured to implement the method of executing dynamic graph for neural network computation in the above embodiment. The apparatus of executing dynamic graph for neural network computation in the embodiment of the present invention can be applied in any device with the data processing ability, which may be a device or an apparatus such as a computer. The apparatus embodiment may be implemented by software, or may be implemented by hardware or in a software and hardware combined mode.
Taking software implementation as an example, an apparatus in the logical sense is formed by a processor of any device with the data processing ability where the apparatus is located reading a corresponding computer program instruction from a nonvolatile memory into a memory and running it. From the hardware level, as shown inFIG.3,FIG.3is a hardware structure diagram of any device with the data processing ability where the apparatus of executing dynamic graph for neural network computation of the invention is located; besides the processor, the memory, the network interface, and the nonvolatile memory shown inFIG.3, the device where the apparatus in the embodiment is located generally may further include other hardware according to the actual functions of the device, which is not described in detail herein. The implementing process of the function and the effect of each unit in the apparatus may refer to the implementing process of the corresponding steps in the method, which is not described in detail herein. Since the apparatus embodiment basically corresponds to the method embodiment, the related parts may refer to the description of the corresponding part in the method embodiment. The above-described apparatus embodiment is merely schematic, wherein the units illustrated as separate parts may or may not be separated physically, and parts displayed as units may or may not be physical units, i.e., they may be located at one place or may be distributed onto a plurality of network units. Part or all of the modules may be selected according to actual demands to achieve the purposes of the solutions of the present invention. Those of ordinary skill in the art can understand and implement the solutions without any inventive work. An embodiment of the disclosure further provides a computer readable storage medium which stores a program.
The program, when executed by a processor, implements the method of executing dynamic graph for neural network computation in the above embodiment. The computer readable storage medium may be an internal storage unit of any device with the data processing ability according to any of the above embodiments, e.g., a hard disk or a memory. The computer readable storage medium may also be an external storage device of any device with the data processing ability, e.g., a plug-in type hard disk, a Smart Media Card (SMC), an SD card, or a Flash card equipped on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of the any device with the data processing ability. The computer readable storage medium is used for storing the computer program and other programs and data required by the any device with the data processing ability, and may also be used for temporarily storing data which has been output or is to be output. The above description is merely preferred embodiments of the present disclosure, and is not intended to limit the disclosure; the scope of the disclosure is determined by the appended claims. Any modifications, equivalent replacements, improvements, and the like made within the spirit and the principle of the disclosure shall fall within the scope of protection of the disclosure.
11861506 | DETAILED DESCRIPTION Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. Referring now to the drawings, and more particularly toFIG.1through6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method. FIG.1illustrates an exemplary system for packing products with increased efficiency across packaging levels according to some embodiments of the present disclosure. In an embodiment, the system100includes one or more processors104, communication interface device(s) or Input/Output (I/O) interface(s)106, and one or more data storage devices or memory102operatively coupled to the one or more processors104. The memory102comprises a database108. The one or more processors104that are hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system100can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand held devices, workstations, mainframe computers, servers, a network cloud, and the like. 
The I/O interface device(s)106can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server. The memory102may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. Further, the memory may include functional modules of the system as shown inFIG.2. The database108may store information including, but not limited to, information associated with at least one of: (i) primary dimensions of a plurality of primary packages and (ii) secondary dimensions of standard secondary packages. Further, the database108stores information pertaining to inputs fed to the system100and/or outputs generated by the system (e.g., at each stage), specific to the methodology described herein. More specifically, the database108stores information of primary packages and standardized secondary packages. Functions of the components of system100are explained in conjunction with diagrams depicted inFIGS.2through5for packing of products with increased efficiency across packaging levels. In an embodiment, the system100comprises one or more data storage devices or the memory102operatively coupled to the processor(s)104and is configured to store instructions for execution of steps of the method depicted inFIG.3and process of the method depicted inFIG.4by the processor(s) or one or more hardware processors104.
The steps of the method of the present disclosure will now be explained with reference to the components or blocks of the system100as depicted inFIG.1, the functional flow diagram of the system100as depicted inFIG.2and the steps of the flow diagrams as depicted inFIGS.3and4. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously. FIG.2illustrates a functional flow of the system ofFIG.1for packing of products with increased efficiency across packaging levels, according to some embodiments of the present disclosure. The memory102of the system100comprises a crafter202, a secondary packaging unit204and a tertiary packaging unit206. The database108includes a dataset comprising primary dimensions of a plurality of primary packages. The crafter202periodically determines secondary dimensions of one or more standard secondary packages for packing the plurality of primary packages using a process400illustrated inFIG.4. During working of the system100, one or more primary packages are transferred to the secondary packaging unit204in an online fashion. Further, the secondary packaging unit204packs each of the one or more primary packages into suitable secondary packages having secondary dimensions equal to one of the standard secondary packages. Further, the secondary packaging unit204calculates secondary packing efficiency as the ratio of the total volume of the primary package within the secondary package to the volume of the corresponding secondary package.
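The secondary (and, analogously, tertiary) packing efficiency described above is a simple volume ratio; the following is a minimal illustrative sketch in Python (function and argument names are hypothetical, not part of the embodiment):

```python
def packing_efficiency(inner_dims, outer_dims):
    # Fill rate: total volume of the inner packages divided by the
    # volume of the outer package that contains them.
    inner_volume = sum(l * b * h for (l, b, h) in inner_dims)
    outer_volume = outer_dims[0] * outer_dims[1] * outer_dims[2]
    return inner_volume / outer_volume
```

For example, a single 1×1×1 primary package inside a 2×1×1 secondary package yields an efficiency of 0.5.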
Further, the secondary packages are transferred to the tertiary packaging unit206wherein the secondary packages are packed into tertiary packages based on a Mixed Integer Linear Programming (MILP) optimization model disclosed herein. Further, the tertiary packaging unit206calculates tertiary packing efficiency as the ratio of the total volume of the secondary packages within the tertiary package to the volume of the corresponding tertiary package. Further, the secondary packing efficiency and tertiary packing efficiency are reported to the crafter202which periodically updates secondary dimensions of standard secondary packages based on the secondary and tertiary packing efficiencies. FIGS.3A,3B and3C, collectively referred to asFIG.3, depict an exemplary flowchart illustrating a method300of packing of products with increased efficiency across packaging levels, using the system ofFIG.1, according to some embodiments of the present disclosure. At step302of the method300, the one or more hardware processors104are configured to obtain, from the database108, a dataset comprising primary dimensions of a plurality of primary packages, wherein the primary dimensions include length, breadth, and height of each of the plurality of primary packages. Further at step304of the method300, the one or more hardware processors104are configured to apply a clustering technique on the primary dimensions of the plurality of primary packages to create a plurality of clusters of the plurality of primary packages. Each of the plurality of clusters has a subset of packages from the plurality of primary packages. In an embodiment, the K-means clustering technique is applied on the primary dimensions of the plurality of primary packages, wherein the value of K, which denotes the number of clusters to be created, is equal to the number of standard secondary packages required and can be configured by an operator of the system100.
While in the state-of-the-art K-means clustering technique each cluster is represented by its centroid, the present disclosure represents each cluster by the maximum of each primary dimension of the primary packages. This is done so that the subset of primary packages within each cluster can fit within the primary package representing the cluster. Due to this change in cluster representation, the objective function of the K-means clustering technique also changes. While the objective function of the state-of-the-art K-means clustering technique minimizes the distance between the points within a cluster and the centroid, the objective function of the K-means technique of the present disclosure minimizes the sum of ratios of the volume of each of the subset of primary packages within a cluster to the volume of the primary package representing the cluster, to achieve a high fill-rate (packing efficiency). These changes in the K-means clustering technique ensure a fill-rate of 70-77% for various datasets of primary dimensions of the plurality of primary packages, with a median packing efficiency of around 72%. Further at step306of the method300, the one or more hardware processors104are configured to identify secondary dimensions for a plurality of standard secondary packages, wherein each of the plurality of standard secondary packages is associated with a corresponding cluster among the plurality of clusters and has the secondary dimensions equal to the maximum of each of the primary dimensions of the subset of packages within the corresponding cluster. Each of the subset of primary packages within each of the plurality of clusters is packed, by the secondary packaging unit204, inside a secondary package having secondary dimensions equal to the standard secondary package of the corresponding cluster. Thus, at the end of execution of step306, there are a plurality of secondary packages, each having a primary package packed within it.
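The modification of K-means described above — representing each cluster by the elementwise maximum of its members' dimensions rather than by a centroid, and assigning packages so as to favor a high fill ratio — can be sketched as follows. This is an illustrative Python approximation, not the embodiment's exact algorithm; the initialization and tie-breaking rules here are assumptions:

```python
import random

def fits(p, rep):
    # A package fits a representative if it is no larger in every dimension.
    return all(pd <= rd for pd, rd in zip(p, rep))

def volume(d):
    return d[0] * d[1] * d[2]

def max_rep(cluster):
    # Cluster representative = elementwise maximum, so every member fits inside it.
    return tuple(max(dim) for dim in zip(*cluster))

def modified_kmeans(packages, k, iters=20, seed=0):
    rng = random.Random(seed)
    reps = list(rng.sample(packages, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in packages:
            # Prefer the representative the package fills best;
            # if it fits nowhere, pick the one needing the least enlargement.
            candidates = [j for j in range(k) if fits(p, reps[j])]
            if candidates:
                j = max(candidates, key=lambda j: volume(p) / volume(reps[j]))
            else:
                j = min(range(k), key=lambda j: volume(max_rep([p, reps[j]])))
            clusters[j].append(p)
        reps = [max_rep(c) if c else reps[j] for j, c in enumerate(clusters)]
    return reps, clusters
```

By construction, after the final representative update every package fits inside its own cluster's representative, mirroring the requirement that each primary package fit its cluster's standard secondary package.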
The step306is further explained using the process400illustrated inFIG.4, according to some embodiments of the present disclosure. At step402of the process400, a primary package with maximum primary dimensions from each of the plurality of clusters is selected. The selected primary package is identified as the initial standard secondary package for the corresponding cluster, and the initial standard secondary packages identified for each of the plurality of clusters together form a set of standard secondary packages. Further at step404of the process400, ratios of the volume of each of the plurality of primary packages to the volume of each standard secondary package from the set of standard secondary packages are calculated. Further, at step406of the process400, one or more standard secondary packages are identified from the set of standard secondary packages for each of the plurality of primary packages based on conditions comprising: (i) the calculated ratio of volumes is less than 1, and (ii) the dimensions of the primary package are less than the dimensions of the secondary package. Further, at step408of the process400, a final standard secondary package is selected, for each of the plurality of primary packages, from among the one or more standard secondary packages identified for it. The selected final standard secondary package has the least volume among the one or more standard secondary packages identified for that primary package. Further, at step410of the process400, each of the plurality of primary packages is reassigned to the cluster corresponding to its selected final standard secondary package.
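Steps 404-410 of the process400amount to choosing, for every primary package, the feasible standard secondary package of least volume. A hedged Python sketch follows (names are hypothetical; equality of dimensions is treated as feasible here for simplicity, whereas the embodiment states strict inequalities):

```python
def volume(d):
    return d[0] * d[1] * d[2]

def fits(p, box):
    # Dimension-wise feasibility check (equality allowed in this sketch).
    return all(pd <= bd for pd, bd in zip(p, box))

def assign_standard_boxes(primaries, standard_boxes):
    # For each primary package, keep the feasible standard secondary
    # packages and select the one of least volume (None if none fit).
    assignment = {}
    for i, p in enumerate(primaries):
        feasible = [b for b in standard_boxes
                    if volume(p) / volume(b) <= 1 and fits(p, b)]
        assignment[i] = min(feasible, key=volume) if feasible else None
    return assignment
```

A primary package that fits both a 3×3×3 and a 2×2×2 standard box is assigned the 2×2×2 one, since it has the least volume among the feasible candidates.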
Thus, once the plurality of primary packages are reassigned to the cluster corresponding to the selected final standard secondary package, then referring back to the steps of the method300, at step308of the method300, the one or more hardware processors104are configured to calculate secondary packing efficiency for each of the plurality of secondary packages. The secondary packing efficiency is calculated as the ratio of the total volume of the primary package within the secondary package to the volume of the corresponding secondary package. Further at step310of the method300, the one or more hardware processors104are configured to pack the secondary packages within one or more tertiary packages, via the tertiary packaging unit206, based on a Mixed Integer Linear Programming (MILP) optimization model, alternatively referred to as the MILP model, comprising an objective function which maximizes space utilization within the one or more tertiary packages, based on a plurality of heuristics, subject to a plurality of packing constraints including geometric constraints, vertical stability constraints, and efficient packing constraints. In an embodiment, the objective function of the MILP optimization model is given by equation 1:

\min\Big\{\sum_{i\in P_L}\Big(w_1(x_i+y_i)+w_2\,\bar{z}_i+w_3\sum_{j=1}^{N_B}p(i,j)\cdot j\Big)\Big\} \quad (1)

As understood by a person skilled in the domain, the back-left-bottom corner of the tertiary package is treated as the origin. For each secondary package i placed inside a tertiary package, the coordinates (x_i, y_i, z_i) and (\bar{x}_i, \bar{y}_i, \bar{z}_i) denote the front-left-bottom and the back-right-top corners, respectively. As understood by a person skilled in the domain, the points (x_i, y_i, z_i) and (\bar{x}_i, \bar{y}_i, \bar{z}_i) uniquely determine the position of a secondary package inside a tertiary package. In equation 1, P_L denotes the set of secondary packages to be packed within one or more tertiary packages;

p(i,j) = \begin{cases} 1, & \text{if secondary package } i \text{ is placed within tertiary package } j \\ 0, & \text{otherwise} \end{cases}

and N_B denotes the total number of currently open tertiary packages.
Each open tertiary package is assigned an index j ∈ {1, 2, . . . , J}, where j < k with j, k ∈ {1, 2, . . . , N_B} implies that tertiary package j was opened before tertiary package k. The objective function of equation 1 minimizes a sum of three components: (i) w_1(x_i + y_i), (ii) w_2\,\bar{z}_i and (iii) w_3\sum_{j=1}^{N_B} p(i,j)\cdot j, representing the floor building, column building and first-fit heuristics respectively. The first component follows the floor building heuristic by minimizing the sum of x_i and y_i for a given secondary package i, thereby minimizing the spread of the secondary packages on the floor. The second component follows the column building heuristic by minimizing \bar{z}_i, thereby placing the secondary package i in such a way that the overall height of the packing arrangement, after considering all permissible orientations of the secondary package, is minimized. The third component follows the first-fit heuristic wherein tertiary packages opened earlier are preferred over the tertiary packages opened at a later point in time. The non-negative quantities w_1, w_2 and w_3 denote the weights of the first, second and third components respectively. The first and second components act as countermeasures to each other. The weights w_1 and w_2 denote which component is given more importance, and these depend on the ratio of the height of the tertiary package to its base area. For instance, if the available tertiary packages are roller-cages which have a larger height compared to their base area, then the weight w_1 is chosen higher than w_2. On the other hand, if the tertiary package is a large flat long-distance container with a larger base area and smaller height, then w_2 is chosen greater than w_1. Typically, the value of w_3 is chosen very high (between 10 and 100) since the usual values of x_i, y_i and \bar{z}_i in centimeters are 2 orders of magnitude higher than p(i,j). The MILP optimization model disclosed herein packs the secondary packages by solving the online bin packing problem, which is well known in the art, as a series of offline bin packing problems.
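For a single candidate placement, the three weighted terms of equation 1 can be evaluated directly. The following toy Python sketch (hypothetical names; the default weights mirror the w_1 = w_2 = 1, w_3 = 100 setting used in the experiments later in this disclosure) illustrates how the floor-building, column-building and first-fit terms trade off:

```python
def placement_cost(x, y, z_top, bin_index, w1=1.0, w2=1.0, w3=100.0):
    # Cost of one candidate placement per equation 1:
    # floor-building term + column-building term + first-fit term
    # (tertiary packages indexed from 1, earliest opened first).
    return w1 * (x + y) + w2 * z_top + w3 * bin_index

def best_placement(candidates, **weights):
    # candidates: iterable of (x, y, z_top, bin_index) tuples.
    return min(candidates, key=lambda c: placement_cost(*c, **weights))
```

Among candidates, the placement minimizing the combined cost wins; with the high w_3, a spot in the earliest-opened tertiary package is strongly preferred even if it is taller or further from the origin.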
Suppose at time t ∈ ℕ₀ there are ℓ secondary packages (alternately referred to as boxes in the online bin packing problem) in the look-ahead, N_B tertiary packages (alternately referred to as bins in the online bin packing problem) are currently open, and N secondary packages are already placed in open tertiary packages. The offline bin packing problem is formulated as minimizing the objective function described in equation 1 subject to a plurality of packing constraints for the N + ℓ secondary packages to be placed in N_B tertiary packages. However, since positions of secondary packages already placed within tertiary packages cannot be changed, the parameters corresponding to the secondary packages already placed within tertiary packages are updated. For example, suppose at time t = 0, the MILP optimization model has to place the first ℓ secondary packages into N_B open tertiary packages. Upon solving the MILP optimization model, a secondary package from P_L is placed into one of the open tertiary packages. Further, one more secondary package is added to P_L so that ℓ is constant. At t = 1, the MILP optimization model consists of ℓ (look-ahead) + 1 (already in tertiary package) secondary packages with N_B open tertiary packages, and the parameters corresponding to the secondary package within the tertiary package are fixed to the value obtained at t = 0. This procedure is repeated iteratively. Though the number of secondary packages may increase, the number of open tertiary packages is always ≤ N_B, and hence the MILP optimization model always solves a finite optimization problem. As understood by a person skilled in the domain, P_B = {1, 2, . . . , N} denotes the set of secondary packages already placed within one or more tertiary packages; P_L = {1, 2, . . . , ℓ} denotes the set of secondary packages in the look-ahead (where ℓ denotes the look-ahead size); and the set {1, 2, . . . , N_B} denotes the set of tertiary packages open at a particular instance of time.
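The iterative procedure above — solve an offline instance over the look-ahead plus the already-fixed placements, commit one placement, refill the look-ahead — can be sketched as a rolling-horizon loop. This is illustrative Python; `solve_offline` stands in for the MILP solve and is an assumption, not the embodiment's solver:

```python
from collections import deque

def online_pack(stream, lookahead, solve_offline):
    # Rolling horizon: keep up to `lookahead` packages buffered, solve an
    # offline instance over buffer + fixed placements, commit one placement.
    buffer, placed = deque(), []
    it = iter(stream)
    for pkg in it:
        buffer.append(pkg)
        if len(buffer) == lookahead:
            break
    while buffer:
        # solve_offline returns (placement, chosen_package); already-placed
        # packages in `placed` are treated as fixed and cannot move.
        placement, chosen = solve_offline(list(buffer), placed)
        placed.append(placement)
        buffer.remove(chosen)  # removes the first matching entry
        nxt = next(it, None)   # refill so the look-ahead stays constant
        if nxt is not None:
            buffer.append(nxt)
    return placed
```

Because only the buffer and the bounded set of open bins enter each solve, every iteration is a finite offline problem, matching the observation above.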
Every time a new tertiary package is opened, a tertiary package opened previously is closed and the indices of the tertiary packages in the set of tertiary packages are reset. As understood by a person skilled in the domain, the following binary variables are defined for all i, k ∈ P_B ∪ P_L (indices of secondary packages); j ∈ {1, 2, . . . , N_B} (indices of tertiary packages that are open); and s ∈ {x, y, z} (coordinates of a secondary package within a tertiary package):

p_{i,j} = \begin{cases} 1, & \text{if secondary package } i \text{ is placed within tertiary package } j \\ 0, & \text{otherwise} \end{cases} \quad (2)

u_j = \begin{cases} 1, & \text{if tertiary package } j \text{ was used} \\ 0, & \text{otherwise} \end{cases} \quad (3)

s^p_{ik} = \begin{cases} 1, & \text{if secondary package } i \text{ is placed at the right of secondary package } k\ (\bar{s}_k \le s_i) \\ 0, & \text{otherwise } (s_i < \bar{s}_k) \end{cases} \quad (4)

The variable s^p_{ik} as defined by equation 4 indicates the relative position of secondary package i with respect to secondary package k along the x, y and z axes, and ensures that the secondary packages i and k do not overlap each other inside a tertiary package. As understood by a person skilled in the domain, the following binary variable is required for specifying the rotations allowed for placement of the secondary packages:

r_i^{ab} = \begin{cases} 1, & \text{if side } b \text{ of secondary package } i \text{ is along the } a\text{-axis of the tertiary package} \\ 0, & \text{otherwise} \end{cases} \quad (5)

The variable r_i^{ab} is defined by equation 5 for every i ∈ P_B ∪ P_L and for each a, b ∈ {1, 2, 3}, and checks the degree of freedom of a robotic arm which is used to place a secondary package within a tertiary package. For example, if the length of the secondary package is placed along the length of the tertiary package, then r_i^{11} = 1. The plurality of packing constraints of the MILP optimization model include geometric constraints, vertical stability constraints, and efficient packing constraints. The geometric constraints ensure that placement of a secondary package inside a tertiary package is geometrically feasible. The geometric constraints are defined by equations 6-13 for all i ∈ P_B ∪ P_L, j ∈ {1, 2, . . . , N_B}, s ∈ {x, y, z} and a, b ∈ {1, 2, 3}.
The geometric constraints ensure that a tertiary package is used only if a secondary package is placed within the tertiary package, that a secondary package is always placed within a tertiary package, and that the feasible orientations of the secondary package are respected:

\sum_{i\in P_B\cup P_L} p_{ij} \le u_j \quad (6)

\sum_{j=1}^{N_B} p_{ij} = 1 \quad (7)

\bar{s}_i \le \sum_{j=1}^{N_B} L_j\,p_{ij} \quad (8)

\bar{x}_i - x_i = r_i^{11}l_i + r_i^{12}b_i + r_i^{13}h_i \quad (9)

\bar{y}_i - y_i = r_i^{21}l_i + r_i^{22}b_i + r_i^{23}h_i \quad (10)

\bar{z}_i - z_i = r_i^{31}l_i + r_i^{32}b_i + r_i^{33}h_i \quad (11)

\sum_{b} r_i^{ab} = 1 \quad (12)

\sum_{a} r_i^{ab} = 1 \quad (13)

\sum_{s\in\{x,y,z\}} (s^p_{ik} + s^p_{ki}) \ge (p_{ij} + p_{kj}) - 1 \quad (14)

\bar{s}_k \le s_i + (1 - s^p_{ik})D \quad (15)

s_i + 1 \le \bar{s}_k + s^p_{ik}D \quad (16)

Equations 9-11 describe the degrees of freedom the robotic arm has while placing a secondary package. The constraints defined by equations 14-16 ensure that secondary packages do not overlap each other within a tertiary package. Equations 14-16 are defined for all i, k ∈ P_B ∪ P_L, j ∈ {1, 2, . . . , N_B}, s ∈ {x, y, z}, with D = L, D = B and D = H corresponding to s = x, s = y and s = z respectively. The vertical stability constraints of the MILP optimization model ensure that there is a stable packing arrangement and the secondary package being placed gets adequate support at the bottom and does not float in the air. This is ensured by placing a secondary package either on the floor of the tertiary package or in a location where at least 3 vertices of the base of the secondary package are supported by underlying secondary packages. As understood by a person skilled in the domain, a vertex of secondary package i is said to be supported by another secondary package k if its height is the same as the base of secondary package i and there is overlap between the two secondary packages in the X-Y plane. This is mathematically represented by variables g_i and β_{ikl}, which are declared according to equations 17 and 18 for every i ∈ P_L, k ∈ P_B ∪ P_L, and l ∈ {1, 2, 3, 4}, wherein l denotes the base vertices of secondary package i.
g_i = \begin{cases} 1, & \text{if secondary package } i \text{ is on the ground } (z_i = 0) \\ 0, & \text{otherwise} \end{cases} \quad (17)

\beta_{ikl} = \begin{cases} 1, & \text{if vertex } l \text{ of secondary package } i \text{ is supported by secondary package } k \\ 0, & \text{otherwise} \end{cases} \quad (18)

The vertical stability constraints of the MILP optimization model are formulated according to equation 19, which ensures that the number of supported vertices is either ≥ 0, when the secondary package is placed on the floor of the container, or ≥ 3, otherwise:

\sum_{l=1}^{4}\sum_{k\in P_B\cup P_L}\beta_{ikl} \ge 3(1 - g_i), \quad \forall i \in P_L \quad (19)

The efficient packing constraints of the MILP optimization model ensure that there are no gaps (or "holes") in the packing arrangement, to increase the fill-rate. This is achieved by ensuring that, at every feasible location for a new secondary package, at least two of the surfaces of the new secondary package either touch the secondary packages already placed (along the X-Y plane) or the walls of the tertiary package. This is mathematically represented by variables d^c_{ik} and d^w_{ijw}, which are declared according to equations 20 and 21 for all i ∈ P_L, k ∈ P_B ∪ P_L, j ∈ {1, 2, . . . , N_B}, c ∈ {x, y} and w ∈ {1, 2, 3, 4}, wherein w denotes the walls of the tertiary package:

d^c_{ik} = \begin{cases} 1, & \text{if secondary package } i \text{ is in contact with secondary package } k \text{ along the } c\text{-axis} \\ 0, & \text{otherwise} \end{cases} \quad (20)

d^w_{ijw} = \begin{cases} 1, & \text{if secondary package } i \text{ is in contact with wall } w \text{ of tertiary package } j \\ 0, & \text{otherwise} \end{cases} \quad (21)

The efficient packing constraints of the MILP optimization model are formulated according to equations 22-28 for every i ∈ P_L, k ∈ P_B ∪ P_L, j ∈ {1, 2, . . . , N_B}, c ∈ {x, y}, with D = L, m = j, p = 1, q = 3 for c = x and D = B, m = 1, p = 3, q = 4 for c = y.
c_i \le \bar{c}_k + (1 - d^c_{ik})D \quad (22)

c_i \ge \bar{c}_k + (d^c_{ik} - 1)D \quad (23)

c_i \le (m-1)D + (1 - d^w_{ijp})D \quad (24)

c_i \ge (m-1)D + (d^w_{ijp} - 1)D \quad (25)

\bar{c}_i \le mD + (1 - d^w_{ijq})D \quad (26)

\bar{c}_i \ge mD + (d^w_{ijq} - 1)D \quad (27)

\sum_{i\in P_L}\sum_{j=1}^{N_B}(d^w_{ij1} + d^w_{ij3}) + \sum_{i\in P_L}\sum_{k\in P_B\cup P_L} d^x_{ik} \le 1 \quad (28)

\sum_{i\in P_L}\sum_{j=1}^{N_B}(d^w_{ij2} + d^w_{ij4}) + \sum_{i\in P_L}\sum_{k\in P_B\cup P_L} d^y_{ik} \le 1 \quad (29)

\sum_{i\in P_L}\sum_{j=1}^{N_B}\sum_{e=1}^{4} d^w_{ije} + \sum_{i\in P_L}\sum_{k\in P_B\cup P_L}\sum_{c\in\{x,y\}} d^c_{ik} \ge 2 \quad (30)

When a secondary package is being placed, the search for feasible locations has to guarantee that at least two of its four vertical surfaces will touch those of the secondary packages already placed inside the tertiary package and/or the walls of the tertiary package. However, there is also a need to remove redundancy within the formulation, which may arise from counting the same surface twice if it touches two or more secondary packages. These requirements are guaranteed by the constraints in equations 28-30, which build on the contact variables constrained by equations 22-27. Thus, the MILP optimization model, as described above, enables packing of secondary packages inside one or more tertiary packages by iteratively identifying feasible locations for each secondary package in the look-ahead and packing them within a suitable tertiary package. If at any point in time no feasible location is identified for any of the secondary packages, the tertiary package with maximum tertiary packing efficiency is closed and a new tertiary package is opened for packing the remaining secondary packages. Once the process of tertiary packing is completed as in step310, further at step312of the method300, the one or more hardware processors104are configured to calculate tertiary packing efficiency for each of the one or more tertiary packages. The tertiary packing efficiency is calculated as the ratio of the total volume of secondary packages within the tertiary package to the volume of the corresponding tertiary package.
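Two of the feasibility checks that the constraints encode — pairwise non-overlap (equations 14-16) and the three-supported-vertices stability rule (equation 19) — can also be expressed procedurally. The following is an illustrative Python sketch (function names and the tolerance `eps` are assumptions), with each box given as a ((x, y, z), (x̄, ȳ, z̄)) corner pair:

```python
def overlap_3d(box_a, box_b):
    # Two axis-aligned boxes overlap iff their intervals overlap on
    # every axis -- the logic behind equations 14-16.
    (a_lo, a_hi), (b_lo, b_hi) = box_a, box_b
    return all(a_lo[s] < b_hi[s] and b_lo[s] < a_hi[s] for s in range(3))

def supported(box, below_boxes, eps=1e-9):
    # Stability per equation 19: the box rests on the floor (z = 0) or at
    # least 3 of its 4 base vertices lie on the top face of some box below
    # (matching height plus X-Y containment of the vertex).
    (x0, y0, z0), (x1, y1, _) = box
    if z0 <= eps:
        return True
    verts = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    count = 0
    for vx, vy in verts:
        for (bx0, by0, _), (bx1, by1, bz1) in below_boxes:
            if abs(bz1 - z0) <= eps and bx0 <= vx <= bx1 and by0 <= vy <= by1:
                count += 1
                break
    return count >= 3
```

A box resting flush on top of another, with all four base vertices over it, passes the stability check; an overhanging box with only one supported vertex fails, matching the "no floating packages" requirement above.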
Further at step314of the method300, the one or more hardware processors104are configured to multiply the average of the secondary packing efficiency of each of the subset of primary packages and the average of the tertiary packing efficiency of each of the one or more tertiary packages to get a product of packing efficiencies. Further at step316of the method300, the one or more hardware processors104are configured to select the standard secondary packages, and corresponding clusters, whose product of packing efficiencies is maximum, wherein the selected standard secondary packages are utilized for packing newly obtained primary packages to achieve increased efficiency across packaging levels. Experimental Results Fill-rates obtained by tertiary packaging based on the Mixed Integer Linear Programming (MILP) optimization model are compared with those of the first-fit heuristic, the best-fit heuristic, Jampacker (M. Agarwal, S. Biswas, C. Sarkar, S. Paul and H. S. Paul, "Jampacker: An Efficient and Reliable Robotic Bin Packing System for Cuboid Objects," in IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 319-326, April 2021, doi: 10.1109/LRA.2020.3043168) and MILP-Lite, which is a variant of the MILP optimization model disclosed herein. The values of the weights w_1, w_2 and w_3 in the MILP optimization model are considered to be 1, 1, and 100 respectively. The secondary packages are packed in an online fashion using a circular conveyor with a fixed look-ahead ℓ and a fixed N_B = 3. The experiments have been performed on hardware comprising a laptop with a 4-core, i5-6200U processor with a speed of 2.3 GHz and a memory of 4 GB. Dataset Description The experiments were performed on two datasets. The first dataset considered for the experiment consists of nearly a million secondary packages which were packed in a sorting center over a period of one year. Initially, a smaller dataset was created by randomly sampling 10,000 secondary packages from the dataset.
Further, the secondary packages are classified as small (65-80 secondary packages per tertiary package), medium (50-65 secondary packages per tertiary package), and large (35-50 secondary packages per tertiary package) depending on how many secondary packages were required to optimally fill up a tertiary package. The size of the tertiary package used was 120 cm×80 cm×80 cm. Twenty-five collections of secondary packages (with small, medium, and large packages mixed) were created by random sampling such that the secondary packages from each collection would optimally fill 4 tertiary packages. A crucial feature of this dataset is that the dimensions of the secondary packages in most cases are not integers; hence, a ceiling function is used while placing the secondary packages. However, the results are reported on the actual dimensions of the secondary packages. The second dataset considered for the experiment is synthetically generated by identifying secondary dimensions of a plurality of secondary packages by dividing the 3-dimensional space of a tertiary package (of size 80 cm×45 cm×45 cm) in such a way that, when packed optimally, the identified secondary packages would perfectly fit within the tertiary package. Care is taken so that the plurality of secondary packages are all cuboids; i.e., they are constructed by cutting planes that are orthogonal to each other. In addition to the size of the tertiary package considered (80 cm×45 cm×45 cm), experiments with several other sizes (mimicking industrial tertiary packages (bins) such as roller-cages, pallets, etc.) have been performed, and the results obtained are consistent across all these tertiary packages. Results and Analysis FIG.5illustrates mean tertiary packing efficiencies for experiments conducted on the first dataset of secondary packages, according to some embodiments of present disclosure.
Table 1 represents the mean tertiary packing efficiencies (across all 25 collections of secondary packages) for the first-fit heuristic, best-fit heuristic, Jampacker, MILP optimization model, and MILP-Lite. The first column of Table 1 lists the tertiary packing method used and the subsequent columns provide the mean tertiary packing efficiencies obtained for each tertiary packing method with look-ahead (ℓ) of 1, 2, 3, 4, 5, and 6.

TABLE 1
Tertiary packing method    ℓ = 1    ℓ = 2    ℓ = 3    ℓ = 4    ℓ = 5    ℓ = 6
First-fit                  64.91%   65.67%   67.42%   68.06%   69.18%   70.05%
Best-fit                   62.71%   64.21%   64.71%   67.11%   66.67%   68.22%
Jampacker                  60.77%   62.11%   61.84%   61.57%   61.38%   63.42%
MILP optimization model    68.26%   69.99%   70.91%   71.29%   72.41%   72.89%
MILP-Lite                  68.01%   68.38%   69.06%   70.12%   70.2%    70.95%

The computation time for the first-fit, best-fit, and Jampacker heuristics is around 0.06-0.1 seconds per secondary package, but relatively more for the MILP optimization model. Further, the computation time increases with the size of the MILP problem; i.e., with the look-ahead (ℓ). To alleviate this issue, alternate embodiments of the present disclosure use a "lite" version of the MILP optimization model (referred to as MILP-Lite) which decomposes a problem of ℓ-look-ahead into ℓ problems of 1-look-ahead. The computation time thus increases linearly with the look-ahead, with a marginal sacrifice of efficiency (as shown in Tables 1 and 2), whereas the computation time for the MILP optimization model increases exponentially with increasing look-ahead. For instance, while the computation time jumps from 1.29 sec/secondary package to 3.22 sec/secondary package to 10.71 sec/secondary package for ℓ = 2 to ℓ = 4 for the MILP optimization model, for MILP-Lite the corresponding numbers are 1.3 sec/secondary package for ℓ = 2; 1.82 sec/secondary package for ℓ = 3; and 2.5 sec/secondary package for ℓ = 4. However, all of these computation times are well within the bounds of the robot decision-making operation (around 8-10 seconds per secondary package).
From Table 1 andFIG.5, it can be inferred that the MILP optimization model (either the original or the lite version) achieves a higher packing efficiency than the other tertiary packing methods. FIG.6illustrates mean tertiary packing efficiencies for experiments conducted on the second dataset of secondary packages, according to some embodiments of present disclosure. Table 2 represents the mean tertiary packing efficiencies for the first-fit heuristic, best-fit heuristic, Jampacker, MILP optimization model, and MILP-Lite. The first column of Table 2 lists the tertiary packing method used and the subsequent columns provide the mean tertiary packing efficiencies obtained for each tertiary packing method with look-ahead (ℓ) of 1, 2, 3, 4, 5, and 6.

TABLE 2
Tertiary packing method    ℓ = 1     ℓ = 2    ℓ = 3     ℓ = 4     ℓ = 5    ℓ = 6
First-fit                  65.45%    66.49%   67.57%    68.485%   68.96%   69.555%
Best-fit                   68.71%    69.99%   70.435%   71.28%    71.74%   72.35%
Jampacker                  64.57%    65.22%   65.03%    64.77%    64.51%   65.43%
MILP optimization model    73.925%   75.03%   75.945%   76.955%   77.93%   78.32%
MILP-Lite                  73.11%    74.05%   74.83%    75.4%     75.99%   76.27%

From Table 2 andFIG.6it can be inferred that the MILP optimization model and MILP-Lite achieve a higher packing efficiency than the other tertiary packing methods. Also, the packing efficiencies for the second dataset are higher than those for the first dataset due to the nature of the data: the dimensions are integers in the second dataset, compared to decimal dimensions (and hence the use of a ceiling function) in the first dataset. The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art.
Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims. The embodiments of the present disclosure herein address the unresolved problem of packing products with increased efficiency across packaging levels. The embodiments thus provide a systematic and efficient method of packing primary packages (or primary products) into secondary and tertiary packages. This is achieved by standardizing the size of secondary packages, packing the secondary packages within tertiary packages using an MILP optimization model based on packing heuristics, and providing feedback between the tertiary and secondary packaging levels to identify standard secondary packages which can pack the primary packages with higher packing efficiency. While the prior arts address improving packing efficiency in any one of the packaging levels, the present disclosure addresses all the levels of packaging and improves the overall packing efficiency. It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed, including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof.
The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs. The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. 
Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media. It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims. | 37,190 |
11861507 | DESCRIPTION OF THE EMBODIMENTS Embodiments of the present invention will be described with reference to the accompanying drawings, wherein like parts are designated by like reference numerals throughout, and wherein the leftmost digit of each reference number refers to the drawing number of the figure in which the referenced part first appears. Overview of a Suggestion Engine As summarized above, embodiments of the present invention provide a novel approach for suggesting content items to supplement a user's search for information in an information space. An information space can be any body of information having individual items of content. An example of an information space is the World Wide Web (“WWW” or “Web”) comprising a system of interlinked hypertext documents accessed via the Internet. To provide content suggestions, embodiments of a suggestion engine can search a content repository (also referred to herein as a “data store”), and based on a variety of techniques discussed below, identify content items that are semantically related to each other. Importantly, the determination of semantic relatedness is based on actions that users have taken within the content repository to organize and associate content items together in folders. A simple example may facilitate further discussion. Referring now to FIG. 1, which illustrates an exemplary embodiment of a suggestion engine system 100 in accordance with the present disclosure, suppose User 1 has collected a set of documents A, B, and C, and associated them with a Folder F1, where Folder F1 resides within a content repository 110 provided by an embodiment of the invention. Folder F1 could be a private folder for use only by User 1 or it could be a public folder, the contents of which can be accessed by other users of the system. Suppose further that User 2 has collected a set of documents A, B, and D, and associated them with a Folder F2, where Folder F2 also resides within the content repository. 
Just like Folder F1 could be private or public, Folder F2 could also be a private folder for use only by User 2 or it could be a public folder, the contents of which can be accessed by other users of the system. Now assume User 3 conducts an Internet search and receives document A from a search engine 115. User 3 could then ask suggestion engine 105 for additional content that is semantically related to document A. Or, the suggestion engine 105 could be configured to independently suggest content that is semantically related to received document A without first receiving an explicit user request for that content (for example, suggestion engine 105 may have received a notification that User 3 has received document A or has associated document A with a folder). In either case, because both User 1 and User 2 have associated document A with document B by placing the two documents together in a folder (User 1 associated the two documents together in Folder F1; User 2 associated the same two documents together in Folder F2), the suggestion engine 105 may conclude that documents A and B are semantically related and therefore provide document B as a new content suggestion to User 3. Embodiments of the present invention are directed to systems and methods for providing suggestions in this fashion, using folder-like association criteria summarized in the example above, as well as more complex relational criteria described below. In the above example, documents A and B can be described as being “neighbors” of one another because at least one user has associated both documents with the same folder. For the same reason, documents A and B can be said to have “copresence” or be “copresent” with one another. Embodiments of the invention may derive significant meaning from copresence and the copresence count (i.e., the number of folders associated with a pair of content items). 
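The worked example above can be sketched in a few lines of code. This is a minimal illustration, not the patent's implementation: folders are assumed to be a plain mapping from folder name to its content items, and all function names are illustrative.

```python
from collections import defaultdict
from itertools import combinations

def copresence_counts(folders):
    """For every pair of content items, count how many folders
    contain both items (the 'copresence count')."""
    counts = defaultdict(int)
    for items in folders.values():
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1
    return counts

def suggest(item, folders):
    """Return items copresent with `item`, ordered by descending
    copresence count."""
    related = defaultdict(int)
    for (a, b), n in copresence_counts(folders).items():
        if a == item:
            related[b] += n
        elif b == item:
            related[a] += n
    return sorted(related, key=related.get, reverse=True)

# The example from the text: F1 = {A, B, C}, F2 = {A, B, D}.
folders = {"F1": ["A", "B", "C"], "F2": ["A", "B", "D"]}
```

Here documents A and B are copresent in two folders while every other pair shares at most one, so document B surfaces as the top suggestion for document A, matching the narrative above.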
A high count for a pair of content items indicates that many users believe the two content items belong to, or are useful content to have, with respect to the same subject area. It therefore stands to reason that a user who has only one of those two content items is likely to have an interest in the other content item, as well. This general principle can be extended and refined to capture more complex relationships and discovery patterns, such as “find the neighbors of my neighbors,” as well as many others. The copresence count is used by embodiments of the suggestion engine to compare and triage a group of copresent content items in order to prioritize them relative to each other. In other words, a copresence count can be viewed as one type of measure of the “strength” of the relationship between two content items. Content Repository Embodiments of the invention can provide content suggestions to a community of users based in part on the users' interactions with content items that are stored and managed in a content repository. FIG. 2 illustrates an exemplary embodiment of a content repository 200 in accordance with the present invention. A content repository is also shown as item 110 of FIG. 1. Conceptually, a content repository 200 is a set of logical containers capable of organizing content items. The content repository 200 may be structured logically as one or more folder hierarchies, where each folder may contain other folders as well as content items, thereby reflecting a nested tree structure. Other equivalent logical structures are also possible, including, for example, a file system directory structure, or a database that incorporates folder-like document storage features. A content repository can be implemented using various data structures, including any combination of trees, lists, graphs (cyclic or acyclic, hierarchical or non-hierarchical), databases, and/or other appropriate data structures known in the art. 
In at least one embodiment, the content repository 200 is configured to support a hierarchy of folders. The storage and access methods for a content repository 200 may be implemented using cloud-based techniques, and may further include distributed software and data access techniques where portions of the content repository (including mirror and backup copies) may be located on a plurality of computing systems, including servers. Some user-specific portions of a content repository (including, for example, user folders for organizing a user's own personal content items) may be implemented physically on a user's own client device, such as a hard disk drive or equivalent device, but the same user-specific portions may also be implemented remotely or virtually using network services known in the art, including cloud-based network services. Some embodiments may provide methods that enable a user to navigate through portions of a content repository 200, for example, portions of a content repository that correspond to a user's own folders. Such embodiments may further provide methods that permit a user to create, move, rename, delete, and edit folders, as well as the content items within them. Optionally, some embodiments may allow the same content item to appear within the content repository 200 in multiple folders. Some embodiments may place a limit on the number of folders that can reference the same item, while other embodiments may allow this number to be unbounded. As mentioned above, FIG. 2 illustrates an exemplary embodiment of a content repository 200 in accordance with the present invention. In this particular illustration, User 1 is shown to have created a set of folders within content repository 200 to hold exercise-related information. 
Under a folder named “exercise,” User 1 has created subfolders named “sports,” “yoga,” and “crossfit.” Under the sports folder, User 1 has created subfolders named “tennis” and “hockey.” Under the tennis folder, User 1 has created subfolders “federer,” “djokovic,” and “nadal.” User 1 has also associated two content items with the federer folder. One content item is named “rogerfederer.com.” The other content item is named “Roger Federer (@rogerfederer) | Twitter.” It should be understood that, for purposes of determining whether a content item is contained in a given folder, content items in subfolders of a parent folder can be considered to be contained in the parent folder for the purpose of generating suggestions. In the above example, the content item “rogerfederer.com” is in the federer folder, and therefore a suggestion engine can also consider “rogerfederer.com” to be in the tennis folder, the sports folder, and the exercise folder. FIG. 2 also shows a set of folders and content items created by another user indicated by the name “User 2.” The folders and content items associated with User 2 are not shown as having names, but one of ordinary skill in the art will understand that the folders and content items associated with either User 1 or User 2 can be arranged and named (or not named) in any manner supported by the content repository 200 and according to the needs and likes of the respective users. Semantic Relatedness of Content Based on User Actions Certain aspects of the semantic meaning of content items can be based on interpretations of behaviors and interactions users take to organize the content items within a content repository or data store. For example, content items that a user places together in the same folder in the content repository can be assumed to be related in terms of their semantic content. 
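The parent-folder containment rule from the FIG. 2 example can be sketched as a recursive walk over a folder tree. This is a minimal sketch assuming the hierarchy is stored as two plain dicts (folder-to-subfolders and folder-to-items); the names are illustrative, not the patent's data structures.

```python
def items_in(folder, subfolders, contents):
    """All content items contained in `folder`, counting items in any
    (transitively) nested subfolder as contained in the parent."""
    found = list(contents.get(folder, []))
    for child in subfolders.get(folder, []):
        found.extend(items_in(child, subfolders, contents))
    return found

# Partial folder tree from the FIG. 2 example.
subfolders = {
    "exercise": ["sports", "yoga", "crossfit"],
    "sports": ["tennis", "hockey"],
    "tennis": ["federer", "djokovic", "nadal"],
}
contents = {
    "federer": ["rogerfederer.com",
                "Roger Federer (@rogerfederer) | Twitter"],
}
```

With this rule, "rogerfederer.com" in the federer folder is also treated as contained in the tennis, sports, and exercise folders, as described above.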
By leveraging semantic meaning from the user interactions, embodiments of the invention can flexibly adapt and respond to evolving changes in user perceptions and understandings of their content without the need for extensive analysis of the content items themselves. That is, semantic similarities can be inferred from the relationships of content items to each other, based on actions that users have taken within the content repository 200 to organize and associate the content items with folders and similar content organizing structures. Such an approach is in stark contrast to conventional methods of organizing content items according to specific properties (usually predefined) of the content items. In a property-based approach, two content items might both be associated with a particular property (for example, using tags, categories, etc.), but it does not necessarily follow that one of the content items is a good suggestion for the other content item. For example, two content items named “rogerfederer.com” and “woodtennisrackets.com” might both be associated with the property “tennis,” but little can be derived about whether users interested in one might also be interested in the other. On the other hand, the semantic approach of the present invention identifies more meaningful relationships between the two content items. If, for example, many users associated the two content items with the same folder, then there is more confidence that one content item is a good suggestion for the other. Similarly, if no users have associated the two content items with the same folder, then there is less confidence that one is a good suggestion for the other. Providing Content to a Suggestion Engine In some embodiments, a search operation with a conventional search engine (for example, search engine 115 of FIG. 1) is not required in order to provide content to a suggestion engine as a basis for obtaining suggestions. Users can obtain content in many ways. 
For example, a user can navigate through a public portion of a content repository to discover and view content, which can be supplied to a suggestion engine for the purpose of obtaining suggestions. Thus, in such an embodiment, users are able to receive suggestions for each content item that they view as they navigate using a browser or other application used for viewing content. Users can also create or supply their own content to a suggestion engine. Such user-supplied content can be created from scratch, obtained from friends or colleagues, or acquired from any other source available to a user. In embodiments, users can interact with content repositories that are small or moderate in size, as well as large distributed repositories, including, for example, document repositories such as Lexis (www.lexisnexis.com), the Library of Congress (www.loc.gov), Wikipedia (www.wikipedia.org), the JAMA Network (www.jamanetwork.com), and the Institute of Electrical and Electronics Engineers (www.ieee.org). Alternative content sources can also include private sources available to individual users and groups of users, as well as user-created content. Basis Data Sets Available to a Suggestion Engine Embodiments of a suggestion engine provided by the present invention (such as suggestion engine 105 illustrated in FIG. 1) can operate on a variety of basis data sets corresponding to data objects, operands or information entities. Examples of such basis data sets include the following: Content items. As mentioned above, a content item (also referred to herein as “content,” or “item”) is a discrete digital information resource, such as a document or file that is accessible by a computer. Content items may include links or Uniform Resource Locators (“URLs”) that correspond to specific digital information resource(s). 
Content items may comprise, for example, web pages, images, videos, audio files, multimedia files, data files, documents, or other digital items that can be provided to a user via a browser or other type of content interface application or computer file management software. Content items may also include the corresponding web pages, images, videos, audio files, multimedia files, data files, documents, or other digital items themselves. The term “document” is intended to have the broadest meaning known in the art and should be understood to include documents of all kinds, such as PDF documents, word processing documents (for example, Microsoft Word documents), spreadsheets (for example, Microsoft Excel spreadsheets), presentation files (for example, Microsoft PowerPoint presentations), graphics files, source code files, executable files, databases, messages, configuration files, data files, and the like. Content items can be accessed, reviewed, modified, and saved by users of systems implemented by any of the embodiments. Folders. Folders are logical container objects in which users can place content items when they are saving, organizing, and categorizing them. Users can create folders and decide which items should go into which folders based on their individual beliefs about useful categorizations of the items. Because a content repository may be distributed across different computing systems, folders may be stored or cached locally on a user's own computing device, stored remotely or virtually using remote services over a network, such as cloud-based storage, and/or stored globally using a global organized content structure. 
A user's decision to store or associate a particular content item with a particular folder may be affected by recommendations offered by embodiments of the invention, based on semantic information about the content items themselves, semantic information derived from locations where the content items were found, and other factors discussed herein. Embodiments of the suggestion engine may also operate on additional information, such as metadata about the users and the content items, sources of the content items, histories of user activity with respect to the content items, user demographics, user groupings, and other information typically stored with documents to facilitate access, searching, and administration. As stated above, a content repository can be implemented using a variety of techniques and data structures known in the art. Since the content repository includes folders, the various implementations of the content repository also apply to the implementation of folders. The content repository may manage or control user access to folders as well as the content items within the folders. Folders may be private or public, shared or restricted, user-specific or group-specific, or any combination thereof. Although folders are defined as container objects and are often described as containing content items that are saved, placed, stored, put, or located in folders by users, the concept of “containment” is logical and abstract, and can be implemented in many different ways by persons skilled in the art of software engineering. For this reason, the disclosure may sometimes use phrases such as “saved in,” “associated with,” or “organized into” as equivalent ways of describing the concept of folder containment. Further, when a user saves a content item in a folder, he or she may not be saving the original content item, but rather a copy of the content item or a pointer or reference to the content item. 
For example, where the content item is a web page, the user may save a URL corresponding to the content item. Or where the content item is an image, the user may save a copy of the original image. For purposes of this description, both the original content item and the copy, pointer, or reference may be considered “the content item,” and each one is itself a content item. Similarly, if two or more users save a content item to their respective folders, and each of the content items is substantially similar to each of the other content items, each of the content items may be considered “the same content item.” Relationships Underlying Suggestions Embodiments of a suggestion engine may offer multiple approaches to generating suggestions, each of which provides users of the engine with alternatives for controlling the scope and types of suggestions. All of the approaches are based on determining formal relationships among the components of the basis data sets and entities that are at play, including the specific content items, folders, and users. In the context of describing embodiments of the invention, a formal relationship will be understood by one skilled in the art to be a property that associates an ordered tuple of elements with a truth value, which indicates whether the tuple of elements satisfies the property. In many embodiments, the tuple is a pair of elements, but in some embodiments, it may also be an n-tuple, where n is greater than 2, or the tuples may contain varying quantities of elements. 
For purposes of this disclosure, when elements A and B are related under relationship R, they are said to “satisfy the relationship R.” Alternatively, it is appropriate to say, “A is related to B under relationship R,” and one can “evaluate relationship R with respect to A and B in order to determine if R is satisfied.” Based on certain formal relationships discussed below, a suggestion engine can determine which entities satisfy the relationships either by pre-computing the relationships (i.e., finding answers before they are requested), or computing the relationships upon request. Either of these techniques can be applied by embodiments of a suggestion engine, depending on which workflow the engine is supporting. In the following sections, some exemplary methods are disclosed for finding entities that satisfy certain formal relationships. The exemplary methods operate on a data model that assumes (1) entities of interest (for example, content items) can be identified and enumerated; (2) the suggestion engine can examine their relevant properties; and (3) relationships among the entities can be discovered. For example, given a particular folder, including a folder at any arbitrary level in a hierarchy of folders, embodiments of a suggestion engine can determine which content items are included in or associated with that folder, optionally traversing a folder hierarchy or tree structure to access content items that may be associated with subfolders. Similarly, given a content item, embodiments of the suggestion engine may determine which folders are associated with a given content item and what other content items are contained or associated with those folders. Many different implementations are possible, and each may depend on various storage technologies and computing languages. 
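The data model assumed above (enumerate the items of a folder, and the folders of an item) can be sketched as a pair of inverted indexes kept in sync. This is a toy illustration under the stated assumptions, not the patent's storage design; the class and method names are invented for the sketch.

```python
from collections import defaultdict

class ContentRepository:
    """Toy repository keeping two indexes so that both
    'which items are in this folder?' and 'which folders contain
    this item?' are cheap lookups, as the data model assumes."""

    def __init__(self):
        self.folder_items = defaultdict(set)   # folder -> set of items
        self.item_folders = defaultdict(set)   # item -> set of folders

    def associate(self, folder, item):
        """Record that a user has placed `item` in `folder`."""
        self.folder_items[folder].add(item)
        self.item_folders[item].add(folder)

    def items_of(self, folder):
        return self.folder_items[folder]

    def folders_of(self, item):
        return self.item_folders[item]
```

Keeping both directions indexed is what makes relationship evaluation (whether pre-computed or computed on request) practical, since every relationship below reduces to set operations over these lookups.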
Furthermore, specific enhancements or optimizations to the data model of the content repository may provide advantages in memory consumption and/or speed while executing the suggestion generation methods. Relationships Among Folders Two folders that share specific content items are called “Specific Commonality Neighbors.” They are defined more rigorously as follows: two folders, F1 and F2, are specific commonality neighbors if they both contain a specific, non-empty set of content items {C1, C2, . . . Cm}. The notation for this relationship is SP, which is written as F1:SP:F2. Two folders that share a certain number of content items are called “Sufficient Commonality Neighbors.” They are defined more rigorously as follows: two folders, F1 and F2, are sufficient commonality neighbors if they both contain at least j common content items (j>0), where j is the “commonality count threshold.” The notation for this relationship is SU, and it is written as F1:SU:F2 in the general case, or F1:SU(j):F2 to specify j. Depending on the particular relationship discussed herein, the term “threshold” can correspond to an integer value, a percentage, a proportion, or any other limiting value. In the case of the commonality count threshold identified in the Sufficient Commonality Neighbor relationship, the threshold is an integer value. One skilled in the art will understand that the numerical representation and interpretation of the threshold will depend on the context in which it is used. Two folders that are both specific commonality neighbors and sufficient commonality neighbors are called “Hybrid Commonality Neighbors.” More precisely, two folders, F1 and F2, are “Hybrid Commonality Neighbors” if they both contain at least j common content items (j>0), where j is the “commonality count threshold” and in addition, both F1 and F2 contain a specific, non-empty set of content items {C1, C2, . . . Cm}. 
The notation for this relationship is H, and it is written as F1:H:F2 in the general case, or F1:H(j):F2 to specify j. A folder F2 is a “Sufficiently Specific Neighbor” of folder F1 if F2 contains at least j items in common among m specific content items {C1, C2, . . . Cm} contained by F1 (j<=m), where j is the “commonality count threshold.” The notation for this relationship is SS and it is written as F1:SS:F2 in the general case, or F1:SS(j):F2 to specify j. When j=m, relationship SS is the same as relationship SP. This relationship is not necessarily symmetrical. That is, although F1 may contain j out of m specific content items found in F2, F2 may not necessarily contain j out of m specific content items found in F1. A folder F2 is a “Proportionate Commonality Neighbor” of folder F1 if F2 contains at least (r*100)% of the same content items contained in F1. In other words, if the intersection of F1 and F2 contains at least (r*100)% of the content items contained in F1, then F2 is a proportionate commonality neighbor of F1. The variable r is the “commonality proportion threshold” (0<r<=1). The notation for this relationship is PC and it is written as F1:PC:F2 in the general case, or F1:PC(r):F2 to specify r. This relationship is not necessarily symmetrical. A folder F2 is a “Proportionate and Specific Commonality Neighbor” of folder F1 if F2 contains at least (r*100)% of the content items contained in F1 and in addition, both F1 and F2 contain a specific, non-empty set of content items {C1, C2, . . . Cm}. The variable r is the “commonality proportion threshold” (0<r<=1). The notation for this relationship is PSC. It is written as F1:PSC:F2 in the general case, and F1:PSC(r):F2 to specify r. Just like relationship PC, this relationship is not necessarily symmetrical. 
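The folder-to-folder relationships defined above reduce to set operations once a folder is modeled as a set of content items. The following is a minimal sketch under that assumption (the function names are illustrative shorthand for the patent's notation):

```python
def sp(f1, f2, required):
    """F1:SP:F2 - both folders contain the specific, non-empty item set."""
    req = set(required)
    return bool(req) and req <= f1 and req <= f2

def su(f1, f2, j):
    """F1:SU(j):F2 - at least j content items in common (j > 0)."""
    return len(f1 & f2) >= j

def hybrid(f1, f2, required, j):
    """F1:H(j):F2 - both the SP and SU conditions hold."""
    return sp(f1, f2, required) and su(f1, f2, j)

def pc(f1, f2, r):
    """F1:PC(r):F2 - F2 contains at least (r*100)% of F1's items.
    Note the asymmetry: the proportion is measured against F1."""
    return len(f1 & f2) >= r * len(f1)
```

The asymmetry of PC is visible directly in the code: the threshold is scaled by len(f1), so swapping the arguments can change the result when the folders differ in size.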
As mentioned above, given a particular folder F residing at any arbitrary level in a hierarchy of folders, embodiments of the invention can evaluate any of the folder-based relationships to determine which content items are included in or associated with folder F, as well as determine which content items are included in or associated with any subfolders of F. Relationships Among Content Items Two content items C1 and C2 are “Neighbors” if there exists at least one folder that contains both C1 and C2. The notation for this relationship is N, and it is written as C1:N:C2. Two content items C1 and C2 are “j-Neighbors” if there exist at least j folders in the content repository that contain both C1 and C2. The notation for this relationship is N(j), and it is written as C1:N(j):C2. The variable j is the “copresence threshold.” The Neighbor (N) relationship is a special case of j-Neighbor, where j=1. Content item C2 is a “Synonym” of C1 if C2 appears in at least (p*100)% of the folders in which C1 appears. The variable p is the “copresence ratio” of C2 relative to C1. The notation for this relationship is C1:SY:C2 in the general case, and C1:SY(p):C2 to specify p. This relationship is not necessarily symmetrical. Two content items C1 and C2 are “joint Synonyms” if F1 (the set of all folders that contain C1) and F2 (the set of all folders that contain C2) are such that the intersection of F1 and F2 contains (p*100)% of the folders in the union of F1 and F2 (0<p<=1.0). The variable p is the “joint copresence ratio.” The notation for this relationship is C1:JS:C2 in the general case and C1:JS(p):C2 to specify p. Other Relations The set of relationships described above is not exhaustive. A number of additional relationships can be employed by those skilled in the art, including relationships that result from a combination of those described above. For example, a new relationship can be defined by requiring that two particular relationships hold true for a pair of folders or content items. 
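The content-item relationships defined above (N, N(j), SY, JS) can likewise be sketched as predicates over an item-to-folders index. This is an illustrative sketch, assuming `item_folders` maps each content item to the set of folders containing it:

```python
def j_neighbors(c1, c2, item_folders, j=1):
    """C1:N(j):C2 - at least j folders contain both items.
    With j=1 this is the plain Neighbor relationship N."""
    return len(item_folders[c1] & item_folders[c2]) >= j

def synonym(c1, c2, item_folders, p):
    """C1:SY(p):C2 - C2 appears in at least (p*100)% of the folders
    in which C1 appears. Not necessarily symmetrical."""
    f1, f2 = item_folders[c1], item_folders[c2]
    return len(f1 & f2) >= p * len(f1)

def joint_synonyms(c1, c2, item_folders, p):
    """C1:JS(p):C2 - the intersection of the two folder sets covers
    (p*100)% of their union."""
    f1, f2 = item_folders[c1], item_folders[c2]
    return len(f1 & f2) >= p * len(f1 | f2)
```

Note that joint_synonyms is symmetric by construction (intersection over union), whereas synonym is measured against C1's folders only, mirroring the asymmetry called out in the text.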
The process of combining relationships to create new ones is a natural one for anyone skilled in the art of algorithm development. Other relationships include the following: Folder relationships based on independent content. The word “independent,” in this case, refers to the fact that a set of content items is selected first, and need not be a proper subset of either folder in a folder-to-folder relationship. A simple example of such a relationship is the following: A reference set of content items {C1, C2, . . . Cm} is designated. Then, a folder-to-folder neighbor relationship, “R(j),” is defined as follows: F1:R(j):F2 if F1 and F2 each contain at least j content items that are in {C1, C2, . . . Cm}. Folder relationships based on content item relationships. “Based on” refers to a situation when relationships among content items, such as those described earlier, must be known as a first step in establishing the folder-to-folder relationships. For example, the relationship “FN(j, m)” is defined between folders as follows: F1:FN(j, m):F2 if both F1 and F2 contain at least m pairs of the same content items {(C1, C2), (C3, C4), . . . (C2m-1, C2m)}, such that for each pair, the two content items in that pair are j-neighbors. For example, take j=100 and m=2. From the earlier definition of j-neighbors, C1:N(100):C2 means that C1 and C2 appear together in at least 100 folders. Similarly for C3:N(100):C4. If two folders, F1 and F2, both contain C1, C2, C3, and C4, then these folders are related under FN(100,2). The FN relationship places an emphasis on folders not only having common content items, but also requires that those common items appear together with a certain frequency outside the context of those folders. In colloquial terms, one might say that this relationship ensures that the combined presence of these items is not a “fluke” (i.e., a chance occurrence) that takes place only in the folders F1 and F2. 
A key aspect of this class of relationship is that it is drawing upon information that is exogenous to the folders themselves. Multi-Hop Neighbor Extension; Distance. For each neighbor relationship, R, defined above, one can define a multi-hop version of the relationship, Rm, defined for m>1 as follows: Two entities (for example, content items, or folders), X(0) and X(m), are related by Rm, if there exists at least one set of entities in the content repository {X(1), . . . , X(m−1)} such that X(j):R:X(j+1) for all j (0<=j<m). In other words, although two entities are not related as direct neighbors, they can be “indirectly” related by traversing a series of consecutive directly related neighbors. The ordered tuple of entities connecting the two related entities (including the end points) is called the “path” between the related entities. By applying the multi-hop concept to the Sufficient Commonality Neighbor relationship with the number of hops m=2, a new relationship can be defined, called “SU2”, which states that for two folders F1 and F2, F1:SU2:F2 if there exists at least one folder Fx such that F1:SU:Fx and Fx:SU:F2. The path between F1 and F2 is the triplet (F1, Fx, F2). As a second example, one can apply the multi-hop concept to the j-Neighbor relationship among content items, using m=3, and j=100. The statement C1:N(100)3:C2 means that there exist at least two content items, Cx and Cy, such that: (a) C1 belongs to at least 100 folders to which Cx also belongs; (b) Cx belongs to at least 100 folders to which Cy also belongs; and (c) Cy belongs to at least 100 folders to which C2 also belongs. Note that for certain relationships, it is not meaningful to define a multi-hop version extension of the relationship. For example, it is not useful to define SPm, as all folders in the path would also be immediate neighbors, since by definition, they must all contain the same specific set of content items. 
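The multi-hop extension above can be sketched as a bounded frontier expansion: starting from X(0), apply the base relation m times and check whether X(m) is reached. This is a minimal sketch; the base relation is passed in as an arbitrary predicate, and the demo data are invented for illustration.

```python
def multi_hop(x0, xm, entities, related, m):
    """X(0):R^m:X(m) - True when a chain of m consecutive hops of the
    base relation `related` connects x0 to xm, per the definition above."""
    frontier = {x0}
    for _ in range(m):
        frontier = {y for x in frontier for y in entities if related(x, y)}
    return xm in frontier

# Tiny demo: F1 and F3 are not direct neighbors, but are related in
# two hops through Fx = F2 (mirroring the SU2 example in the text).
edges = {("F1", "F2"), ("F2", "F1"), ("F2", "F3"), ("F3", "F2")}
entities = ["F1", "F2", "F3"]
related = lambda x, y: (x, y) in edges
```

Because the same frontier expansion visits immediate neighbors on the first hop, a shortest-path search over this relation yields the hop-count distance between entities.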
The “distance” between two entities under relationship R is defined to be the number of hops in the shortest path between those two entities using relationship R. Immediate neighbors have a distance of 1 between them. In some of the relationships described above, it may be necessary to determine whether two different folders contain a given content item Ci, or to determine whether one content item C1 and another content item C2 are sufficiently similar to be considered identical for purposes of satisfying the relationship criteria. In these circumstances, an identical match is not necessarily required. It may be sufficient, for example, to require two content items C1 and C2 to be only substantially similar. The criteria to establish substantial similarity can depend on a variety of factors including the type of content involved. For example, content corresponding to two URLs can be assumed to be substantially similar if the URLs themselves are identical. Content corresponding to two URLs can also be considered substantially similar if they point to equivalent content through different naming conventions or computing platforms (for example, mobile vs. desktop). As another example, two content items can be considered substantially similar if they share a high cosine similarity. As yet another example, two content items can be considered substantially similar if a selected percentage (for example, 95%) of the text within the two content items is identical, or the differences between the two content items are negligible. Negligible differences may include, without limitation, differences in metadata and/or timestamp information, advertising differences, header/footer differences, banner differences, and/or differences with respect to user comments. Other methods of determining substantial similarity of content are possible and within the scope of the present invention. 
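Two of the substantial-similarity criteria above (URL equivalence across naming conventions, and a text-identity percentage) can be sketched as follows. This is an illustrative sketch only: the normalization rules (stripping "www."/"m." host prefixes and trailing slashes) are assumptions standing in for whatever platform conventions an implementation would handle, not rules from the patent.

```python
import difflib
from urllib.parse import urlparse

def urls_equivalent(u1, u2):
    """Treat two URLs as pointing at equivalent content when they
    differ only by scheme, a leading 'www.' or 'm.' host prefix
    (e.g., mobile vs. desktop naming), or a trailing slash."""
    def norm(u):
        p = urlparse(u)
        host = p.netloc.lower()
        for prefix in ("www.", "m."):
            if host.startswith(prefix):
                host = host[len(prefix):]
        return host, p.path.rstrip("/")
    return norm(u1) == norm(u2)

def substantially_similar(text1, text2, threshold=0.95):
    """Consider two text items substantially similar when their
    similarity ratio meets the threshold (95% in the example above)."""
    return difflib.SequenceMatcher(None, text1, text2).ratio() >= threshold
```

A production system might instead strip the negligible differences named above (timestamps, advertising, headers/footers) before comparing, or use a cosine similarity over term vectors as the text also suggests.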
Suggestion Engine Methods

With various neighbor relationships defined and a notion of distance between entities (either folders or content items) provided, operations provided by embodiments of a suggestion engine can now be described in terms of the basis data sets and the relationships that are used to locate potential content items of interest. In general, this section describes how to generate a "pool" of content items that are likely to be relevant suggestions. A series of methods for generating suggestions from basis data sets are explained, and variations of those methods that utilize additional input parameters are discussed. The methods in following sections refer to the concept of "adding items to the pool" of suggestions. Many of the methods described herein may add the same item to the pool multiple times. From an algorithmic perspective, the multiple additions may be relevant to the results that are produced. However, it may be useful, especially for efficiency purposes, to place each content item in the pool only once. When a method would add the same item to the pool again, rather than introduce a redundant item, the method can increase a counter associated with that item to reflect the frequency with which it appears in the pool. This is an implementation choice that does not affect the functionality of the methods.

Methods for a Specific Content Item

FIG. 3 illustrates an exemplary embodiment of a general method for providing suggested content items. At Step 310, the method of FIG. 3 begins with a content repository (for example, the content repository 110 shown in FIG. 1) receiving an indication that a specific user, in this case User1, has associated a particular content item, Content Item A, with a particular folder, Folder A. Based on this indication, at Step 320 the content repository will mark Content Item A as being associated with Folder A.
As explained elsewhere, the marking of Content Item A as being associated with Folder A may be accomplished in a variety of ways using techniques known in the art, based on the selected implementation of the content repository in general, and the selected implementation of folders in particular. Steps 310 and 320 are envisioned to be performed any number of times, as users organize content items into folders that are useful to them. At Step 330, a suggestion engine (for example, suggestion engine 105 shown in FIG. 1) may receive an indication that User2 has requested suggestions relating to Content Item A. This indication may be explicit, based, for example, on User2 clicking a request button; it may be implicit, based, for example, on User2 placing a copy of Content Item A in a folder in the content repository; it may be triggered, based, for example, on an event occurring within the suggestion engine or the content repository or on User2's computer; or it may be independent of any triggering event and instead based on algorithms within the suggestion engine that automatically provide suggestions relating, for example, to new content items deposited into the content repository. In response to a user request for suggestions, to a triggering event, or to an automated suggestion-generating process, the suggestion engine may then, at Step 340, select one or more relationships between Content Item A and other content items in the content repository, in order to identify potential content for suggestion to User2. The specific set of relationships can be user-selected.
Alternatively, they can be determined by the suggestion engine based on a variety of factors, including user preferences, the preferences of other users, the characteristics (for example, properties) of Content Item A itself, the characteristics of the relationships (for example, relationships that have previously yielded many suggestions for Content Item A, have previously yielded high quality suggestions for Content Item A, i.e., suggestions that have been viewed and/or saved by users, or are computationally more efficient to evaluate with respect to Content Item A), as well as the characteristics of the content repository (for example, the size of the repository, the number and size of folders within the content repository, the quantity and quality of suggestions previously provided for Content Item A, and other factors). The specific set of relationships can comprise, for example, any of the relationships described herein that are appropriate for Content Item A, and the relationships may be evaluated in any order. At Step 350, each of the relationships selected in Step 340 is evaluated in order to identify potential content suggestions. Note that the content repository software may pre-compute at least a portion of the evaluations of some relationships. For example, whenever users store new content items into the content repository, the content repository software may immediately determine the extent to which the new content items are related to other existing content items under one or more relationships. In such a case, embodiments of the invention may simply access the results of the pre-computed evaluation(s). Alternatively, embodiments may complete any remaining computations required of the evaluation(s) and then access the results. The output of Step 350 is a set or pool of potential suggested content items that have satisfied at least one of the relationships selected in Step 340.
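The relationship-evaluation and pooling steps, together with the counting-pool implementation choice mentioned earlier, can be sketched as follows. The repository layout and the N(j) evaluator are illustrative assumptions, not the required implementation.

```python
from collections import Counter

# Hypothetical repository: folder -> set of content item ids.
FOLDERS = {
    "F1": {"A", "B"},
    "F2": {"A", "B", "C"},
    "F3": {"A", "C"},
}

def n_j(item, folders, j):
    """Relationship N(j): items co-present with `item` in at least j folders."""
    counts = Counter()
    for members in folders.values():
        if item in members:
            counts.update(members - {item})
    return {c for c, n in counts.items() if n >= j}

def build_pool(item, folders, relationships):
    """Evaluate each selected relationship and pool the results, counting
    repeat discoveries instead of storing duplicate entries."""
    pool = Counter()
    for evaluate in relationships:
        pool.update(evaluate(item, folders))
    return pool

pool = build_pool("A", FOLDERS, [lambda i, f: n_j(i, f, 1),
                                 lambda i, f: n_j(i, f, 2)])
```

Items rediscovered by several relationships accumulate higher counts, which a later selection step could use as a relevance signal.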
From the pool of suggested content items produced by evaluating the selected relationships in Step 350, a number of content items may be selected and provided to User2 in Step 360.

FIG. 4 illustrates an exemplary embodiment of a method for locating content items that are semantically related to a single content item. In general, each of the following methods begins with Step 410, in which a suggestion engine (for example, the suggestion engine 105 shown in FIG. 1) receives an indication of a single content item of interest. Then, in accordance with a selected relationship, the suggestion engine receives at Step 420 an indication of a value for any parameter(s) that may be required to evaluate the selected relationship. For example, if the relationship "N(j)" is being evaluated, the suggestion engine may receive at Step 420 an indication of a value for the parameter "j," corresponding to the copresence threshold. Using the selected relationship and the appropriate parameter value(s) supplied in Step 420, the suggestion engine may then undertake Step 430 to locate at least some content items that are semantically related to the content item of interest by evaluating the selected relationship. At Step 440, the content items discovered in Step 430 are added to the pool of possible suggestions. Each of the following suggestion generation methods applies to a single, specific content item of interest. Each of these single-content-item methods follows the same general series of steps shown in FIG. 4.

Method 1.1: use relationship "N," as defined above.
a) A content item of interest is chosen.
b) At least some of the item's neighbors, using relationship N, are located. Note that these neighbors are content items, not folders.
c) These neighboring items are added to the pool for possible presentation to a user.

Method 1.2: use relationship "N(j)," as defined above.
a) A content item of interest is chosen.
b) A user specifies the value of an additional parameter: copresence threshold, j.
c) At least some of the item's neighbors, using relationship N(j), are located. Note that these neighbors are content items, not folders.
d) These items are added to the pool for possible presentation to the user.

Method 1.3: use relationship "SY(p)," as defined above.
a) A content item of interest is chosen.
b) A user specifies the value of an additional parameter: copresence ratio p.
c) At least some of the item's synonyms, using relationship SY(p), are located. Note that these synonyms are content items, not folders.
d) These items are added to the pool for possible presentation to the user.

Method 1.4: use relationship "JS(p)," as defined above.
a) A content item of interest is chosen.
b) A user specifies the value of an additional parameter: copresence ratio p.
c) At least some of the item's joint synonyms, using relationship JS(p), are located. Note that these joint synonyms are content items, not folders.
d) These items are added to the pool for possible presentation to the user.

In embodiments, each of the single-content-item methods above can be repeated for sets of content items (for example, all of the content items associated with a folder). In such embodiments, the resulting content items of each iteration of a method are combined (for example, by determining the union), and the combined content items are added to the pool for possible presentation to the user.

Methods for a Set of Content Items

In contrast to FIG. 4, which focused on finding suggestions relating to a single specific content item, the method in FIG. 5 illustrates an exemplary embodiment of a method for locating content items that are semantically related to a set of content items.
As in FIG. 4, the method of FIG. 5 begins at Step 510 when a suggestion engine receives an indication of a set of content items as a basis for generating content suggestions. The set of content items can be associated with a single folder or a combination of different folders. Then, in accordance with a selected relationship, the suggestion engine receives at Step 520 an indication of a value for any parameter(s) that may be required to evaluate the selected relationship. For example, if the relationship "H" is being evaluated, the suggestion engine may receive at Step 520 an indication of a value for the parameter "j," corresponding to the commonality count threshold. Using the selected relationship and the appropriate parameter value(s) supplied in Step 520, the suggestion engine may then undertake Step 530 to locate folders that are semantically related to the set of content items of interest by evaluating the selected relationship. At Step 540, the content items associated with the folders discovered in Step 530 are added to the pool of possible suggestions. Each of the following suggestion generation methods applies to a specific set of content items.
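Before the individual set-based methods are listed, the FIG. 5 flow can be sketched end to end. The repository layout is hypothetical, and the neighbor test here is an SS-style reading ("shares at least j items with the basis set"); the actual SS definition appears earlier in the document.

```python
# Hypothetical repository: folder -> set of content item ids.
FOLDERS = {
    "F1": {"C1", "C2"},
    "F2": {"C2", "C3", "C4"},
    "F3": {"C5"},
}

def set_based_suggestions(basis, folders, j):
    """FIG. 5 in miniature: find folders sharing at least j items with the
    basis set, then pool those folders' other items as candidate suggestions."""
    neighbors = [f for f, members in folders.items() if len(members & basis) >= j]
    pool = set()
    for f in neighbors:
        pool |= folders[f] - basis  # items other than the original basis set
    return pool
```

Raising j narrows the neighbor folders and therefore the pool, which is the role the commonality count threshold plays in Methods 2.2 and 2.3 below.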
These set-based suggestion methods follow the same general series of steps shown in FIG. 5.

Method 2.1: use relationship "SP," as defined above.
a) A set of content items of interest is chosen.
b) At least some neighbor folders are located using relationship SP, based on the set of content items.
c) The items (other than the original set of content items) belonging to the folders obtained in the previous step are added to the pool for possible presentation to the user.

Method 2.2: Use relationship "H," as defined above.
a) A set of content items of interest is chosen.
b) The value of an additional parameter, commonality count threshold j, is supplied.
c) At least some neighbor folders are located using relationship H, based on the set of content items and the threshold value j.
d) The items (other than the original set of content items) belonging to the folders obtained in the previous step are added to the pool for possible presentation to the user.

Method 2.3: Use relationship "SS," as defined above.
a) A set of content items of interest is chosen.
b) The value of an additional parameter, commonality count threshold j, is supplied.
c) At least some neighbor folders are located using relationship SS, based on the set of content items and the threshold value j. Note that, unlike Method 2.2, described above, Method 2.3 uses j as a threshold among the set of content items, and not among all the items in the folder.
d) The items (other than the original set of content items) belonging to the folders obtained in the previous step are added to the pool for possible presentation to the user.

Method 2.4: Use relationship "PSC," as defined above.
a) A set of content items of interest is chosen.
b) The value of an additional parameter, commonality proportion threshold r, is supplied.
c) At least some neighbor folders are located using relationship PSC, based on the set of content items and the threshold value r.
d) The items (other than the original set of content items) belonging to the folders obtained in the previous step are added to the pool for possible presentation to the user.

Methods for a Single Folder

FIG. 6 illustrates an exemplary embodiment of a method for locating content items that are semantically related to a folder. The method of FIG. 6 begins at Step 610 when a suggestion engine receives an indication of a folder of interest as a basis for generating content suggestions. In accordance with a selected relationship, the suggestion engine receives at Step 620 an indication of a value for any parameter(s) that may be required to evaluate the selected relationship. For example, if the relationship "SU" is being evaluated, the suggestion engine may receive at Step 620 an indication of a value for the parameter "j," corresponding to the commonality count threshold. Using the selected relationship and the appropriate parameter value(s) supplied in Step 620, the suggestion engine may then undertake Step 630 to locate folders containing content items that are semantically related to content items in the folder of interest by evaluating the selected relationship. At Step 640, the content items discovered in Step 630 are added to the pool of possible suggestions.
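The FIG. 6 flow, instantiated with the SU relationship used in Method 3.1 below, can be sketched as follows. The repository and the reading of SU as "shares at least j content items with the chosen folder" are assumptions of this sketch.

```python
# Hypothetical repository: folder -> set of content item ids.
FOLDERS = {
    "F1": {"C1", "C2", "C3"},
    "F2": {"C2", "C3", "C4"},
    "F3": {"C3", "C5"},
}

def folder_based_suggestions(folder, folders, j):
    """FIG. 6 in miniature: locate SU-neighbors of the chosen folder (folders
    sharing at least j content items with it) and pool their other items."""
    base = folders[folder]
    neighbors = {g for g, members in folders.items()
                 if g != folder and len(base & members) >= j}
    pool = set()
    for g in neighbors:
        pool |= folders[g] - base  # add the discovered items to the pool
    return pool
```

With j=2 only F2 qualifies as a neighbor of F1; with j=1 the looser threshold also admits F3 and enlarges the pool.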
Each of the following suggestion generation methods applies to a single folder as a basis for generating content suggestions. These folder-based suggestion methods follow the same general series of steps shown in FIG. 6.

Method 3.1: use relationship "SU," as defined above.
a) A folder is chosen.
b) The value of an additional parameter, commonality count threshold j, is supplied.
c) The chosen folder's neighbors are located using relationship SU and the threshold value j.
d) At least some of the items belonging to the folders obtained in the previous step are added to the pool for possible presentation to the user.

Method 3.2: Use relationship "PC," as defined above.
a) A folder is chosen.
b) The value of an additional parameter, commonality proportion threshold r, is supplied.
c) The chosen folder's neighbors are located using relationship PC and the threshold value r.
d) At least some of the items belonging to the folders obtained in the previous step are added to the pool for possible presentation to the user.

In the same or alternative embodiments, the suggestion generation methods above may use a "virtual folder" as a basis for generating content suggestions. A virtual folder is a temporary folder that is associated with a plurality of content items collated from a plurality of other folders. A user may, for example, create a virtual folder in an ad hoc manner by selecting two or more content items from one or more folders, by selecting two or more folders, or by selecting a combination of content items and folders in the content repository. Users or embodiments of the invention may also create virtual folders from non-folder collections of content items (for example, from the results of a web search or a search of the content repository). For purposes of evaluating any of the relationships discussed herein, a virtual folder may be treated the same as an ordinary folder.
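A virtual folder can be sketched as nothing more than a freshly collated item set that is then handed to the same neighbor tests as an ordinary folder. The helper names and the SU-style test are illustrative.

```python
# Hypothetical repository: folder -> set of content item ids.
FOLDERS = {
    "F1": {"C1", "C2"},
    "F2": {"C3", "C4"},
    "F3": {"C2", "C3", "C5"},
}

def make_virtual_folder(folders, item_selection=(), folder_selection=()):
    """Collate an ad hoc virtual folder from chosen items and whole folders."""
    members = set(item_selection)
    for f in folder_selection:
        members |= folders[f]
    return members

def su_neighbors_of_set(members, folders, j):
    # The virtual folder is treated like an ordinary folder: find repository
    # folders sharing at least j items with it (an SU-style test).
    return {g for g, m in folders.items() if len(members & m) >= j}

virtual = make_virtual_folder(FOLDERS, item_selection={"C1"}, folder_selection=["F2"])
```

Because the virtual folder is just a set, every relationship defined over folder contents applies to it unchanged.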
Methods for a User

In addition to suggestion methods that operate on a single content item, a set of content items, and/or a folder, these same methods can be adapted, alone or in combination, to generate suggestions for a user, without first specifying or requiring a particular content item, set of content items, or folder containing content items. Any combination of the user's content can be identified and/or selected for use as a basis to generate suggested content. The combination of user content to be used as a basis data set can be selected by the user, by a suggestion engine based on user preferences, or by a suggestion engine based on a selected subset of the user's content items or the user's folders (for example, the folders that contain the most frequently or recently accessed folders and/or content items). Once the combination of user content is identified, any of the applicable methods discussed above for selecting and evaluating relationships to discover content suggestions can be employed.

Methods Based on Multi-Hop Neighbor Relations

As mentioned above, the concept of multi-hop neighbor relationships is derived from the other defined neighbor relationships. To generate multi-hop suggestions, all of the suggestion generation methods described above, with the exception of methods 2.1 and 2.3, can be implemented in the exact same manner as explained above, by replacing the relationship at the core of the method with its multi-hop counterpart. The multi-hop variants of the methods are capable of producing a broader set of results than the equivalent single-hop versions. In other words, the set of content items added to the pool using a multi-hop relationship can be a superset of the content items that would be added by an equivalent single-hop version of the relationship. This need not always be the case, however. Some multi-hop methods can elect not to add some content items discovered at one or more hops.
For example, the content items (or folders) discovered at the first hop can be used merely to facilitate discovery of content items from only the second hop relationship. Multi-hop variants can be used to:
(a) Expand a set of results when the user requests additional suggested content items. In such a case, the method does not necessarily conclude when initial results are returned to the user. Instead, the results for a certain number of hops are gathered and returned to the user. The execution of the method may be paused, and its state is preserved such that it can resume when desired. If and when the user exhausts the suggestions provided so far, and the user requests more, the method's execution can be resumed.
(b) Expand the set of results until a goal is met (for example, a certain number of content items is obtained).
(c) Reflect a specific choice by a user who is selecting the hop count, either directly or indirectly, via one or more parameters designed to modulate the breadth and variety of the suggestions. For example, a user can select a hop count to include not only neighboring folders in a hierarchy, but also sibling folders, etc.

Adaptive Multi-Hop Methods of Generating Suggestions

In case (c) above, a multi-hop variant may rapidly expand to generate a very large number of suggestions, as well as suggestions that may start to become less relevant as the hop count increases. Adaptive variants of each multi-hop method can be implemented to control the expansion of the neighbor space and help the suggestion engine's search converge. The general concept of the adaptive variants is to "make it progressively harder" for the method to traverse subsequent hops. Adaptive multi-hop approaches are particularly applicable to methods that have threshold parameters. In such cases, the threshold parameters can be made more stringent as additional hops are traversed in the search.
As one example of a multi-hop adaptive strategy, any suggestions obtained from the methods discussed above can be constrained by requiring the copresence count of the suggestion with respect to a particular content item of interest (i.e., the number of times the possible suggestion is in the same folder as the content item of interest) to be above a certain value. As another example of a multi-hop strategy, Method 3.2 above, which has a threshold parameter, r, may be applied to folder F to generate suggestions. Suppose that the value of r is calibrated (either directly or indirectly by user input, set as a default, or set by an algorithm that computes a recommended value) to an initial value of 0.25. This initial value is used for the first hop traversed by the method. A non-adaptive version of Method 3.2 simply continues to use the same value of r for each of the successive hops. Suppose that the first hop yields N folders that are neighbors of F by relationship PC. Then, on the second hop, the method searches for neighbors of each of those N folders. Suppose further that on each hop, an average of N new folders is found for each of the folders added on the previous hop. The total number of folders is N^k (N to the k-th power), where k is the number of hops. This number can grow large quickly in a large information space, even for reasonably small values of r, since N can itself frequently be a large number, such as 100 or 1000. In contrast, an adaptive variant of Method 3.2 may reduce the number of folders added at each hop by increasing the value of r that is applied as the number of hops increases. Thus, for example, the first hop might use r=0.25, the second hop r=0.30, the third hop r=0.4, and the fourth hop r=0.55. As r increases, the average number of new neighbors found for each folder may decrease.
The method can be stopped when a variety of different conditions are met, including:
1) the number of content items added in the latest iteration is less than x % of the total content items accumulated by the method so far, where the threshold, x %, is a parameter of the algorithm, or a constant built into the algorithm;
2) the number of content items added in the latest iteration is less than a certain threshold;
3) the number of content items added in the latest iteration is less than x % of the content items added in the previous iteration, where the threshold, x %, is a parameter of the algorithm, or a constant built into the algorithm; and
4) the number of total content items accumulated so far has reached a pre-specified limit.
Additional stopping conditions for the method can easily be imagined based on these examples. Another variation of adaptive multi-hop methods available to embodiments of the suggestion engine involves modulating parameters that influence the number of next hop neighbors at each hop traversed by the search, but doing so as a function of the results obtained in previous hops of the algorithm's execution. For example, if the search produces a large number of new neighbors when a particular hop is traversed, then on the next hop, thresholds can be commensurately tuned to reduce the number of new neighbors that are likely to be obtained. Many different mathematical formulas can use the quantity of results so far (or just in the immediately preceding iteration, for example) as an input in order to tune the search parameters for the next hop, which in turn may increase or decrease the quantity of candidate suggestions that are obtained.
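An adaptive variant of Method 3.2 along these lines might look like the following sketch, which tightens the PC threshold r on each hop according to a schedule and applies two of the stopping conditions above (no new folders in the latest hop; accumulated pool reaches a pre-specified limit). The repository layout, helper names, and the reading of PC are hypothetical.

```python
# Hypothetical repository: folder -> set of content item ids.
FOLDERS = {
    "F1": {"C1", "C2"},
    "F2": {"C2", "C3"},
    "F3": {"C3", "C4"},
    "F4": {"C4", "C5"},
}

def pc_neighbors(f, folders, r):
    """Folders whose overlap covers at least proportion r of folder f's items."""
    base = folders[f]
    return {g for g, m in folders.items()
            if g != f and base and len(base & m) / len(base) >= r}

def adaptive_multi_hop(f, folders, schedule, max_items=100):
    """One hop per threshold in `schedule` (e.g. [0.25, 0.30, 0.4, 0.55])."""
    frontier, seen, pool = {f}, {f}, set()
    for r in schedule:
        frontier = {g for x in frontier for g in pc_neighbors(x, folders, r)} - seen
        if not frontier:               # stopping condition: nothing new this hop
            break
        seen |= frontier
        for g in frontier:
            pool |= folders[g]
        if len(pool) >= max_items:     # stopping condition: accumulation limit
            break
    return pool - folders[f]
```

In the toy data, a schedule ending in a stricter threshold prunes the third hop that a flat schedule would have traversed.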
Note that in all of the adaptive methods described herein, the adaptations may be applied either: (a) independently along each multi-hop path that the method generates, taking into account properties of the path developed up until that point; or (b) uniformly across all the paths the method is generating, taking into account properties of the collective set of paths generated up until that point.

Changing Relationships Along the Path

All of the methods discussed so far, whether single-hop or multi-hop, make use of a single relationship to discover neighbors for content items or folders. However, another variation of multi-hop methods involves altering the relationship that is used at one or more hops along the generated paths. In the simplest case, a pre-programmed sequence of relationships can be applied to a fixed sequence of hops. For example, a method could be fixed at two hops, and could evaluate, in order: (a) relationship SS on the first hop; and (b) relationship PC on the second hop. An example of this two-hop method could behave as follows:
a) Starting with an initial folder, F1, and three content items {C1, C2, C3}, the first hop traversal could lead to folders that contain at least 2 of the three content items.
b) Then, for each folder, Fi, obtained via the first hop, the second hop traversal could use relationship PC(0.2), for example, to locate folders Fj where the intersection of Fi and Fj contains at least 20% of the content items contained in Fi.
In other cases, the sequence of relationships can be determined dynamically based on factors such as user selection or preference, random variation, the number of suggestions generated thus far by other methods, and other factors known in the art. When selecting relationships to be evaluated at each hop of a multi-hop sequence, embodiments of the invention may first select a relationship from one entity class and then select a relationship from another entity class.
For instance, the first hop could employ a folder-to-folder relationship. Then the content items issuing from that step could be used as inputs to an item-to-item relationship in the second hop.

Suggestion Constraints

In certain circumstances, users of embodiments of a suggestion engine described herein may wish to exercise additional control over the way in which suggested content items are selected. A number of constraints can be specified to enhance the accuracy of the selection process. Such constraint parameters refer to desirable, or conversely, undesirable, properties of candidate content items. In general, any property of the content items in the information space can be used for the purpose of specifying constraints. Any suggestion generation method, such as those described in preceding sections of this document, can be combined with constraints. A simple way to apply the constraints is to run the method in its normal fashion, and prior to adding a content item to the pool of suggestions, test the item against the constraint in order to make a final decision about whether it should be added. Alternatively, a method can be run to generate all of its suggestions as it normally would, and then the pool of suggestions can be filtered based on the specified constraints. For example, a constraint can generally be specified by:
(a) identifying one or more properties of interest that belong to some or all content items;
(b) stating which criteria are to be used to test the one or more properties; and
(c) stating how the test result should be interpreted by the suggestion engine (for example, reject or accept the item).
Constraints may be selected and/or invoked by individual users, or they may be built into one or more of the various algorithms employed by embodiments of a suggestion engine to generate content suggestions.
In the latter case, users may exhibit some control over the constraints through preferences and/or controls available to the user via a user interface (for example, the Suggestion Assistant described further below). Properties are generally one of two types: independent or contextual. Independent properties are those that pertain to characteristics of the content item itself, while contextual properties are those that pertain to characteristics of the content item with respect to one or more other content items and/or folders. An exemplary independent property is the type of the content item such as, for example, whether the content item is a document, a web page, an image, a video, etc. An exemplary contextual property, on the other hand, is a suggestion acceptance count, i.e., a count of the number of times that any user saved the content item after it was offered as a suggestion with respect to another content item or folder. Suggestions may be constrained by both independent and contextual properties in a variety of ways depending on the types of properties. For example, properties may be tested or evaluated against keywords, expressions, integer values, percentages, and changes in values over time (i.e., trends). Two or more properties may also be evaluated together for more complex constraints. For example, a suggestion acceptance count may be combined with a date-time stamp to include only those suggested content items that were saved by a certain number of users, and also saved at least once in a time period deemed to be sufficiently recent. The following are some examples of constraints:

Keyword or expression presence. To satisfy a keyword or expression constraint, a suggested content item must contain a specified keyword, a set of keywords, a specific phrase, or a text string, such as a regular expression.
All of these are standard criteria used by search engines to test content for relevance, and this type of constraint specification and application is well understood. In embodiments, a keyword or expression presence can be required of a particular sub-part of a content item, such as a page title, a synopsis, any type of tag, or the main body of the content item. Alternatively, the requirement may apply to an entire content item and/or all of its parts (i.e., any part could satisfy the constraint), or any combination of its parts.

Date-time stamp. To satisfy a date-time stamp constraint, a suggested content item's date of creation must be more recent (or conversely, older) than a certain date-time stamp. Assuming at least some items in the information space have date-time stamps indicating when they were created, the constraint allows users to filter out items that are too old (or conversely, too recent). The same type of constraint can be applied to other date-time stamps, such as: "last update time or modification time"—the time when the item was most recently changed; "first save time"—the time when the item was first added to the information space; "last save time"—the time when the item was last saved by a user; and in general, any date-time stamp that describes a useful aspect of the content item's history.

Quality rating. A quality rating constraint may refer to an independent or contextual quality-related property. In the independent sense, the quality of a content item may refer to its general quality or popularity. For example, a content item may be associated with a corresponding user-rating (such as a numerical score or star rating), indicating how much it is liked by users who have viewed and rated the content item. In the contextual sense, the quality of a content item may refer to how well the content item has been received as a suggestion for another content item.
For example, if a content item has been saved by 90% of users who have viewed the content item as a suggestion for another particular item, it may be considered a high quality suggestion for that particular item. In either the independent or contextual cases, the quality rating constraint can be satisfied if a suggested content item has a quality rating that exceeds a specified threshold. Ratings from multiple users can be aggregated to create an overall quality rating. A user who is receiving suggestions may, for example, specify a quality constraint of 4 out of 5 stars, meaning that only content items with 4 stars or more will be delivered as suggestions.

View history. To satisfy a view history constraint, a suggested content item must not have been seen by a user (for example, viewed by the user using the normal browsing application used for this purpose) within some specified period of time prior to the suggestion request. Alternatively the constraint may require the opposite, meaning that the user must have viewed the content item during a specified period of time, such as the previous 30 minutes.

As mentioned above, any property of a content item may be used for constraint purposes. For purposes of illustration only, some additional examples of constraints are provided below, and one of ordinary skill in the art will recognize that these constraints may correspond to independent properties, contextual properties, or both.

Visited count—a number of times users have visited/viewed a content item.

Save count—a number of times users have associated a content item with a folder, or more simply put, the number of folders associated with a content item.

Saved suggestion count—a number of times users have saved a content item after it was offered as a suggestion.

Suggestion acceptance count—a number of times users have saved a content item after it was offered as a suggestion with respect to a particular content item, set of content items, or folder.
Suggestion acceptance ratio—a ratio of the suggestion acceptance count for a content item to the number of times the content item was offered to users as a suggestion. Blacklisted count—a number of times users have blacklisted (i.e., indicated that they do not want to see the content item as a suggestion in the future, and/or that they do not want the item displayed in search results in the future) a content item, thereby indicating that the content item is irrelevant or uninteresting. Blacklisted relationship count—a number of times users have blacklisted a content item after it was offered as a suggestion with respect to a particular content item, set of content items, or folder. Ignore count—a number of times users have ignored (i.e., did not visit or view) a content item after it was offered as a suggestion. Ignore relationship count—a number of times users have ignored a content item after it was offered as a suggestion with respect to a particular content item, set of content items, or folder. Save rate—a measure of the rate at which a content item has been saved over a period of time (for example, an average of 10 times per hour over the last 24 hours). Other examples similar to this constraint include measures of the rate at which a content item has been previewed, viewed, ignored, deleted, blacklisted, etc. over a period of time. Deleted count—a number of times users have deleted a content item, i.e., dissociated the content item with a folder. Link traversal count—a number of times users have traversed a link between a first content item and a second content item that is offered as a suggestion for the first content item. The link traversal count can include the number of traversals from the second content item to the first content item, the number of traversals from the first content item to the second content item, or both. Such traversals can, for example, be captured by embodiments of the Suggestion Assistant described below. 
Red flag count—the number of times users have marked an item as offensive, obscene, or otherwise inappropriate. Content items for which the red flag count has reached a certain threshold may automatically be excluded from all further suggestions. FIG. 7 illustrates an exemplary embodiment of a method for applying constraints to a pool of possible suggestions. The method begins at Step 710 with selection of a basis data set. The basis data set can be a single content item, a set of content items, or a folder. At Step 720, the specific relationship to be evaluated is selected. Then at Step 730, the selected relationship is evaluated with respect to the basis data set and the appropriate content items in the content repository, to locate content items that satisfy the relationship. At Step 740, each of the located content items is evaluated against one or more constraints. The content items that match the constraint(s) are added to the pool of possible suggestions at Step 750. Finally, at Step 760, suggested content items can be selected from the pool of possible suggestions. Synonym Interchangeability Synonym interchangeability is a principle stating that, if two content items appear together sufficiently frequently, then for the purposes of certain analyses, one content item may act as a substitute for the other. The desired frequency threshold is the parameter "p" for the relationship "SY" defined previously. This parameter may be set as a constant, or selected by a user, an administrator, or an algorithm that has a specific goal for making use of the concept of interchangeability. For example, if the parameter is set to the value 0.95, and if C2 appears in at least 95% of the folders in which C1 appears, then C2 will be identified as a synonym of C1, or using relationship terminology, C1:SY(p):C2. With this fact established, certain analytical functions of the suggestion engine may choose to consider C1 and C2 to be interchangeable.
At the folder level, a folder Fx may contain C1, but not C2; and a folder Fy may contain C2 but not C1. Then, as an optional feature of embodiments of the present invention, a method such as Method 1.1, described above, may allow the C1 belonging to Fx to be substituted for a C2 for the purpose of evaluating the SU(1) relationship. With this substitution in place, both folders can appear to contain C2, such that Fx:SU:Fy. Note that the terms "substitute" and "substituted," above, are used somewhat loosely. In reality, when a synonym interchangeability option is enabled for a method, the method can take a temporary action to evaluate the folder as if it contained the substitute. The substitution step can be implemented in at least two ways: (a) at least temporarily replace the original item with its synonym; or (b) add the synonym to the folder, such that both items are present simultaneously. Enabling synonym-based substitution can allow any of the suggestion engine methods to include a broader set of candidates for offering suggestions to users. If the parameter governing the synonym relationships is tuned to be sufficiently high, the suggestion relevance is expected to generally still be good while providing an opportunity to find additional valid suggestion candidates. Note that the two different synonym relationships SY and JS can lead to different results for suggestion generation methods that employ substitution. Recall that relationship SY is not symmetrical. C1:SY(p):C2 means that C2 appears in (p*100)% of the folders that contain C1. However, a vastly greater number of folders could contain C2, without also containing C1. One interpretation of such a situation is that C2 can act as a good substitute for C1, since it is highly likely to appear wherever C1 appears; however, the converse may not be true; that is, C1 may not act as a good substitute for C2.
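The asymmetric SY(p) test described above can be computed directly from folder membership counts. The following is a minimal sketch, assuming folders are represented simply as sets of content-item IDs; the function name and example data are illustrative only, not part of the disclosed method:

```python
from itertools import combinations

def find_synonyms(folders, p=0.95):
    """Return directed pairs (a, b) such that a:SY(p):b, i.e., b appears
    in at least (p*100)% of the folders that contain a."""
    item_counts = {}   # item -> number of folders containing it
    pair_counts = {}   # (a, b) -> number of folders containing both
    for folder in folders:
        for item in folder:
            item_counts[item] = item_counts.get(item, 0) + 1
        for a, b in combinations(sorted(folder), 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
    synonyms = []
    for (a, b), together in pair_counts.items():
        if together / item_counts[a] >= p:
            synonyms.append((a, b))  # b can substitute for a
        if together / item_counts[b] >= p:
            synonyms.append((b, a))  # a can substitute for b
    return synonyms

# C2 appears in 2 of 2 folders containing C1, so C1:SY(0.95):C2 holds;
# C1 appears in only 2 of 4 folders containing C2, so the converse fails.
folders = [{"C1", "C2"}, {"C1", "C2"}, {"C2"}, {"C2"}]
print(find_synonyms(folders, p=0.95))  # [('C1', 'C2')]
```

Because the test divides the joint count by each item's own folder count, the relationship is directional, mirroring the asymmetry of SY noted above.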
On the other hand, relationship JS is symmetrical and therefore can be used to establish bidirectional interchangeability of content items. Template for Additional Suggestion Generation Methods The set of suggestion methods presented herein is not exhaustive. To construct additional methods, the following general template approach may be followed: (1) Select a basis data set. (2) Select a relationship that can be evaluated with respect to that basis data set. The term "relationship" is inclusive of any variants that extend or alter the way in which the relationship relates neighbors to each other (for example, multi-hop, use of synonym interchangeability, etc.). (3) Using the basis data set and the relationship, find the entities (folders or content items) that satisfy the relationship. (4) If any constraints are enabled, apply the constraints to filter the set of entities. (5) If the located entities are content items, add them to the suggestion pool. (6) If the located entities are folders, add the content items contained in those folders to the suggestion pool, except for any items that are already found in the basis data set. The template approach above can be applied to any of the relationships disclosed above, either explicitly, as a broad class of relationships, or to any other relationships known in the art. In each case, the result is a method for generating suggestions whose characteristics are based on the properties of the selected relationships and constraints. Varying Suggestions Embodiments of the suggestion generation methods discussed above add one or more suggested content items to a pool of suggested content items. The pool may be very small (for example, only several content items) or very large (for example, hundreds or thousands of content items). Accordingly, because of display constraints, a user may only be able to see a subset of the pool at any one time, but be able to request more suggested content items on demand.
The order in which suggested content items are presented to the user may thus influence how often suggested content items are ever seen by users. Embodiments of the invention may be configured to vary suggestions to users based on a variety of factors. Variation decreases the likelihood that the suggestion engine will present the same suggestions to a user at different points in time under similar circumstances. Variation methods can be applied at the time suggestions are added to a pool of suggestions and/or at the time when suggestions are selected from the pool and presented to the user. Specific variation methods may be selected and/or invoked by individual users, or they may be built into one or more of the algorithms employed by embodiments of the invention. In the latter case, users may exercise some control over the variation methods through preferences and/or controls available to the user via a user interface (for example, the Suggestion Assistant described further below). The following are some example variation methods: Random variation. A random variation method selects suggested content items randomly from the pool of suggestions or applies a random test to select or discard suggestions as they are being added to the pool. Random variation methods can be combined with other variation methods. Date-time stamp. A date-time stamp variation method uses a content item's date-time stamp property to vary suggestions. For example, such a method may randomly filter content items from the pool of suggestions using a weighted coin toss algorithm in which content items that have been saved more recently are less likely to be discarded. View history. A view history variation method uses a user's view history property to vary suggestions. For example, such a method may filter from the pool of suggestions any content items that have been seen by a user within some specified period of time. Synonym variation.
A synonym variation method selects synonyms of suggested content items and presents the synonyms in conjunction with or as an alternative to the suggested content items. For example, such a method may select synonyms of suggested content items and present them to a user when the user has already seen the suggested content items. Score bands. A score band is a series of value categories, such as TOP, HIGH, MIDDLE, LOW, and BOTTOM, which serve as a way of simplifying a range of actual score values. Scores can be used to represent various properties of content items such as the quality or popularity of particular content items. For example, as discussed above with respect to the quality rating constraint, a numerical score or star rating may be used to indicate how much a particular content item is liked by users who have viewed and rated the content item. A score band variation method varies suggestions by selecting content items from one or more of the bands using an algorithm such as a weighted round-robin algorithm. For example, a score band variation method might select five content items with scores in the "TOP" band for every one content item with a score in the "BOTTOM" band. In this manner, a user is more likely to see suggested content items with higher scores, but suggested content items with lower scores may still be given an opportunity to be offered to users, and ultimately, receive increases in their scores. Prioritizing Suggestions In addition to varying suggestions, it may be desirable to prioritize certain suggestions for a variety of reasons. For example, users might be more interested in a suggested content item that has a statistically strong relationship to an item of interest than a suggested content item that has a statistically weaker relationship to the item of interest.
In another example, users interested in news may want to receive suggestions for breaking news stories of national or international significance, even if those stories have not yet been saved by many users. Similarly, content items with very high save rates over a recent period, but relatively low save counts, may serve as better suggestions than content items with low save rates over a recent period, but high save counts. Or, there may simply be content items that deserve a chance to become more popular, but are at risk of being overshadowed by content items that have been in the content repository for longer periods of time. Methods for prioritizing suggestions can be applied at the time suggestions are added to a pool of suggestions and/or at the time when suggestions are selected from the pool and presented to the user. Specific prioritization methods may be selected and/or invoked by individual users, or they may be built into one or more of the algorithms employed by embodiments of the invention. In the latter case, users may exercise some control over the prioritization methods through preferences and/or controls available to the user via a user interface (for example, the Suggestion Assistant described further below). Prioritization methods may prioritize content items by increasing the likelihood or guaranteeing that a content item will be selected from a pool of suggestions. Prioritization methods may also affect the ordering of suggestions so that higher priority suggestions are presented to a user before lower priority suggestions. The prioritization methods may assign and update a content item's priority, for example, based on a numerical scale of 0-10 or priority levels such as low, medium, and high. Prioritization methods may also operate in conjunction with variation methods in selecting suggestions to present to users. The following are some example prioritization methods: Strength of relationship.
A strength of relationship prioritization method assigns priorities to content items based on the statistical strength of the relationship between the content items and other content items, sets of content items, or folders of interest. In other words, priorities may be assigned according to the degree by which relationships exceed specified thresholds, ratios, or other parameters associated with relationships. For example, a content item that satisfies an N(j) relationship and exceeds the threshold j by a factor of 10 may be assigned a higher priority than a content item that satisfies the relationship but only exceeds the threshold j by a factor of 2. User preference. A user preference prioritization method assigns priorities to content items that, based on their properties or other metadata, correspond to user preferences. For example, a user may specify that he or she prefers content from certain sources or by certain authors. Content items matching these preferences are assigned higher priorities, and are therefore more likely to be presented as suggestions, than content items not matching these preferences. Save rate. A save rate prioritization method assigns priorities to content items according to their save rates and any corresponding policies established by users or embodiments of the invention. For example, a policy may specify that content items with very high save rates over a particular period of time, but low save counts, be given higher priorities than content items with only high save counts, but low save rates over the same particular period of time. Infancy. An infancy prioritization method assigns priorities to content items based on how recently they have been first saved by any user. For example, such a method may assign a higher priority to a content item that was first saved by any user within the last hour than a content item that was first saved by any user several weeks ago. 
In this manner, users may be more likely to discover content that, simply by being new, has not yet had a chance to be saved by many users. Additional prioritization methods may be contemplated by one of ordinary skill in the art based on properties of content items, relationships, and combinations thereof without departing from the scope of the invention. Avoiding Stale Suggestions Embodiments of the invention may also be configured to avoid stale suggestions. A stale suggestion is a content item for which one or more of its properties indicate that the item is outdated, unpopular, no longer relevant, or generally a lesser quality suggestion. For example, a downward trend in its save rate or an upward trend in its deleted count may indicate that the content item is stale. In some embodiments, stale suggestions can be avoided by filtering them out as suggestions are being added to a pool of suggestions and/or at the time when suggestions are selected from the pool and presented to the user. Staleness-avoidance methods may be selected and/or invoked by individual users, or the methods may be built into one or more of the algorithms employed by embodiments of the invention. In the latter case, users may exercise some control over the staleness-avoidance methods through preferences and/or controls available to the user via a user interface (for example, the Suggestion Assistant described further below). The following are some examples of techniques to avoid stale suggestions: Date-time stamp. To avoid stale suggestions using a date-time stamp, a date-time stamp threshold can be used to filter out suggestions that have not been saved by any user within some recent period of time. Similarly, embodiments of the invention can create a date-time stamp “window” that restricts suggestions to a bounded date-time range, and then move that window over time. Save rate. 
Because the save rate may indicate the rate at which the popularity of a content item is increasing or decreasing over a period of time, this property can be used to filter out suggested content items that have become stale. For example, if fewer people are saving a content item today than were saving the content item a week ago, such behavior can be considered a downward trend in popularity. Such a content item may be considered stale if its save rate drops precipitously over a short period of time or gradually over a long period of time. Using Archived Content to Generate Suggestions For efficiency purposes or otherwise, embodiments of the invention (for example, the content repository) may store links (for example, URLs) to content items instead of the content items themselves. These linked content items (for example, web pages) may include dynamic content that can change or even disappear over time. Embodiments of the invention thus enable users to save linked content items in one of two ways. If a user wishes to save a linked content item for its general content (for example, a blog or news web page that changes frequently), then the user may choose to save only the link. Alternatively, if a user wishes to save a linked content item for its specific content at the time it is saved (for example, a specific news article), the user may choose to save a static version or “snapshot” of the content item in addition to the corresponding link. In some embodiments, the content repository may employ an algorithm to automatically make this election on behalf of the user, for example, based on how frequently the item has been observed to change throughout its history in the repository. Where a content item in the information space changes multiple times, there may thus be multiple versions or snapshots of that content item saved by one or more users. 
In an embodiment, each one of the snapshots is stored as an independent content item, meaning each snapshot may be associated with its own folders and have its own relationships. Accordingly, the suggestion generation methods discussed above may identify one or more snapshots of a content item independently of other snapshots of the same content item. In addition, the suggestion generation methods discussed above may be applied independently to the separate snapshots in order to provide suggestions that are relevant to each of them. While it may be desirable to save different snapshots for a content item when the differences among the snapshots are significant, it may be undesirable to do the same when the changes are trivial (for example, where a date stamp within a content item updates on a daily basis, but the remainder of the content is static). Accordingly, embodiments of the invention may compare a snapshot that a user wishes to save with other existing snapshots to determine whether there are any non-trivial differences. Such a comparison may be performed by conventional tools for comparing two documents, web pages, etc. If the differences are trivial, embodiments may save only a previous snapshot of the content item. If the differences are significant, however, embodiments may save a new snapshot of the content item. In the same or alternative embodiments, snapshots may be saved with pointers to other snapshots of the same content item. Or, in another embodiment, all snapshots for a particular content item can be saved under a common identifier for that content item. In either implementation, alternative versions of a content item may be provided to a user as part of a single suggestion. For example, a suggestion that includes a snapshot of an older version of a content item may include a link to a more recent or current snapshot of the content item, thereby permitting the user to quickly jump between versions. 
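The trivial-versus-significant snapshot comparison described above can be sketched with a generic text-similarity test. In the sketch below, the 0.98 similarity threshold and the date-stamp normalization rule are illustrative assumptions, not specifics of the disclosed method:

```python
import difflib
import re

def strip_volatile(text):
    """Remove fragments expected to change trivially (here, ISO-style
    dates); the normalization rule is an illustrative assumption."""
    return re.sub(r"\d{4}-\d{2}-\d{2}", "", text)

def is_trivial_change(old_snapshot, new_snapshot, threshold=0.98):
    """Treat the new snapshot as a trivial revision when the normalized
    texts are nearly identical; 0.98 is an illustrative threshold."""
    ratio = difflib.SequenceMatcher(
        None, strip_volatile(old_snapshot), strip_volatile(new_snapshot)
    ).ratio()
    return ratio >= threshold

old = "Weather report for 2024-01-01: sunny skies expected all week."
new = "Weather report for 2024-01-02: sunny skies expected all week."
print(is_trivial_change(old, new))  # True — only the date stamp changed
print(is_trivial_change(old, "Severe storm warning issued today."))  # False
```

Under this scheme, only a significant difference would cause a new snapshot to be stored; a trivial one would leave the previous snapshot in place.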
Handling Multiple References to the Same Content Just as web pages and other dynamic content can change over time, so can their corresponding addresses in the information space, also referred to as links (for example, URLs on the World Wide Web). For example, a web page may be moved to a new location, leaving the old URL pointing to empty content. There may also be multiple current links corresponding to the same content. For example, a web server may "redirect" a request comprising a shorthand or alternative link for a web page to the actual link for the web page. Additionally, a single web page or other content item may comprise multiple versions that are each dependent on, for example, whether a user views the content item from a desktop or mobile device. In such a case, a web server may redirect a request for a desktop version (accessible via a first link) to a mobile version (accessible via a second link), and vice versa. As discussed above, content items may comprise links to various resources, thereby permitting embodiments of the invention to store dynamic content such as web sites and/or web pages according to their links. For example, in one such embodiment, when a user saves or associates a web page with a folder, the content repository may mark the web page's corresponding link as being associated with the folder. Accordingly, it is conceivable that users may save two or more different links corresponding to the same web page as independent content items. In some embodiments, treating different links corresponding to the same content as separate content items may skew the suggestion generation methods in undesirable ways. For example, the content may be less likely to be suggested because the relationships associated with each content item will be evaluated separately. Alternatively, a user might receive the same content as two separate suggestions.
In some embodiments, the suggestion engine may address these behaviors by identifying instances in which two or more links correspond to the same content item and consolidating the links to a single content item with one or more aliases (i.e., alternative links for the content item). In one such embodiment, the content repository may first determine that two links correspond to the same content item by intercepting browser communications. For example, a plug-in, extension, or other software component (such as a Result Organizational Tool described below), may interface with a browser to intercept communications between the browser and a web server. Such communications generally include both the originally requested link and the redirected link. The intercepting software may then transmit both links to the content repository. In the same or an alternative embodiment, the content repository may search through all of its stored links, looking for links with similar elements. For example, the difference between two links corresponding to a desktop version of a web page (for example, www.yahoo.com) and a mobile version of the same page (for example, m.yahoo.com) is often very insubstantial and easily identifiable by a pattern-matching algorithm. The content repository may perform such a search on a periodic basis or on demand when a user saves a link. Once the content repository receives and/or identifies two or more links to the same content, it may select one link as the primary link (for example, the link to which other links redirect, if there is such a link), and it may store the other links as alias links together with the primary link. For example, the alias links may be stored as an attribute of the primary link. If this is the first time saving any of the links, then no further action is necessary. 
If two or more of the links have previously been saved, then the content repository may merge the properties and any other data associated with the previously saved links, store the data with the primary link, and delete the non-primary links. Logical Persistence of Content Items and Related Data Embodiments of the invention are able to store, or more specifically to provide logical persistence services for, several broad classes of information relating to content items. The term “logical” refers to which information is to be persisted and maintained and the conditions under which it is accessed, not the specific mechanisms (for example, a database) that may be used to store and manage access to the information, or even the actual form of any underlying data structures. Many different design choices could be made with respect to data store functions, while still respecting the same logical storage design. Such choices are well known by persons of ordinary skill in the art. Embodiments of the invention support at least three primary objectives for logical information persistence:Objective 1: Persist all information saved by users so they can retrieve, inspect, and modify that information. User-saved information includes content items saved by users, as well as user-specific data, such as personal preferences, personal configurations, personal settings, and personal account data.Objective 2: Persist information that reflects user behaviors and indications with respect to their manipulation of content items and/or suggestions. The behaviors and indications may include personal information and/or anonymous information. The behaviors/indications may be explicit (for example, a user dismisses a suggestion, indicating she is not interested in it); or they may be implicit (for example, a user previews a suggestion, but then shows no further interest in it, neither clicking through to the web page, nor saving the corresponding link). 
This information often takes the form of metrics, characterizing user behaviors with respect to their manipulation of content items in the data store. The metrics can include aggregations of user behaviors and indications across many or all users in the system.Objective 3: Persist information that is derived from a user population's saved data, such as data described in Objective 1, as well as behavioral/indication data described in Objective 2. The purpose of derived information is to accelerate algorithms and decisions needed to support certain features of a suggestion engine system. For example, an algorithm for providing suggestions to a user with respect to certain content may require the inspection and use of data associated with many objects in the data store. If part or all of the analysis of these objects can be performed in advance and then stored, the algorithm that provides suggestions can run much faster, which may be necessary to make the algorithm sufficiently responsive to be useful when accessed by live users via a user interface. User Data User data reflects information that embodiments of a suggestion engine system may have saved about a user. The primary components of user data are enumerated below and described from a user's perspective: My folders and their content. My Folders and their content may include a user's content items, as well as the user's folders containing both content items and other folders in a nested fashion. Each folder may have a unique ID. The content of a folder may be represented as a set of IDs, where each object (for example, a content item) has its own ID. The IDs may identify the objects of interest within the data store or content repository. My data items. My data items may include a user's content items, web links, rich text documents, images, saved notes, emails, and other types of objects. Each data item may have a unique ID and may also carry information indicating which type of data item it is. 
Common Elements. Certain data items are entirely personal to a user (for example, notes or annotations) and have nothing in common with the data items of other users. However, certain data items may contain some information that can be shared with other data items in the data store. For example, if two users have saved a data item of type "web link" referring to the same web page "www.sample.com", they may each have their own personal notes associated with the data item. However, the URL "www.sample.com" may be identical for both users and can be shared. The same is true for additional data that is proper to the URL and its associated web page, such as the title of the page; or a summary derived from the page; or one or more images that are extracted from the page to serve as its visual representation; or metrics associated with the web page which may pertain to a community of users in general. Common elements, such as URLs in the previous example, may be stored just once in the data store, given an ID, and referred to by other objects by using that ID. So, in the previous example, assume that user A and user B both save data items that are web links for www.sample.com. Then, in the data store, two data items, DataItem-A, and DataItem-B are persisted, one for user A and one for user B. A separate object called a "Link" (for example) is created to capture information that concerns www.sample.com, from a global perspective (i.e., not user-specific), and is given an ID, such as LinkID-1. DataItem-A and DataItem-B both contain a data member (for example, a field in a database, or a data structure member) indicating that their web link has ID=LinkID-1. This technique can also be applied to PDFs, images, or other types of documents that are in the public domain and of interest to multiple users. My preferences, which govern the behavior of certain features that a user is given permission to control.
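The DataItem/Link example above can be sketched as follows. The class, field, and ID names below are illustrative stand-ins for whatever underlying data store is used:

```python
class DataStore:
    """Minimal sketch of common-element sharing: per-user data items
    refer to a single shared Link object by ID."""
    def __init__(self):
        self.links = {}        # link_id -> {"url": ..., "save_count": ...}
        self.data_items = []   # per-user items, each holding a link_id
        self._next_id = 1

    def _link_id_for(self, url):
        # Reuse the existing Link object if this URL was saved before.
        for link_id, link in self.links.items():
            if link["url"] == url:
                return link_id
        link_id = f"LinkID-{self._next_id}"
        self._next_id += 1
        self.links[link_id] = {"url": url, "save_count": 0}
        return link_id

    def save_web_link(self, user, url, notes=""):
        link_id = self._link_id_for(url)
        self.links[link_id]["save_count"] += 1  # global, not user-specific
        item = {"user": user, "link_id": link_id, "notes": notes}
        self.data_items.append(item)
        return item

store = DataStore()
a = store.save_web_link("user-A", "http://www.sample.com", notes="read later")
b = store.save_web_link("user-B", "http://www.sample.com", notes="great site")
print(a["link_id"] == b["link_id"])             # True — one shared Link
print(store.links[a["link_id"]]["save_count"])  # 2
```

Note how each user keeps private notes on a personal data item, while the URL and its aggregate metrics live once on the shared Link object.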
User Behaviors and Indications Embodiments of the invention provide methods that permit a user to interact with various content items/objects/data items (these terms are used interchangeably). Information relating to user behaviors and indications with respect to the data items can be saved or persisted. Saved information may include interactions with a user's own private data, such as data items the user has saved. For example, the system may keep track of how many times each user has accessed each saved item. Saved information may also include user interactions with common elements. For example, embodiments of the invention may track the number of times that a particular web page was presented as a suggestion and also the number of times that the suggested web page was accepted (i.e., saved) by the user to whom it was presented. Since a web page is a common element, the counter can reflect the aggregate behavior of many users with respect to that item. Furthermore, the same user interaction may cause an update to occur on both a private data item and a common element. Using the example above, when a user accesses a saved web page, not only can embodiments increment the count reflecting that particular user's behavior with respect to his own saved data item, but embodiments can also adjust the metrics associated with the common element (i.e., the web page) referred to by the user's data item. Derived Data for Suggestion Analytics Derived data would not be necessary if computers were infinitely fast at calculating, storing, and retrieving information. 
Since computers do not have those capabilities, and embodiments of the invention repeatedly need certain information within shorter time frames than those in which the information could practically be calculated, some embodiments of the invention will compute certain information in advance, also known as “pre-computing.” In some cases, pre-computing is performed by embodiments via batch processes that may run periodically over appropriate portions of the data set in order to compute the desired result. The result is then stored and made available for any algorithm or feature that wishes to use it. Periodically, the batch processes can be executed again in order to obtain up-to-date pre-computed data. In certain other cases, it is possible and economical, from a computational perspective, to maintain the desired information incrementally. This means that as changes are made to the state of the overall data store, the resulting changes in derived data can be calculated without having to recompute the entire derived data from scratch, as is typically done in the batch process approach. An example of a derived result is a summation of a certain field across all of the objects of a certain type. As long as the summation is saved and is correct, then when a new object is created, the summation algorithm merely has to add the contribution of that new object to the summation. Similarly, if an object of that type is deleted, the summation result merely has to be decremented by the contribution of the deleted object. Certain information key to the operation of the data store may be saved by embodiments using the incremental technique described above. This information is particularly useful for the algorithms that compute suggestions for content that is considered likely to be of interest to users.

Copresence Counts
For example, a key relationship for suggestion analytics is the “copresence count” for every pair of content items.
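The incremental maintenance of a derived summation described above might be sketched as follows (an illustrative model; the class name and the field name "value" are assumptions):

```python
class RunningSum:
    """Maintain a summation of one field across all objects incrementally,
    rather than recomputing it from scratch in a periodic batch pass."""
    def __init__(self, objects=()):
        self.total = sum(obj["value"] for obj in objects)  # one-time batch pass

    def on_create(self, obj):
        self.total += obj["value"]  # add the new object's contribution

    def on_delete(self, obj):
        self.total -= obj["value"]  # subtract the deleted object's contribution

s = RunningSum([{"value": 3}, {"value": 5}])
s.on_create({"value": 4})  # total is now 12
s.on_delete({"value": 3})  # total is now 9
```

As long as every create and delete event is routed through these hooks, the stored total stays correct without any batch recomputation.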
Two content items are considered “copresent” (also referred to as “neighbors”) if at least one user has saved them both in the same folder. The number of times that this occurs, across all users, is called the “copresence count” for that pair of content items. For most potential pairs of content items this count will be zero, because most pairs of content items will not be stored together in the same folder by any user. In some embodiments, such copresence counts are not represented explicitly in the data store or content repository. The absence of a copresence count can imply that the value is zero. Determining copresence counts for any arbitrary content item in the data store could require a vast number of read operations and calculations if the algorithm were to start from scratch. However, it may be desirable for the suggestion generation methods to quickly access the non-zero values for any content items. The question to answer is: “for content item A, what is the set of content items that have non-zero copresence counts with content item A?” To support answering this question quickly, embodiments of the data store or content repository can maintain, with respect to every content item, a collection of all related content items with non-zero copresence counts. The collection is actually a set of link IDs and associated copresence counts. This data can be maintained in an incremental fashion each time a content item is saved to a folder by any user, each time a content item is deleted from a folder, and each time a content item is moved from one folder to another. Similarly, when folder-level operations occur, such as a folder deletion, the copresence counts are appropriately adjusted for items that were contained by that folder.

Folder Set Information
Another critical relationship for suggestion analytics connects a content item to the folders that contain it or are associated with it.
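The incremental bookkeeping for copresence counts described above might look roughly like the following sketch, in which folders are modeled as simple sets of item IDs (names and data layout are illustrative assumptions, not the actual data store):

```python
from collections import defaultdict

# copresence[item] maps each neighbor to its nonzero copresence count;
# absent entries represent a count of zero, as described above.
copresence = defaultdict(lambda: defaultdict(int))
folders = defaultdict(set)  # folder id -> set of content item ids

def save_item(folder, item):
    """A user saves an item into a folder: bump pair counts with neighbors."""
    for other in folders[folder]:
        if other != item:
            copresence[item][other] += 1
            copresence[other][item] += 1
    folders[folder].add(item)

def remove_item(folder, item):
    """An item leaves a folder: decrement pair counts, dropping zeros."""
    folders[folder].discard(item)
    for other in folders[folder]:
        for a, b in ((item, other), (other, item)):
            copresence[a][b] -= 1
            if copresence[a][b] == 0:
                del copresence[a][b]  # zero counts are not stored explicitly

def move_item(src, dst, item):
    remove_item(src, item)
    save_item(dst, item)

def delete_folder(folder):
    """Folder-level deletion: adjust counts for every item it contained."""
    for item in list(folders[folder]):
        remove_item(folder, item)
    del folders[folder]
```

Every folder operation routes through these hooks, so the non-zero neighbor counts for any item are always available without a scan of the whole store.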
Since multiple separate users can independently save the same content item, this is a one-to-many relationship. In an embodiment, where a folder is said to contain a content item, it means that the folder contains or is associated with a data item referring to the content item. With this context, when analyzing a content item, one of the questions of interest is: “Which folders contain the content item?” Computing this result from scratch would require a traversal of all the folders in the system to determine which ones contain the content item of interest. Since it may be desirable for the suggestion generation methods to acquire this information in a short time frame, embodiments can keep the information ready at all times by maintaining a “folder set” for each content item. A content item's folder set is maintained through incremental updates. Each time a content item is added to, or removed from, a folder, the appropriate information can be adjusted accordingly. Similarly, when a folder is deleted, it can be removed from the folder sets of all the content items that it contained immediately prior to its deletion.

Folder-Based Suggestions: First Example Method
In an earlier section describing methods for generating suggestions for a set of content items, Method 2.1 evaluated the “Specific Commonality Neighbors (SP)” relationship of a set of content items to find folders that contain a specific subset of the set of content items. When the content repository maintains folder set information for each content item (a list of which folders contain the content item), the task of finding the desired folders involves traversing the list of folders in the folder set. That is, the items of interest already “know” all of the folders that contain them.
Then, for each item of interest, a folder-based suggestion method could compile all of the folder sets associated with the items of interest, and then compute the intersection of the folder sets to obtain a final set of folders to examine. The folder-based suggestion method could then extract the content items from the final set of folders, optionally rank each of them based on how many times it appeared across all of the folders in the final set, and add them to a pool of potential suggestions. Another earlier section describes Method 3.1 for folder-based suggestions, which uses the “Sufficient Commonality Neighbors (SU)” relationship. This method does not rely on specific items, but instead considers the entire basis folder “F.” The method discovers folders that contain at least j items in common with F. Of course, the various discovered folders need not all have the same intersection with F. This method can also take advantage of the availability of folder sets. To find the desired folders, a folder-based suggestion method may begin by looping through all of the items in F, and for each item, obtaining its folder set. The collection of folder sets is then merged to produce a set of pairs where the first element in the pair is a folder, and the second element is the count of the number of times the folder appeared in all of the folder sets. The count must be at least 1, but it may or may not be greater than or equal to j, the threshold value. Folders having a commonality count less than j can be removed, since they do not contain enough of the original items in F to meet the required threshold. The remaining folders are the ones of interest. To produce items from the final set of folders, an additional step extracts the content items from the folders, optionally ranks the content items based on how many times they appeared across all of the final folders, and adds them to a pool of potential suggestions.
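As a rough illustration of the Method 3.1 procedure just described (merging folder sets, applying the threshold j, and ranking the extracted items), consider the following sketch; function and variable names are hypothetical:

```python
from collections import Counter

def sufficient_commonality_folders(basis_items, folder_set_of, j):
    """Find folders whose contents share at least j items with the basis
    folder. folder_set_of(item) returns the set of folders containing item."""
    commonality = Counter()
    for item in basis_items:
        for folder in folder_set_of(item):
            commonality[folder] += 1
    # discard folders below the commonality threshold j
    return {f: c for f, c in commonality.items() if c >= j}

def suggestions_from_folders(folders, items_of, exclude):
    """Extract content items from the discovered folders and rank them by
    how many of those folders they appear in."""
    ranked = Counter()
    for folder in folders:
        for item in items_of(folder):
            if item not in exclude:
                ranked[item] += 1
    return ranked.most_common()
```

In practice the basis folder itself would be excluded from the discovered folders, and the items of F excluded from the ranked suggestions, as in the examples that follow.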
Folder-Based Suggestions: Second Example Method
Folder sets also allow suggestion generation methods in the embodiments to follow a content item to other folders. This is in contrast to the copresence data, which provides a way of traversing from one content item to other content items. In most cases, the goal of a suggestion generation method is to produce suggested content items and not folders. However, by propagating to other folders, it is possible to discover information that is not available merely through copresence counts. One such case occurs when providing suggestions for a set of content items, as opposed to an individual content item. A special subcase of this capability would be, for example, providing suggestions for an entire folder. Suppose that the goal is to determine all of the content items that are copresent with any of the content items in a folder F, and to count how many times those content items are copresent. An algorithm could simply loop through all of the content items in F, and for each one, obtain the copresent links and their respective counts. Then, for each of the copresent content items, the algorithm could add up the counts that it had collected with respect to each of the content items in F. However, if another folder contains a content item that is copresent with multiple content items in F, it may be undesirable to count that content item multiple times, as this would amount to redundantly accounting for the content item's presence within that folder. In other words, the content item would be present only once in the folder, but may be counted multiple times. Thus, copresence counts alone are insufficient to obtain an answer.
The following simple example, based on these folders and their contents, illustrates the reason why:

F1 contains content items (A), (B)
F2 contains content items (A), (X), (Y)
F3 contains content items (A), (B), (X)

If the suggestion engine executes an algorithm to determine suggestions for folder F1, one approach would be to use copresence counts for the content items contained in F1. Doing so, the algorithm would determine the following:

A's copresent content items and counts are: (B=2); (X=2); (Y=1)
B's copresent content items and counts are: (A=2); (X=1)

When determining suggestions for folder F1, A and B are uninteresting for suggestion purposes, since they are already part of F1, leaving only X and Y. One must aggregate the data for content items that appear on behalf of multiple content items in F1. In this case, X is the only such content item because X is the only content item copresent with A and/or B and has a count greater than one. The question now arises: should the count for X be 3, which one would obtain by adding the count on behalf of A to the count on behalf of B? Or, on the other hand, since X appears only twice throughout all the folders, should the count be 2? Both are legitimate answers with different interpretations, but suppose that one desires to adopt the latter approach, and not count X twice when it occurs in F3, merely because both A and B are present together in F3. Under this approach, there is insufficient information with just the copresence counts. Access to the folders themselves is required in order to detect that redundant counting would occur. To complete the example, the following reasoning illustrates a way to obtain the desired copresent content items and aggregated counts for F1.
First, begin with the folder sets, which are always maintained in a correct state.

A's folder set is: F1, F2, F3
B's folder set is: F1, F3

F1 is uninteresting, since it is the basis folder for computing suggestions, so the remaining folders of interest are the union of {F2, F3} and {F3}, which is {F2, F3}. Looping through the content items contained in F2 and F3 to determine their total counts, counting each instance only once, results in:

A=2
B=1
X=2
Y=1

A and B are uninteresting since they are already in F1, and therefore are not useful suggestions. The remaining useful results are X=2 and Y=1. As the two folder-based examples illustrate, pre-computed folder sets provide a useful tool to simplify and accelerate the generation of certain suggestions. Other suggestion methods can also leverage folder sets for their implementation, including for example, Method 3.2 above, which uses the “Proportionate Commonality Neighbor (PC)” relationship.

Data Store Consistency
Another important use for folder sets is for maintenance and consistency of the data store or content repository. When a content item that is a common element is deleted, it is necessary to update all of the data items that refer to that content item. Note that users would not normally be able to delete the common element representation of a content item since it belongs to many users. However, there may be times when the system itself decides to delete the common element. For example, if the content item's URL has become invalid as a result of the page or domain being removed, then embodiments of the suggestion engine system (for example, the content repository) may detect this fact, and then choose to delete the content item entirely. It may also be desirable for an administrator of an embodiment of the system to have the capability to delete a common element because it has been determined to be inappropriate for users to see.
At that time, it is appropriate to either delete all of the data items that refer to the content item, or to mark them as having a special status so that users can be warned when the content item is displayed. Regardless of the specific policy, there is a need to traverse from the content item as a common element to all of the data items that refer to it. The folders that contain the data items would also be affected if the policy is to delete the data items. Obtaining the set of affected data items is easily accomplished by using the folder set of the deleted content item. Taking each folder in the folder set, the algorithm could simply identify the data item in each folder that refers to the deleted content item.

Selecting Folders for Content Items
As discussed throughout, when a user encounters a new content item (i.e., as a suggestion or otherwise), he or she may save the content item for future use. Embodiments of a suggestion engine may possess semantic information about the content item: for example, the names of relevant folders in the content repository where the content item may be found; metadata concerning the content item and/or its associated folders; other content items in the related folders; and other information relating to the circumstances in which the folders and content items were created, including correlations between the new content item and the content items that have already been organized and saved in the folders. Using this information, embodiments of a suggestion engine may recommend to the user a specific folder or set of folders (including a new folder or set of folders to be created) where the new content item may be saved, in order to be consistent with the user's organizational scheme. In the same or alternative embodiments, a suggestion engine may automatically select an existing folder or a new folder without user input.
For example, when a user elects to save a content item, the suggestion engine may automatically save the content item to a specific folder (i.e., a new folder or an existing one) without requiring the user to make a selection. FIG.8illustrates an exemplary embodiment of methods that can be used to recommend or automatically select an existing folder or a new folder in which to save a content item of interest. At Step810, the method may first evaluate a user's existing folders to see if any of them are a good fit for the content item. The folders can be evaluated, for example, by determining the copresence count for the content item of interest (i.e., the content item to be saved) with respect to each content item in each existing folder. By summing the copresence counts for each existing folder, one or more folders with the highest sums can be selected as the most appropriate destination(s) for the content item of interest. At Step810, copresence counts may be supplemented by also considering multi-hop neighbors. For example, a content item of interest and a content item from an existing folder may not be copresent (or may have a low copresence count), but each item might separately be copresent with a different common content item. In such a case, a “multi-hop copresence count” (i.e., the lesser of two copresence counts with a common content item) may be calculated. For example, content items A and B may have a copresence count of M, and content items B and C may have a copresence count of N. The lesser of M and N can be considered the multi-hop copresence count of A and C. If this multi-hop copresence count is sufficiently high, then the folder associated with C may be a good recommendation for A. If the copresence counts are low for all existing folders, embodiments may use other methods for recommending an existing folder. 
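The Step810 folder scoring and the multi-hop variant described above might be sketched as follows (illustrative only; the data layout and names are assumptions):

```python
def folder_score(candidate, folder_items, copresence):
    """Step810, direct variant: sum of copresence counts between the item
    to be saved and each item already present in an existing folder."""
    return sum(copresence.get(candidate, {}).get(it, 0) for it in folder_items)

def multi_hop_count(a, c, copresence):
    """Multi-hop variant: a and c share a common neighbor b; the pair is
    scored as the lesser of the two direct counts, min(count(a,b), count(b,c))."""
    best = 0
    for b, m in copresence.get(a, {}).items():
        n = copresence.get(c, {}).get(b, 0)
        best = max(best, min(m, n))
    return best
```

The folder(s) with the highest direct scores would be offered first, with multi-hop counts serving as a fallback signal when direct copresence is low.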
For example, the suggestion engine can examine keywords (for example, from the title or snippet of a Web page) or metadata associated with the content item of interest as well as the content items in a user's existing folders. The suggestion engine can then look for similarities between the content item of interest and the content items in existing folders, and recommend one or more folders with sufficient similarities. At Step820, embodiments can determine whether it is appropriate, based on the evaluations performed thus far, to recommend an existing folder for saving a content item of interest. If an existing folder was located in Step810, the method can proceed to Step830to recommend or automatically select that existing folder. In some cases, however, embodiments may conclude at Step820that no existing folder is an appropriate destination for the content item of interest. Thus, at Step840, embodiments may recommend saving a content item to a new folder. The name of the new folder may be derived from the content item's semantic information, including for example, the names of other users' folders that contain the content item of interest, keywords identified in the content item itself (for example, from the title or snippet of a Web page), or metadata stored with the content item of interest. In embodiments, the keywords and/or metadata may be compared with the other users' folder names to identify common words or phrases. In an embodiment, all potential folder names, keywords, and/or common words or phrases can be processed by collating them, removing certain stop words, and creating a frequency table of 1-word, 2-word, 3-word, etc. phrases. Embodiments of the invention can search for overlaps among the phrases and retain only the overlapping words. For example, if three 2-word phrases contain one common word, then the phrases can be discarded in favor of the common word. 
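The phrase processing just described (collating candidate text, removing stop words, building a frequency table of 1- to 3-word phrases, and discarding phrases in favor of more frequent common words) might be sketched as follows; the stop-word list and the exact form of the collapse rule are assumptions:

```python
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "for", "to"}  # assumed list

def candidate_phrases(texts, max_len=3):
    """Collate candidate texts (folder names, keywords, metadata), remove
    stop words, and build a frequency table of 1- to 3-word phrases."""
    table = Counter()
    for text in texts:
        words = [w for w in text.lower().split() if w not in STOP_WORDS]
        for n in range(1, max_len + 1):
            for i in range(len(words) - n + 1):
                table[" ".join(words[i:i + n])] += 1
    return table

def collapse_overlaps(table):
    """Discard a multi-word phrase when one of its words is strictly more
    frequent on its own (one reading of the overlap rule described above)."""
    kept = Counter()
    for phrase, count in table.items():
        words = phrase.split()
        if len(words) > 1 and any(table[w] > count for w in words):
            continue  # the common word subsumes this phrase
        kept[phrase] = count
    return kept
```

The highest-count entries in the collapsed table would then serve as candidate folder names.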
Once the frequency table is populated, the phrase(s) with the highest frequency count(s) can then be recommended or automatically selected as the name(s) of the new folder(s). When recommending new folders at Step840, embodiments of the invention can implement privacy measures to remove private or personal names from use in generating potential folder names. For example, the suggestion engine may require a certain folder name, keyword, or phrase to appear a threshold number of times in the content repository before it can be suggested as a potential folder name. In this manner, if a user names his folder “Bob's Golfing Sites,” “Bob's” would not be recommended or automatically selected as part of a potential folder name for another user unless “Bob's” appeared a sufficient number of times in other folder names, keywords, and/or phrases. Returning to the recommendation of existing folders at Step810, embodiments may compare the high-frequency phrases with existing folder names, and if one or more suitable matches are located, recommend or automatically select them as existing folders for the content item of interest. In the same or an alternative embodiment, instead of comparing the high-frequency phrases to existing folder names, the suggestion engine may compare the high-frequency phrases with high-frequency phrases generated for each content item within an existing folder. Then, if some threshold number of content items within a folder are suitable matches for the content item of interest, the suggestion engine can recommend or automatically select the existing folder. At Step810, embodiments may also give priority to recently used folders when recommending an existing folder as the destination for a content item to be saved.
A folder can be considered recently used, for example, if it was one of the previous N (where N is an integer) folders to which a content item was saved, if a user saved a content item to the folder within some period of time (for example, within the last 15 minutes), or a combination of these two criteria. When given priority, a recently used folder may be presented to the user before other recommendations and/or it may be analyzed more closely than folders that have not been recently used. For example, if the suggestion engine normally compares only the top 10 high-frequency word combinations to an existing folder name, then it might compare the top 20 combinations to the folder name of a recently used folder, thereby making it more likely that the recently used folder will be recommended or automatically selected. In embodiments, a user can request a suggestion engine to organize all or a portion of the user's saved content items. For each content item supplied by the user, including a folder of content items or a hierarchy of folders of content items, embodiments of the invention can use any of the various teachings associated withFIG.8described above to recommend or automatically select folders in which to save the content items.

Suggestion Engine System Embodiments
FIG.9illustrates an embodiment of a Suggestion Engine System900in accordance with the present invention. The embodiment illustrated inFIG.9provides a Suggestion Engine905that interfaces with a Content Repository910to provide content suggestions to a user operating User Computer915. Content Repository910is a collection of content items that may be provided by users, such as a user operating User Computer915or a user operating User Computer920.
As discussed above, Content Repository910may be structured logically as one or more folder hierarchies, where each folder (for example, Folders925and930) may contain other folders (for example, Folders927and928) as well as content items (for example content items A1, A4and A5shown in Folder925). Other logical structures are also possible, as long as the structure enables users to group or organize content items together. Content items in Content Repository910may be presented to a user in the form of a hierarchically organized set of groupings, stacks, directories, folders, or similar representations. As discussed above, Content Repository910can be implemented using various data structures, including any combination of trees, lists, graphs (cyclic or acyclic, hierarchical or non-hierarchical), databases, and/or other appropriate data structures known in the art. Storage and access methods for Content Repository910may be implemented using cloud-based techniques, which may further include distributed techniques where portions of Content Repository910(including mirror and backup copies) may be located on a plurality of computing devices, an example of which is illustrated as Computing Device1000inFIG.10. Some user-specific portions of Content Repository910may be implemented on a user's own client device, such as a hard disk drive or equivalent device, but the same user-specific portions may also be implemented remotely or virtually using network and storage services known in the art, including cloud-based network and storage services. Content Repository910may employ any type of internal structure or graph to organize content items based on user input. For example, the internal structure of Content Repository910may be implemented as a graph that is cyclic or acyclic. In addition, the internal structure of Content Repository910may be one or more hierarchical trees comprising progressive levels of narrower semantic scope. 
For purposes of illustration, Content Repository910is illustrated inFIG.9as a plurality of hierarchical trees of folders and content items. In this context, the term “folder” is intended to describe any such logical structures known in the art that support organizing and/or grouping content items. Those skilled in the art will recognize that a hierarchical tree is just one form of organized structure that may be used in the embodiments. Other structures are possible and are within the principles of the present invention. Content Repository910may include interface software, including an application programming interface (“API”) and related software methods that may permit users to access Content Repository910and interact with information stored therein. As shown inFIG.9, Content Repository910may include content items, such as A1, A4, and A5, which may be stored in or associated with folders, such as Folder925. For exemplary purposes, content items A1and A4are shown inFIG.9as being commonly associated with multiple folders: Folder925and Folder930. Folder930is additionally shown as being associated with content item A9, which is not found in any other folder. Content Repository910also comprises Folder927and Folder928, both of which are shown as being contained within or associated with Folder925. Folder927is associated with content items B1, B2, and B6. Folder928is associated with content item C1(and later in the discussion will be associated with content items C3and C7). To add new content to Content Repository910, a user may use a computer such as User Computer915to interact with a content source within Network935. Network935may comprise one or more networks, such as a local area network, the Internet, or other type of network, including a wide area network and all types of wireless networks, such as wireless local area networks, and mobile data networks.
In addition, Network935may support a wide variety of known protocols, such as the Transmission Control Protocol and Internet Protocol (“TCP/IP”) and the Hypertext Transfer Protocol (“HTTP”). In some embodiments, Network935may be implemented using the Internet. Content sources (or information spaces) conceptually represent any collection of information provided by a publisher or other source of information. Content sources may comprise various types of content items, such as documents, multimedia, images, etc. Content sources may incorporate various types of storage, such as direct attached storage, network attached storage, and cloud-based storage to store and access information. Search Engine940represents any system or application that is designed to search for information available on the Network935. For example, Search Engine940may correspond to well-known conventional search engines such as Google, Yahoo, Bing, etc., which commonly provide a user interface for searching and presenting search results. In general, Search Engine940may present search results in a list format or similar format. User Computers915and920may be implemented using a variety of devices and software. For example, User Computers915and920may be implemented on Computing Device1000(FIG.10), which may comprise a personal computer, laptop computer, mobile device, such as a smart-phone or tablet computer, etc. User Computers915and920may comprise a memory and local storage (not shown inFIG.9), such as a hard disk drive, flash drive, solid-state drive, an external disk drive, and the like. In addition, User Computers915and920may utilize various types of storage systems and services, such as network attached storage, storage area networks, and cloud-based storage services via Network935or another network.
User Computers915and920may run an operating system, such as the LINUX operating system, the Microsoft Windows operating system, the Apple iOS operating system, the Google Android operating system, and the like. User Computers915and920may also operate a Browser945, such as Firefox by Mozilla, Internet Explorer by Microsoft Corp., Netscape Navigator by Netscape Communications Corp., Chrome by Google, or Safari by Apple, Inc. User Computers915and920may also include software, such as a Suggestion Assistant950, that enables users to interact with embodiments of the invention, for example to save content to Content Repository910, to organize and view content within Content Repository910, and to receive suggestions via Suggestion Engine905. Suggestion Assistant950may operate alone or in conjunction with conventional Browsers945(for example, as a plugin or extension to Browsers945). Suggestion Assistant950can be implemented as an application (including a mobile “app”), a program, a tool, a plugin, an extension, an interactive web page, a widget, or any other type of software. In embodiments, Suggestion Assistant950includes a graphical user interface (“GUI”) for rendering information to a user and/or receiving information from the user. The GUI may include any combination of user interface elements, such as buttons, windows, menus, text boxes, scrollbars, etc., for enabling users to interact with the embodiments. Users may use Suggestion Assistant950(either alone or in conjunction with conventional Browsers945) to: browse content resources (for example, the Internet), view content items (for example, web pages), and/or conduct searches (for example, using Search Engine940). 
Users may also use Suggestion Assistant950to: create folders (for example, Folder928) in Content Repository910, save content items (for example, Content Items C3and C7) to folders (for example, Folder928) in Content Repository910, navigate and view collections of folders and content items (for example, Folder925and Folder930and their corresponding items), organize folders and content items (for example, to include copying, moving, deleting, renaming, and customizing folders and content items), and receive suggestions for folders and content items via Suggestion Engine905. InFIG.9, for example, a user of Suggestion Assistant950on User Computer920has obtained Content Items960(C3and C7). The Content Items960, for example, may have been: discovered through use of a search engine, created by the user, shared by another user, presented as a suggestion, or acquired in any other manner. Using Suggestion Assistant950, the user may then organize at least some of the received content items960by associating them with folder(s) within Content Repository910, for example by associating Content Items960(C3and C7) with Folder928(indicated by actions970and975). The selected folder(s) correspond(s), at least in part, to the user's subjective categorization of the Content Items960. The user content and folder structure (for example, Folder928and its contents) within Content Repository910may then be shared with, published to, or otherwise made accessible to, Suggestion Engine905. Suggestion Engine905may then access content items within Content Repository910and provide new content suggestions to the same user or other users seeking new content. In embodiments, users of Suggestion Assistant950may receive suggestions for folders and content items (including suggestions of folders in which to save content items) via Suggestion Engine905in a variety of ways. 
For example, the GUI of Suggestion Assistant950may include a dedicated suggestion window, which displays previews of suggested content items. The suggested content items may, for example, correspond to one or more folders and/or content items that a user viewed or selected. Users may then select one or more of the suggested content items for more comprehensive viewing and/or saving. In the same or an alternative embodiment, the GUI of Suggestion Assistant950may display suggested content items within tooltips, balloons, pop-up windows, or any other graphical container or textual representation. Such a display may include the content item's content and/or any associated attributes (for example, a text description, a corresponding image, a URL, etc.), including any subsets and combinations thereof. InFIG.9, for example, a user of Suggestion Assistant950on User Computer915has received Content Items965(A1and B1) in response to a search request. Suggestion Assistant950may then provide content item A1to the Suggestion Engine905as an item of interest along with a request for semantically similar content. Suggestion Engine905may then employ any of the suggestion-generation methods discussed above to locate available content items within Content Repository910. For example, for content item A1, Suggestion Engine905may determine that Folders925and930also contain content item A1. And because Folders925and930also contain content item A4, Suggestion Engine905may then determine that content item A4is sufficiently related to content item A1to warrant suggesting content item A4to the requesting user operating User Computer915. Following the same example, if Suggestion Assistant950provides content item B1to the Suggestion Engine905along with a request for related content, Suggestion Engine905may determine that Folder927also contains content item B1. 
And because Folder 927 also contains content items B2 and B6, Suggestion Engine 905 may then determine that content items B2 and B6 are both sufficiently related to content item B1 to warrant suggesting content items B2 and B6 to the requesting user operating User Computer 915. In embodiments, Suggestion Assistant 950 also collects additional information from users and from user interactions with content items, including content items provided to the user as suggestions, and Suggestion Assistant 950 may communicate this information to Suggestion Engine 905. For example, users may supply various preferences and other parameters that the Suggestion Engine 905 may use to provide user-specific suggestions. Suggestion Assistant 950 may also collect and communicate information about the content items a user views, the order in which the user views the content items, the time the user spends viewing each content item, and other metrics or observations pertaining to the user's interactions with content items that may be useful to Suggestion Engine 905 in providing suggested content.

Computing Device

FIG. 10 is a block diagram of an exemplary embodiment of a Computing Device 1000 in accordance with the present invention, which in certain operative embodiments can comprise, for example, the Suggestion Engine 905, the Content Repository 910, User Computer 915 and User Computer 920 of FIG. 9. Computing Device 1000 can comprise any of numerous components, such as for example, one or more Network Interfaces 1010, one or more Memories 1020, one or more Processors 1030 including program Instructions and Logic 1040, one or more Input/Output (I/O) Devices 1050, and one or more User Interfaces 1060 that may be coupled to the I/O Device(s) 1050, etc.
Computing Device 1000 may comprise any device known in the art that is capable of processing data and/or information, such as any general-purpose and/or special-purpose computer, including a personal computer, workstation, server, minicomputer, mainframe, supercomputer, computer terminal, laptop, tablet computer (such as an iPad), wearable computer, mobile terminal, Bluetooth device, communicator, smart phone (such as an iPhone, Android device, or BlackBerry), a programmed microprocessor or microcontroller and/or peripheral integrated circuit elements, an ASIC or other integrated circuit, a hardware electronic logic circuit such as a discrete element circuit, and/or a programmable logic device such as a PLD, PLA, FPGA, or PAL, or the like. In general, any device on which a finite state machine resides that is capable of implementing at least a portion of the methods, structures, API, and/or interfaces described herein may comprise Computing Device 1000. Such a Computing Device 1000 can comprise components such as one or more Network Interfaces 1010, one or more Processors 1030, one or more Memories 1020 containing Instructions and Logic 1040, one or more Input/Output (I/O) Devices 1050, and one or more User Interfaces 1060 coupled to the I/O Devices 1050, etc. Memory 1020 can be any type of apparatus known in the art that is capable of storing analog or digital information, such as instructions and/or data. Examples include a non-volatile memory, volatile memory, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, magnetic media, hard disk, solid state drive, floppy disk, magnetic tape, optical media, optical disk, compact disk (CD), digital versatile disk (DVD), and/or RAID array, etc. The memory device can be coupled to a processor and/or can store instructions adapted to be executed by the processor, such as according to an embodiment disclosed herein.
Input/Output (I/O) Device 1050 may comprise any sensory-oriented input and/or output device known in the art, such as an audio, visual, haptic, olfactory, and/or taste-oriented device, including, for example, a monitor, display, projector, overhead display, keyboard, keypad, mouse, trackball, joystick, gamepad, wheel, touchpad, touch panel, pointing device, microphone, speaker, video camera, camera, scanner, printer, vibrator, tactile simulator, and/or tactile pad, optionally including a communications port for communication with other components in Computing Device 1000. Instructions and Logic 1040 may comprise directions adapted to cause a machine, such as Computing Device 1000, to perform one or more particular activities, operations, or functions. The directions, which can sometimes comprise an entity called a "kernel", "operating system", "program", "application", "utility", "subroutine", "script", "macro", "file", "project", "module", "library", "class", "object", or "Application Programming Interface," etc., can be embodied as machine code, source code, object code, compiled code, assembled code, interpretable code, and/or executable code, etc., in hardware, firmware, and/or software. Instructions and Logic 1040 may reside in Processor 1030 and/or Memory 1020. Network Interface 1010 may comprise any device, system, or subsystem capable of coupling an information device to a network. For example, Network Interface 1010 can comprise a telephone, cellular phone, cellular modem, telephone data modem, fax modem, wireless transceiver, Ethernet circuit, cable modem, digital subscriber line interface, bridge, hub, router, or other similar device. Processor 1030 may comprise a device and/or set of machine-readable instructions for performing one or more predetermined tasks. A processor can comprise any one or a combination of hardware, firmware, and/or software.
A processor can utilize mechanical, pneumatic, hydraulic, electrical, magnetic, optical, informational, chemical, and/or biological principles, signals, and/or inputs to perform the task(s). In certain embodiments, a processor can act upon information by manipulating, analyzing, modifying, converting, transmitting the information for use by an executable procedure and/or an information device, and/or routing the information to an output device. A processor can function as a central processing unit, local controller, remote controller, parallel controller, and/or distributed controller, etc. Unless stated otherwise, the processor can comprise a general-purpose device, such as a microcontroller and/or a microprocessor, such as the Pentium IV series of microprocessors manufactured by the Intel Corporation of Santa Clara, California. In certain embodiments, the processor can be a dedicated-purpose device, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA) that has been designed to implement in its hardware and/or firmware at least a part of an embodiment disclosed herein. User Interface 1060 may comprise any device and/or means for rendering information to a user and/or requesting information from the user. User Interface 1060 may include, for example, at least one of textual, graphical, audio, video, animation, and/or haptic elements. A textual element can be provided, for example, by a printer, monitor, display, projector, etc. A graphical element can be provided, for example, via a monitor, display, projector, and/or visual indication device, such as a light, flag, beacon, etc. An audio element can be provided, for example, via a speaker, microphone, and/or other sound generating and/or receiving device. A video element or animation element can be provided, for example, via a monitor, display, projector, and/or other visual device.
A haptic element can be provided, for example, via a very low frequency speaker, vibrator, tactile stimulator, tactile pad, simulator, keyboard, keypad, mouse, trackball, joystick, gamepad, wheel, touchpad, touch panel, pointing device, and/or other haptic device, etc. A user interface can include one or more textual elements such as, for example, one or more letters, numbers, symbols, etc. A user interface can include one or more graphical elements such as, for example, an image, photograph, drawing, icon, window, title bar, panel, sheet, tab, drawer, matrix, table, form, calendar, outline view, frame, dialog box, static text, text box, list, pick list, pop-up list, pull-down list, menu, tool bar, dock, check box, radio button, hyperlink, browser, button, control, palette, preview panel, color wheel, dial, slider, scroll bar, cursor, status bar, stepper, and/or progress indicator, etc. A textual and/or graphical element can be used for selecting, programming, adjusting, changing, specifying, etc. an appearance, background color, background style, border style, border thickness, foreground color, font, font style, font size, alignment, line spacing, indent, maximum data length, validation, query, cursor type, pointer type, auto-sizing, position, and/or dimension, etc. A user interface can include one or more audio elements such as, for example, a volume control, pitch control, speed control, voice selector, and/or one or more elements for controlling audio play, speed, pause, fast forward, reverse, etc. A user interface can include one or more video elements such as, for example, elements controlling video play, speed, pause, fast forward, reverse, zoom-in, zoom-out, rotate, and/or tilt, etc. A user interface can include one or more animation elements such as, for example, elements controlling animation play, pause, fast forward, reverse, zoom-in, zoom-out, rotate, tilt, color, intensity, speed, frequency, appearance, etc.
A user interface can include one or more haptic elements such as, for example, elements utilizing tactile stimulus, force, pressure, vibration, motion, displacement, temperature, etc. The present invention can be realized in hardware, software, or a combination of hardware and software. The invention can be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suitable. A typical combination of hardware and software can be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein. Although the present disclosure provides certain embodiments and applications, other embodiments apparent to those of ordinary skill in the art, including embodiments that do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. The present invention, as already noted, can be embedded in a computer program product, such as a computer-readable storage medium or device which when loaded into a computer system is able to carry out the different methods described herein. “Computer program” in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or indirectly after either or both of the following: a) conversion to another language, code or notation; or b) reproduction in a different material form. The foregoing disclosure has been set forth merely to illustrate the invention and is not intended to be limiting. 
It will be appreciated that modifications, variations and additional embodiments are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention. Other logic may also be provided as part of the exemplary embodiments but are not included here so as not to obfuscate the present invention. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.
DETAILED DESCRIPTION

Principles of the present disclosure will now be described with reference to some example embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and to help those skilled in the art understand and implement the present disclosure, without suggesting any limitations as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below. In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. As used herein, the singular forms ‘a’, ‘an’ and ‘the’ are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term ‘includes’ and its variants are to be read as open terms that mean ‘includes, but is not limited to.’ The term ‘based on’ is to be read as ‘at least in part based on.’ The terms ‘one embodiment’ and ‘an embodiment’ are to be read as ‘at least one embodiment.’ The term ‘another embodiment’ is to be read as ‘at least one other embodiment.’ The terms ‘first,’ ‘second,’ and the like may refer to different or same objects. Other definitions, explicit and implicit, may be included below. In some examples, values, procedures, or apparatus are referred to as ‘best,’ ‘lowest,’ ‘highest,’ ‘minimum,’ ‘maximum,’ or the like. It will be appreciated that such descriptions are intended to indicate that a selection can be made among many functional alternatives, and such selections need not be better, smaller, higher, or otherwise preferable to other selections. As described above, discovering why and how a thing happened and finding a strategy which enables a desirable thing to happen have become urgent requirements in many fields, such as market research, manufacturing, healthcare, retail and so on.
For example, in the field of marketing research, people want to know what factors affect customer satisfaction with a telecommunication operator and how to improve the customer satisfaction. In the field of product manufacture, people want to know what factors affect product yields and how to improve the product yields. In the field of retail, people want to know what factors affect product sales and how to improve the product sales. In the field of software development, people want to know what factors affect software failure rate and how to reduce the software failure rate. Therefore, it would be desirable to provide a causal analysis system which can discover a causal relationship among a plurality of factors and recommend a strategy to affect a target factor in the plurality of factors based on the causal relationship. Some conventional solutions support causal analysis in a manual way and require a lot of manual interactions to perform the causal analysis, which results in low efficiency and cannot satisfy the above needs in different fields. Embodiments of the present disclosure provide a solution for causal analysis, so as to solve the above problems and/or one or more of other potential problems. In this solution, a causal relationship among a plurality of factors can be automatically discovered from observation samples of the plurality of factors. A causal structure representing the causal relationship can be presented to a user. The user can adjust the causal structure to input some prior knowledge, so as to optimize the discovered causal structure. The user can specify a target factor in the plurality of factors and retrieve one or more key factors that have greatest effects on the target factor from the plurality of factors. Moreover, this solution can evaluate an effect of a strategy which is inputted by the user for affecting the target factor. 
This solution can also recommend to the user an optimal strategy which enables the target factor to reach a desirable value. As used herein, the term "factor" is also referred to as a "variable". The term "observation sample" refers to a set of observation values of a number of factors that can be directly observed, and a factor that can be directly observed is also referred to as an "observable variable" or "observable factor". The term "target factor" refers to a factor that people expect to affect. For example, in the field of marketing research, the observable factors may include factors related to customer attributes (such as, a customer level, a customer phone number, etc.), factors related to customer behaviors (such as, traffic consumed per month, ratio of free traffic, total cost of the traffic consumed per month, etc.), factors related to customer feedback (for example, the number of complaints, customer satisfaction) and factors related to strategies (for example, the number of reminders for a specific event, etc.). The customer satisfaction may be considered as the target factor. As another example, in the field of software development, the observable factors may include an amount of human resources for software development, time duration for software development, the number of functions, the number of code lines, a programming language used for software development, software failure rate, and so on. For example, the software failure rate can be considered as the target factor. An observation sample may include a set of observation values of the observable factors. Some example embodiments of the present disclosure will be described below with reference to the figures. However, those skilled in the art would readily appreciate that the detailed description given herein with respect to these figures is provided only for the purpose of illustration, without suggesting any limitation to the scope of the present disclosure.
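As a concrete illustration of the software-development example above, observation samples might be represented as plain records keyed by factor name. The field names and values below are assumptions for illustration, not data from the disclosure.

```python
observable_factors = [
    "human_resources", "duration_months", "num_functions",
    "num_code_lines", "language", "failure_rate",
]
target_factor = "failure_rate"  # the factor people expect to affect

# Each observation sample is one set of observed values of the observable factors.
samples = [
    {"human_resources": 5, "duration_months": 6, "num_functions": 120,
     "num_code_lines": 40000, "language": "Java", "failure_rate": 0.03},
    {"human_resources": 2, "duration_months": 3, "num_functions": 45,
     "num_code_lines": 9000, "language": "Python", "failure_rate": 0.07},
]

# A well-formed sample supplies a value for every observable factor.
assert all(set(sample) == set(observable_factors) for sample in samples)
```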
FIG. 1A illustrates an example environment 100 in which embodiments of the present invention can be implemented. As shown in FIG. 1A, the environment 100 may include a user 110, a causal analysis server 120 and a data collection device 130. The causal analysis server 120 may include a user interface module 121, a causal analysis engine 122 and a database 123. It is to be understood that the structures of the environment 100 and/or the causal analysis server 120 are shown only for purpose of illustration, without suggesting any limitation to the scope of the present disclosure. Embodiments of the present disclosure may also be applied to a different environment with a different structure and/or a different causal analysis server with different components. In some embodiments, the data collection device 130 may be configured to collect observation samples of a plurality of factors automatically. Each observation sample may include a set of observation values of the plurality of factors. In some embodiments, the data collection device 130 may include one or more sensors for collecting the observation samples. Alternatively, in some embodiments, the data collection device 130 may include one or more collection units for collecting observation values of different types of factors, respectively. In some embodiments, the data collection device 130 may transmit the collected observation samples to the causal analysis server 120 for subsequent storage, processing and/or analysis. For example, the observation samples collected by the data collection device 130 may be transmitted to the causal analysis server 120 via the user input interface module 121. Then, the observation samples may be transmitted from the user input interface module 121 to the causal analysis engine 122 for subsequent storage, processing and/or analysis. For example, the causal analysis engine 122 may discover a causal relationship among the plurality of factors and/or perform causal analysis based on the observation samples.
Alternatively, in some embodiments, the data collection device 130 can be omitted. For example, the observation samples can be inputted to the server 120 by the user 110. In some embodiments, the user 110 can communicate with the causal analysis system 120. For example, the user 110 may input user information, observation samples, one or more requests, useful knowledge and/or one or more configurations for causal analysis to the causal analysis server 120 via the user input interface module 121. The user inputs may be transmitted from the user input interface module 121 to the causal analysis engine 122. In some embodiments, in response to receiving the user inputs, the causal analysis engine 122 may execute one or more actions for causal analysis associated with the user inputs, and present one or more results or feedbacks to the user 110 via the user interface module 121. The causal analysis engine 122 may store the received data, generated structures, expert knowledge and/or any useful information into the database 123 for subsequent use. FIG. 1B illustrates another example environment 105 in which embodiments of the present invention can be implemented. As shown in FIG. 1B, the environment 105 may include the user 110, the data collection device 130 (which is the same as or similar to the data collection device 130 as shown in FIG. 1A), a user device 140 and a causal analysis server 160. For example, the user device 140 can communicate with the causal analysis server 160 via a network 150, such as the Internet. It is to be understood that the structures of the environment 105, the user device 140 and/or the causal analysis server 120 are shown only for purpose of illustration, without suggesting any limitation to the scope of the present disclosure. Embodiments of the present disclosure may also be applied to a different environment, a different user device and/or a different causal analysis server. As used herein, the term "user device" may refer to any device having wireless or wired communication capabilities.
Examples of the user device include, but are not limited to, user equipment (UE), personal computers, desktops, mobile phones, cellular phones, smart phones, personal digital assistants (PDAs), portable computers, image capture devices such as digital cameras, gaming devices, music storage and playback appliances, or Internet appliances enabling wireless or wired Internet access and browsing and the like. As shown in FIG. 1B, for example, the user device 140 may include the user interface module 121 (which is the same as or similar to the user interface module 121 as shown in FIG. 1A) and a local database 141. In some embodiments, the user device 140 may receive, via the user interface module 121, the observation samples from the data collection device 130, and/or receive, via the user interface module 121, the user inputs from the user 110. The user device 140 may store the received observation samples, data, expert knowledge, and/or useful information at the local database 141 for subsequent use. The user device 140 may further transmit the received observation samples, data and/or information to the causal analysis server 160 via the network 150 for subsequent processing and/or analysis. As shown in FIG. 1B, for example, the causal analysis server 160 may include the causal analysis engine 122 (which is the same as or similar to the causal analysis engine 122 as shown in FIG. 1A) and a database 161. In some embodiments, in response to receiving the observation samples of the plurality of factors, the causal analysis engine 122 may discover a causal relationship among the plurality of factors and/or perform causal analysis based on the observation samples.
In response to receiving user inputs (such as, user information, observation samples, one or more requests, useful knowledge and/or one or more configurations for causal analysis), the causal analysis engine 122 may execute one or more actions for causal analysis associated with the user inputs and transmit one or more results or feedbacks back to the user device 140. The causal analysis engine 122 may store the received data, generated structures, expert knowledge and/or any useful information into the database 161 for subsequent use. The user device 140 may present the one or more results or feedbacks to the user 110 via the user interface module 121. FIG. 2A illustrates a general system 200 for causal analysis in accordance with some embodiments of the present disclosure. As shown in FIG. 2A, the user interface module 121 may receive one or more inputs 201 from the user 110 and/or the data collection device 130. The user interface module 121 may transmit the one or more inputs 201 to the causal analysis engine 122. The causal analysis engine 122 may perform actions associated with the one or more inputs 201. The causal analysis engine 122 may generate one or more outputs 202 by performing the actions. Alternatively, or in addition, the causal analysis engine 122 may transmit the one or more outputs 202 back to the user interface module 121 so as to present them to the user 110. FIG. 2B illustrates an example block diagram of the user interface module 121 in accordance with some embodiments of the present disclosure. As shown in FIG. 2B, the user interface module 121 may include at least one of a data input interface 210, a causal structure discovery interface 220, a causal structure evaluation interface 230, a causal graph management interface 240, and a strategy management interface 250. It is to be understood that the interfaces shown in FIG. 2B are illustrated only for the purpose of illustration, without suggesting any limitation to the scope of the present disclosure.
The user interface module 121 may provide any suitable number of interfaces adapted for implementing embodiments of the present disclosure. For example, in some embodiments, the user interface module 121 may also provide a login interface which allows the user 110 to log in or log out of the causal analysis engine 122. In some embodiments, the data input interface 210 may allow the user 110 or the data collection device 130 to prepare data (such as, observation samples of a plurality of factors) in a format supported by the causal analysis engine 122. The data input interface 210 may also allow the user 110 to translate sensitive information in the data into non-sensitive information. As shown in FIG. 2B, in some embodiments, the data input interface 210 may provide a data upload interface 211, which allows the user 110 or the data collection device 130 to upload the data (such as, the observation samples of the plurality of factors). The uploaded data may then be transmitted to the causal analysis engine 122. Alternatively, or in addition, in some embodiments, the data input interface 210 may also provide a pre-processing method selection interface 212, which allows the user 110 to select a data preprocessing method from one or more data preprocessing methods supported by the causal analysis engine 122, which may help to improve the data quality. In some embodiments, the causal structure discovery interface 220 may provide a target factor selection interface 221, which allows the user 110 to specify the target factor (such as, the customer satisfaction, the product yields, the software failure rate, etc.) in the plurality of factors. Alternatively, or in addition, in some embodiments, the causal structure discovery interface 220 may also provide a discovery algorithm selection interface 222. The discovery algorithm selection interface 222 may present a group of causal discovery algorithms supported by the causal analysis engine 122 to the user 110 for selection.
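A dispatch from dataset kind to algorithm family might look like the sketch below. The algorithm names are well-known examples from the causal discovery literature (the PC algorithm, LiNGAM), not algorithms the disclosure itself names, and the dispatch logic is an assumption for illustration.

```python
def select_algorithm(factor_kinds):
    """Pick a discovery algorithm family from the kinds of factors observed.

    factor_kinds maps each factor name to "discrete" or "continuous".
    """
    kinds = set(factor_kinds.values())
    if kinds == {"discrete"}:
        return "PC with discrete conditional-independence tests"
    if kinds == {"continuous"}:
        return "LiNGAM (linear non-Gaussian acyclic model)"
    return "score-based search with mixed-data tests"

# A dataset with both discrete and continuous factors falls into the mixed case.
print(select_algorithm({"customer_level": "discrete", "traffic_cost": "continuous"}))
```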
For example, different causal discovery algorithms may be applicable for different kinds of datasets, such as, discrete data, continuous data, mixed data, and so on. In some embodiments, the discovery algorithm selection interface 222 may allow the user 110 to select, from the group of causal discovery algorithms, a suitable causal discovery algorithm to be used in the following discovery of a causal structure. Alternatively, or in addition, in some embodiments, the causal structure discovery interface 220 may also provide a hyper parameter adjustment interface 223, which allows the user 110 to adjust some hyper parameters of the selected causal discovery algorithm, so as to improve the speed and/or accuracy of the causal structure discovery. Alternatively, or in addition, in some embodiments, the causal structure discovery interface 220 may also provide an expert knowledge input interface 224, which allows the user 110 to input expert knowledge about causality among the plurality of factors, so as to improve the speed and/or accuracy of the causal structure discovery. Examples of the expert knowledge may include, but are not limited to: there is direct causality between two factors; there is no direct causality between two factors; one factor is an indirect cause of another factor; a set of factors are not a cause of another set of factors; and so on. The inputted expert knowledge may be stored at a database for subsequent use. Alternatively, or in addition, in some embodiments, the causal structure discovery interface 220 may also provide a causal structure simplification interface 225, which allows the user 110 to initiate an independence test to optimize the discovered causal structure, for example, to delete some unreasonable causal relations from the discovered causal structure.
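The kinds of expert knowledge listed above can be encoded as simple constraints on candidate causal edges. The sketch below is a minimal illustration under that assumption; the edge names and the `satisfies_expert_knowledge` helper are hypothetical, not part of the disclosure.

```python
required_edges = {("price", "sales")}   # expert: direct causality is known to exist
forbidden_edges = {("sales", "price")}  # expert: no causality in this direction

def satisfies_expert_knowledge(candidate_edges):
    """Check a candidate causal structure against the stored expert constraints."""
    edges = set(candidate_edges)
    return required_edges <= edges and not (forbidden_edges & edges)

print(satisfies_expert_knowledge([("price", "sales"), ("ads", "sales")]))    # True
print(satisfies_expert_knowledge([("sales", "price"), ("price", "sales")]))  # False
```

A discovery algorithm can use such a check to prune candidate structures early, which is one way expert knowledge improves the speed and accuracy of the search.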
In some embodiments, the causal structure evaluation interface 230 may allow the user 110 to initiate evaluations of the discovered causal structure under a variety of evaluation metrics and/or evaluation methodologies, so as to identify the fitness of the discovered causal structure to the observation samples of the plurality of factors. In some embodiments, the causal structure evaluation interface 230 may provide an evaluation metrics/methodology selection interface 231, which allows the user 110 to select an evaluation metric and/or an evaluation methodology to be used for evaluating the discovered causal structure. In some embodiments, the discovered causal structure may be presented as a graph, which is also referred to as a "causal graph" in the following. For example, the causal graph may include a plurality of nodes corresponding to the plurality of factors and one or more edges connecting the plurality of nodes. An edge connecting two nodes may indicate causality between two factors corresponding to the two nodes, which is also referred to as a "causal edge" in the following. In some embodiments, the causal graph management interface 240 may provide a causal path search selection interface 241, which allows the user 110 to select any two factors from the plurality of factors and initiate a search for causal paths between the selected two factors. Alternatively, or in addition, in some embodiments, the causal graph management interface 240 may also provide a causal graph editing interface 242, which allows the user 110 to edit the presented causal graph to input some expert knowledge for optimizing the causal graph.
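The causal path search described above amounts to enumerating directed paths between the two selected factors. A minimal sketch over an adjacency-list causal graph follows; the graph contents are illustrative assumptions, not factors from the disclosure.

```python
def causal_paths(graph, source, target, path=None):
    """Yield every directed path from source to target in an acyclic causal graph."""
    path = (path or []) + [source]
    if source == target:
        yield path
        return
    for child in graph.get(source, []):
        yield from causal_paths(graph, child, target, path)

# Adjacency list: each node maps to the nodes it directly causes.
graph = {"price": ["demand"], "ads": ["demand", "sales"], "demand": ["sales"]}
print(list(causal_paths(graph, "ads", "sales")))
# [['ads', 'demand', 'sales'], ['ads', 'sales']]
```

Presenting such paths lets the user see both the direct causal edge and any mediated routes between the two selected factors.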
In some embodiments, the editing performed by the user 110 on the causal graph may include any of the following: adding an edge to the causal graph for indicating direct causality between two nodes; removing an existing edge from the causal graph for indicating no direct causality between two nodes; redirecting an existing edge in the causal graph for redirecting causality between two nodes; or adding one or more labels to the causal graph for indicating some prior knowledge. The expert knowledge may then be used for optimizing the discovered causal graph. In some embodiments, if the expert knowledge conflicts with that stored previously, an indication of the conflict may be presented to the user 110 via the causal graph management interface 240 (such as, the causal graph editing interface 242). Alternatively, or in addition, in some embodiments, the causal graph management interface 240 may also provide a factor combination selection interface 243, which allows the user 110 to enable or disable a factor combination operation on the discovered causal graph. For example, the factor combination operation may combine two or more factors in the discovered causal graph into one factor, so as to optimize or simplify the discovered causal graph. The factor combination operation may be performed based on confirmatory factor analysis (CFA) or explorative factor analysis (EFA). In some embodiments, a factor combination selection interface same as or similar to the factor combination selection interface 243 may also be provided by the causal structure discovery interface 220, such that the factor combination operation can be performed prior to the causal structure being discovered, in order to facilitate the discovery of the causal structure.
Alternatively, or in addition, in some embodiments, the causal graph management interface240may also provide a key factor analysis interface244, which allows the user110to select a target factor and input the number of key factors affecting the target factor to be retrieved. The key factor analysis interface244may then present, to the user110, the key factors that affect the target factor. For example, the key factors may be ranked according to their causal effects on the target factor. In some embodiments, the strategy management interface250may provide a strategy selection/control interface251, which allows the user110to input constraints on one or more factors, such as, the sales volume of a product exceeding an expected sales volume while the price of the product falls within a range from 5 dollars to 9 dollars. The strategy selection/control interface251may then automatically present one or more control strategies satisfying those constraints, as well as present respective effects of these control strategies. Alternatively, or in addition, the strategy management interface250may also provide a strategy evaluation interface252, which allows the user110to input one or more strategies for evaluation. For example, a strategy inputted by the user110may indicate values of at least one factor affecting the target factor. The strategy evaluation interface252may then present respective effects of these strategies if they are carried out, and allow the user110to select the optimal strategy according to the presented effects. It is to be understood that each interface in the user interface module121as described above may interact with a corresponding module or unit in the causal analysis engine122. Example modules or units in the causal analysis engine122will be described with reference toFIGS.2C-2Ein the following. FIG.2Cillustrates a block diagram of an example causal analysis engine122in accordance with some embodiments of the present disclosure.
As shown inFIG.2C, for example, the causal analysis engine122may include a data processing module260, a causal structure discovery module270and a causal analysis module280. It is to be understood that the modules of the causal analysis engine122are shown only for purpose of illustration, without suggesting any limitation to the scope of the present disclosure. In some embodiments, the causal analysis engine122may include additional modules and/or omit some module as shown. For example, in some embodiments, the data processing module260may be omitted. In some embodiments, the data processing module260may receive observation data (such as, the observation samples of the plurality of factors) from the data input interface210and perform a data pre-processing on the received observation data. The data processing module260may also receive information from the causal structure discovery interface220and perform further processing to optimize the factors for which a causal structure is to be discovered. Example function units in the data processing module260will be described with reference toFIG.2Din the following. FIG.2Dillustrates a block diagram of an example data processing module260in accordance with some embodiments of the present disclosure. As shown inFIG.2D, for example, the data processing module260may include at least one of a data pre-processing unit261, a factor engineering unit262and a factor shrinkage unit263. It is to be understood that the units of the data processing module260are shown only for purpose of illustration, without suggesting any limitation to the scope of the present disclosure. In some embodiments, the data processing module260may include additional units and/or omit some unit as shown. For example, in some embodiments, the factor engineering unit262and/or the factor shrinkage unit263may be omitted.
In some embodiments, the data (such as, the observation samples of the plurality of factors) uploaded via the data input interface210may be provided to the data pre-processing unit261for data pre-processing. In some embodiments, the data pre-processing unit261may provide a data cleaning function which may process and clean noisy data that is not in a reasonable range (for example, age is 200, a price discount is 1.2, etc.). In some embodiments, the data pre-processing unit261may provide several methods to fill in a missing value in the data, such as, using a mean value, a nearby value, a predicted value or the like to fill in the missing value in the data. In some embodiments, the data pre-processing unit261may provide a data filtering function which may automatically remove observation samples/variables with a missing ratio exceeding a threshold set by the user110. Alternatively, or in addition, in some embodiments, the data pre-processing unit261may provide a data statistic function which may perform statistics on the uploaded data, such as, calculating the maximum, minimum, mean, or variance value for each observable variable, calculating a missing ratio for each observable variable and so on. The preprocessed data can also be stored in a database (such as, the database123as shown inFIG.1Aor the database161as shown inFIG.1B) for subsequent use. In some embodiments, the factor engineering unit262may analyze characteristics of the plurality of factors based on the observation samples and optimize the plurality of original factors into a group of new factors. These new factors can reflect the characteristics of the original factors, such as, change rates of the original factors in a certain time period or on a certain dimension, so as to facilitate the discovery of the causal relationship/structure. It is to be understood that, in some embodiments, the factor engineering unit262can be omitted.
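The cleaning, imputation, and filtering steps described for the data pre-processing unit261can be sketched for a single observable variable. This is a minimal illustration under assumptions: the reasonable range, the mean-imputation choice, and the missing-ratio threshold are all parameters the user110would supply, and the function names are hypothetical.

```python
# Sketch: range-based cleaning, data filtering by missing ratio, and
# mean imputation for one observable variable (a list of values,
# with None marking a missing observation).

def preprocess(values, lo, hi, max_missing_ratio=0.5):
    """Clean one variable: out-of-range values become missing; the
    variable is dropped entirely if too much is missing; otherwise
    remaining gaps are filled with the mean of the present values."""
    cleaned = [v if v is not None and lo <= v <= hi else None
               for v in values]
    missing = sum(v is None for v in cleaned)
    if missing / len(cleaned) > max_missing_ratio:
        return None  # removed by the data-filtering step
    present = [v for v in cleaned if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in cleaned]

# e.g. an "age" variable where 200 falls outside the reasonable
# range [0, 120] and is treated as noise
ages = [25, 200, None, 35]
print(preprocess(ages, 0, 120))  # → [25, 30.0, 30.0, 35]
```

The same unit could instead fill gaps with a nearby or predicted value, as the description notes; only the imputation line would change.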
In some embodiments, as described above, the causal structure discovery interface220(such as, the target factor selection interface221) may allow the user110to specify the target factor (such as, the customer satisfaction, the product yields, the software failure rate, etc.) in the plurality of factors. The factor shrinkage unit263may receive an indication of the target factor from the causal structure discovery interface220and use some analysis technology to delete, from the plurality of factors, one or more factors which are unlikely to be a cause of the target factor, so as to improve the efficiency of the following discovery of the causal relationship/structure. It is to be understood that, in some embodiments, the factor shrinkage unit263can be omitted. With reference back toFIG.2C, in some embodiments, the causal structure discovery module270may discover, from the observation samples of the plurality of factors, a causal relationship/structure among the plurality of factors. Example function units in the causal structure discovery module270will be described with reference toFIG.2Din the following. FIG.2Dillustrates a block diagram of an example causal structure discovery module270in accordance with some embodiments of the present disclosure. As shown inFIG.2D, for example, the causal structure discovery module270may include at least one of a causal structure discovery unit271and a causal structure simplification unit272. It is to be understood that the units of the causal structure discovery module270are shown only for purpose of illustration, without suggesting any limitation to the scope of the present disclosure. In some embodiments, the causal structure discovery module270may include additional units and/or omit some unit as shown. For example, in some embodiments, the causal structure simplification unit272may be omitted. 
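The description leaves the factor shrinkage unit263's "analysis technology" open; one plausible heuristic, shown purely as an assumed example, is to drop factors whose absolute Pearson correlation with the target falls below a threshold, on the reasoning that such factors are unlikely to be causes of the target.

```python
# Sketch of one possible factor-shrinkage heuristic (an assumption,
# not the patented method): keep only factors whose absolute Pearson
# correlation with the target meets a threshold.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def shrink_factors(samples, target, threshold=0.2):
    """`samples` maps each factor name to its observation values."""
    y = samples[target]
    return [f for f, x in samples.items()
            if f != target and abs(pearson(x, y)) >= threshold]

samples = {
    "sales": [1.0, 2.0, 3.0, 4.0],
    "price": [4.0, 3.0, 2.0, 1.0],    # strongly (negatively) related
    "noise": [1.0, -1.0, -1.0, 1.0],  # uncorrelated with sales
}
print(shrink_factors(samples, "sales"))  # → ['price']
```

Correlation is of course only a screen, not a causal criterion; the point of the unit is merely to cut the search space before the discovery step.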
In some embodiments, as described above, the causal structure discovery interface220may allow the user110to select, from a group of causal discovery algorithms, a suitable causal discovery algorithm to be used in the discovery of the causal relationship. Alternatively, or in addition, in some embodiments, the causal structure discovery interface220may also allow the user110to adjust some hyper parameters of the selected causal discovery algorithm, so as to improve the speed and/or accuracy of the causal analysis. Alternatively, or in addition, in some embodiments, the causal structure discovery interface220may also allow the user110to input expert knowledge about causality among the plurality of factors, so as to improve the speed and/or accuracy of the causal structure discovery. Indications of the selected causal discovery algorithm, the adjusted hyper parameters and/or the expert knowledge may be provided to the causal structure discovery module270. In some embodiments, the causal structure discovery module270may discover, from the observation samples of the plurality of factors, a causal relationship among the plurality of factors based on the selected causal discovery algorithm, the adjusted hyper parameters and/or the expert knowledge. The causal structure discovery module270may generate a causal structure representing the discovered causal relationship. In some embodiments, the generated causal structure can be presented in different visual forms, such as, a form, a causal graph, or so on. In some embodiments, the generated causal structure may be presented as a causal graph. For example, the causal graph may include a plurality of nodes corresponding to the plurality of factors and one or more causal edges connecting the plurality of nodes. 
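The selection plumbing described above (algorithm choice, hyper parameters, expert knowledge) might be organized as a registry of discovery callables. The algorithm names and placeholder bodies below are hypothetical; the patent does not name specific algorithms, and real implementations would perform an actual search rather than return the expert edges.

```python
# Sketch: a registry mapping each selectable causal discovery
# algorithm to a callable that accepts user-adjusted hyper parameters
# and optional expert knowledge about edges.

def discover_pc(samples, alpha=0.05, expert_edges=()):
    # placeholder for a constraint-based search; here the forced
    # expert edges are simply returned as the result
    return set(expert_edges)

def discover_score(samples, penalty=1.0, expert_edges=()):
    # placeholder for a score-based search
    return set(expert_edges)

ALGORITHMS = {"pc": discover_pc, "score-based": discover_score}

def run_discovery(name, samples, hyper_params, expert_edges=()):
    algo = ALGORITHMS[name]
    return algo(samples, expert_edges=expert_edges, **hyper_params)

graph = run_discovery("pc", samples={}, hyper_params={"alpha": 0.01},
                      expert_edges={("price", "sales")})
print(graph)  # {('price', 'sales')}
```

The value of this shape is that the causal structure discovery interface220only needs to forward a name plus a parameter dict, and new algorithms can be registered without changing the interface.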
In some embodiments, as described above, the user110may initiate an independence test to optimize the discovered causal structure via the causal structure discovery interface220(such as, the causal structure simplification interface225). In some embodiments, in this case, the causal structure simplification unit272may receive an indication from the causal structure simplification interface225and apply an independence test technique to optimize the generated causal graph, such as, to delete some unreasonable causal edges from the generated causal graph. In some embodiments, the generated and/or optimized causal graph can be provided to the causal structure discovery interface220for presentation to the user110. Additionally, the generated and/or optimized causal graph may also be stored in a database (such as, the database123as shown inFIG.1Aor the database161as shown inFIG.1B) for subsequent use. With reference back toFIG.2C, in some embodiments, the causal analysis module280may perform actions for causal analysis based on one or more user inputs via the causal structure evaluation interface230, the causal graph management interface240and/or the strategy management interface250. Example function units in the causal analysis module280will be described with reference toFIG.2Ein the following. FIG.2Eillustrates a block diagram of an example causal analysis module280in accordance with some embodiments of the present disclosure. As shown inFIG.2E, for example, the causal analysis module280may include a causal structure evaluation unit281which may interact with the causal structure evaluation interface230, a graph analysis unit282which may interact with the causal graph management interface240and a strategy unit283which may interact with the strategy management interface250. For example, the graph analysis unit282may include a causal path search function291, a causal graph editing function292, a factor combination function293and a key factor analysis function294.
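The edge-pruning step performed by the causal structure simplification unit272might be sketched as follows, under a strong simplifying assumption: an edge is treated as unreasonable and deleted when its two endpoint factors look unconditionally independent, approximated here by a small absolute Pearson correlation. A real independence test would also condition on other factors; the threshold and data are hypothetical.

```python
# Sketch: remove causal edges whose endpoints appear (unconditionally)
# independent in the observation samples.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def simplify(edges, samples, min_abs_corr=0.3):
    """Keep only edges whose endpoints are clearly dependent."""
    return {(c, e) for (c, e) in edges
            if abs(pearson(samples[c], samples[e])) >= min_abs_corr}

samples = {"a": [1.0, 2.0, 3.0, 4.0],
           "b": [2.0, 4.0, 6.0, 8.0],    # clearly dependent on a
           "c": [1.0, -1.0, -1.0, 1.0]}  # unrelated to a
edges = {("a", "b"), ("a", "c")}
print(simplify(edges, samples))  # {('a', 'b')}
```

The simplified edge set is what would then be rendered for the user110and stored for subsequent use.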
The strategy unit283may include a strategy control/evaluation function295and a strategy prescription function296. It is to be understood that the units or functions in the causal analysis module280are shown only for purpose of illustration, without suggesting any limitation to the scope of the present disclosure. In some embodiments, the causal analysis module280may include additional units or functions, and/or omit some unit or function as shown. For example, in some embodiments, the factor combination function293may be omitted. In some embodiments, as described above, the causal structure evaluation interface230allows the user110to initiate evaluations of the discovered causal structure under a variety of evaluation metrics and/or evaluation methodologies, so as to identify the fitness of the discovered causal structure to the observation samples of the plurality of factors. For example, the evaluation metrics/methodology selection interface231may allow the user110to select an evaluation metric and/or an evaluation methodology to be used for evaluating the discovered causal structure. The evaluation metric may be an absolute metric or a relative metric. Examples of the absolute metric may include, but are not limited to, Root Mean Square Error of Approximation (RMSEA), Standardized Root Mean Square Residual (SRMR), Bayesian information criterion (BIC), and so on. RMSEA is related to the residuals in the model. RMSEA values range from 0 to 1 with a lower RMSEA value indicating better model fit. For example, acceptable model fitness may be indicated by an RMSEA value of 0.05 or less. SRMR is an overall badness-of-fit measure that is based on the fitted residuals. An SRMR close to zero may indicate a good fit. A rule of thumb is that the SRMR should be less than 0.05 for a good fit, whereas values smaller than 0.10 may be interpreted as acceptable. BIC is a score that balances data fit against model sparsity.
For example, the model with the lowest BIC is preferred. Examples of the relative metric may include, but are not limited to, Comparative Fit Index (CFI), Non-normed Fit Index (NNFI) or Tucker-Lewis Index (TLI), and so on. CFI is equal to the discrepancy function adjusted for sample size. CFI ranges from 0 to 1 with a larger value indicating better model fit. A rule of thumb for this index is that 0.97 is indicative of a good fit relative to the independence model, while values greater than 0.95 may be interpreted as an acceptable fit. NNFI and TLI (two names for the same index) have values ranging from 0 to 1, with a higher value indicating better fit. A value of this index greater than 0.97 is indicative of a good fit relative to the independence model, whereas values greater than 0.95 may be interpreted as an acceptable fit. In some embodiments, an indication of the selected evaluation metric and/or evaluation methodology may be provided to the causal structure evaluation unit281. The causal structure evaluation unit281may evaluate the discovered causal structure under the selected evaluation metric and/or evaluation methodology, so as to identify the fitness of the discovered causal structure to the observation samples of the plurality of factors. The causal structure evaluation unit281may provide a result of the evaluation to the causal structure evaluation interface230for presentation to the user110. In some embodiments, the graph analysis unit282which includes at least one of the causal path search function291, the causal graph editing function292, the factor combination function293and the key factor analysis function294may interact with the causal graph management interface240. As described above, the causal graph management interface240(such as, the causal path search selection interface241) may allow the user110to select any two factors from the plurality of factors and initiate a search for causal paths between the selected two factors.
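Of the metrics above, BIC is the simplest to make concrete. A common form (an assumption here; the description does not give the formula) is BIC = k·ln(n) − 2·ln(L̂), where k is the number of free parameters, n the sample size, and L̂ the maximized likelihood, with lower values preferred.

```python
# Sketch: the common BIC formula, showing how a sparser model with a
# slightly worse likelihood can still score better.

import math

def bic(log_likelihood, num_params, num_samples):
    return num_params * math.log(num_samples) - 2.0 * log_likelihood

# hypothetical fits of a dense and a sparse causal model
dense = bic(log_likelihood=-100.0, num_params=20, num_samples=1000)
sparse = bic(log_likelihood=-103.0, num_params=5, num_samples=1000)
print(sparse < dense)  # → True
```

This is the sense in which BIC "considers the balance of data fitting and model sparsity": the k·ln(n) term penalizes extra edges unless they buy enough likelihood.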
In some embodiments, an indication of the selected factors may be provided to the causal path search function291. The causal path search function291may search the discovered causal structure (such as, the causal graph) for causal paths between the selected two factors. The causal path search function291may provide the causal paths to the causal graph management interface240for presentation to the user110. As described above, in some embodiments, the causal graph management interface240(such as, the causal graph editing interface242) may allow the user110to edit the presented causal graph to input some expert knowledge for optimizing the causal graph. In some embodiments, the editing performed by the user110on the causal graph may include any of the following: adding an edge to the causal graph for indicating direct causality between two nodes; removing an existing edge from the causal graph for indicating no direct causality between two nodes; redirecting an existing edge in the causal graph for redirecting causality between two nodes; or adding one or more labels to the causal graph for indicating some expert knowledge. The expert knowledge indicated by the editing on the causal graph may be compared with the expert knowledge stored previously. In some embodiments, if there is a conflict, an indication of the conflict may be presented to the user110via the causal graph management interface240(such as, the causal graph editing interface242). In some embodiments, if there is no conflict, the expert knowledge indicated by the editing on the causal graph may be stored at a database for subsequent use. In addition, the expert knowledge indicated by the editing on the causal graph may be provided to the graph analysis unit282(such as, the causal graph editing function292). 
In some embodiments, the graph analysis unit282may re-discover the causal relationship/structure among the plurality of factors based on the expert knowledge and the observation samples of the plurality of factors and regenerate a further causal structure (such as, a further causal graph) representing the re-discovered causal relationship. The regenerated causal structure may integrate the expert knowledge and reflect the editing performed on the initial causal graph. For example, the regenerated causal structure can be provided to the causal graph management interface240for presentation to the user110. Additionally, the regenerated causal structure/graph may also be stored in a database (such as, the database123as shown inFIG.1Aor the database161as shown inFIG.1B) for subsequent use. As described above, in some embodiments, the causal graph management interface240(such as, the factor combination selection interface243) may allow the user110to enable or disable a factor combination operation on the discovered causal graph. An indication for enabling or disabling the factor combination operation may be provided to the graph analysis unit282(such as, the factor combination function293). The factor combination function293may perform the factor combination operation by combining two or more factors in the discovered causal graph into one factor, so as to optimize or simplify the discovered causal graph. The factor combination operation may be performed based on confirmatory factor analysis (CFA) or explorative factor analysis (EFA). The optimized or simplified causal graph may be provided to the causal graph management interface240for presentation to the user110. Additionally, the optimized or simplified causal structure/graph may also be stored in a database (such as, the database123as shown inFIG.1Aor the database161as shown inFIG.1B) for subsequent use. 
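The graph side of the factor combination operation can be sketched independently of the CFA/EFA statistics that decide which factors to merge. In this assumed illustration, two factors are collapsed into one combined node and every edge touching either original factor is rewired to the new node; the factor names are hypothetical.

```python
# Sketch: merge a group of factors into one combined factor, rewiring
# edges and dropping self-loops and duplicates.

def combine_factors(edges, group, new_name):
    """`edges` is a set of (cause, effect) pairs; `group` holds the
    factors to merge under the combined name `new_name`."""
    ren = lambda f: new_name if f in group else f
    return {(ren(c), ren(e)) for (c, e) in edges if ren(c) != ren(e)}

edges = {("ads", "traffic"), ("ads", "brand"),
         ("brand", "sales"), ("traffic", "sales")}
print(sorted(combine_factors(edges, {"traffic", "brand"}, "awareness")))
# → [('ads', 'awareness'), ('awareness', 'sales')]
```

The four original edges collapse to two, which is exactly the simplification the factor combination function293is described as providing.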
As described above, in some embodiments, the causal graph management interface240(such as, the key factor analysis interface244) may allow the user110to select a target factor and input the number of key factors affecting the target factor to be retrieved. The target factor and the number of the key factors may be indicated to the graph analysis unit282(such as, the key factor analysis function294). In some embodiments, the key factor analysis function294may search the causal graph for those factors affecting the target factor. Each factor may be assigned a score to reflect its importance to the target factor. The key factor analysis function294may provide the key factors as well as their causal effects on the target factor to the causal graph management interface240for presentation to the user110. In some embodiments, for example, the causal graph management interface240may highlight one or more nodes corresponding to the key factors on the causal graph. Alternatively, or in addition, the causal graph management interface240may also present visual representations (such as, text, numbers, progress bars, pie chart, bar chart, etc.) of importance of the key factors. In some embodiments, the strategy unit283which includes the strategy control/evaluation function295and the strategy prescription function296may interact with the strategy management interface250. As described above, in some embodiments, the strategy management interface250(such as, the strategy selection/control interface251) may allow the user110to input constraints on one or more factors, such as, the sales volume of a product exceeding an expected sales volume while the price of the product falls within a range from 5 dollars to 9 dollars. The constraints on the one or more factors may be provided to the strategy unit283(such as, the strategy prescription function296).
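The key factor analysis above can be sketched with an assumed scoring rule: each ancestor of the target accumulates a score equal to the per-edge effect strengths multiplied along each path to the target, and the top-N factors are returned ranked. The edge strengths and factor names are hypothetical, and the graph is assumed acyclic.

```python
# Sketch: rank the factors affecting a target by an accumulated
# path-product score over an acyclic causal graph.

def key_factors(effects, target, top_n):
    """`effects` maps (cause, effect) -> strength of the direct edge."""
    score = {}

    def visit(node, weight):
        for (c, e), w in effects.items():
            if e == node:
                score[c] = score.get(c, 0.0) + weight * w
                visit(c, weight * w)  # propagate through ancestors

    visit(target, 1.0)
    ranked = sorted(score.items(), key=lambda kv: -kv[1])
    return ranked[:top_n]

effects = {("price", "sales"): 0.6, ("traffic", "sales"): 0.3,
           ("promotion", "traffic"): 0.5}
print(key_factors(effects, "sales", 2))
# → [('price', 0.6), ('traffic', 0.3)]
```

The ranked scores are exactly what the interface could render as progress bars or a bar chart, with the top-ranked nodes highlighted on the causal graph.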
In some embodiments, the strategy prescription function296may determine one or more strategies satisfying the constraints based on the causal graph. In some embodiments, if the strategy prescription function296is unable to find a strategy satisfying all of the constraints, the strategy prescription function296may try to find one or more strategies which can satisfy at least a part of the constraints. In some embodiments, the strategy prescription function296may find one or more strategies which can cause a predicted value of the target factor (such as, the sales volume of the product) to approach the expected sales volume (such as, a difference between the predicted sales volume of the product and the expected sales volume is below a threshold). The strategy prescription function296may provide the determined one or more strategies as well as respective effects of these strategies to the strategy management interface250for presentation to the user110. The strategy management interface250may allow the user110to select the optimal strategy according to the presented effects. As described above, in some embodiments, the strategy management interface250(such as, the strategy evaluation interface252) may allow the user110to input one or more strategies for evaluation. For example, a strategy inputted by the user110may indicate values of at least one factor affecting the target factor. The inputted strategy may be provided to the strategy unit283(such as, the strategy control/evaluation function295). In some embodiments, the strategy control/evaluation function295may execute a simulation to predict a value of the target factor based on the causal graph and the values of the at least one factor indicated by the strategy. The strategy control/evaluation function295may provide the predicted value of the target factor to the strategy management interface250for presentation to the user110. 
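Strategy evaluation and prescription can be sketched over an assumed linear structural model; the description leaves the simulation model open, so the fitted equation, the coefficients, and the grid-search policy below are all hypothetical. The example mirrors the earlier constraint: find a price in the allowed [5, 9] dollar range whose predicted sales meet the expected volume.

```python
# Sketch: predict the target factor under an intervention, and search
# the allowed price range for strategies satisfying the constraint.

def predict_sales(price, promotion):
    # hypothetical fitted structural equation for the target factor
    return 100.0 - 6.0 * price + 8.0 * promotion

def prescribe(expected_sales, promotion, price_range=(5, 9)):
    lo, hi = price_range
    candidates = [lo + 0.5 * i for i in range(int((hi - lo) / 0.5) + 1)]
    feasible = [(p, predict_sales(p, promotion)) for p in candidates
                if predict_sales(p, promotion) >= expected_sales]
    # prefer the highest feasible price (an arbitrary tie-break policy)
    return max(feasible) if feasible else None

print(predict_sales(price=7.0, promotion=2.0))     # → 74.0
print(prescribe(expected_sales=80.0, promotion=2.0))  # → (6.0, 80.0)
```

When no candidate satisfies every constraint, the function returns None, which corresponds to the fallback described above of relaxing to strategies that satisfy only part of the constraints or that bring the prediction within a threshold of the goal.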
In this way, the user110can foresee an effect of the strategy if the strategy is carried out. The interactions between the user interface module121and the causal analysis engine122are summarized inFIG.3. As shown inFIG.3and as described above with reference toFIGS.2B-2E, the data input interface210may interact with the data processing module260. The causal structure discovery interface220may interact with the data processing module260and/or the causal structure discovery module270. The observation data processed by the data processing module260may be provided to the causal structure discovery module270. The causal structure discovered by the causal structure discovery module270may be provided to the causal analysis module280which includes the causal structure evaluation unit281, the graph analysis unit282and the strategy unit283. As shown inFIG.3and as described above with reference toFIGS.2B-2E, the causal structure evaluation interface230may interact with the causal structure evaluation unit281in the causal analysis module280. The causal graph management interface240may interact with the graph analysis unit282in the causal analysis module280. The strategy management interface250may interact with the strategy unit283in the causal analysis module280. In some embodiments, the causal analysis engine122may further include a display control module (not shown in figures). The display control module may control the display of the discovered causal structure (such as, the causal graph) in response to an operation of the user110. 
The display control module may be configured to perform at least one of the following actions:
(1) indicating causal importance of a factor on the target factor by changing at least one of a size and a color of the factor;
(2) indicating causal importance between related factors by changing at least one of thicknesses and colors of edges (or arrows) associated with the factors;
(3) indicating whether the target factor is selected or not by changing the shape of the target factor in the causal graph;
(4) presenting a chart in which a factor with higher overall importance is ranked on top of another factor with lower overall importance;
(5) relocating factors in a specific shape (for example, a circle) to show a density of causality among the factors;
(6) shuffling factors in the causal graph to show a simplified graph having shorter edges (or arrows) among factors according to causal importance;
(7) indicating a factor with an animation (e.g. blinking) when the user110selects a name of the factor;
(8) indicating factors having direct causal relations with a selected factor and edges (or arrows) representing the direct causal relations while hiding other factors in response to a predetermined operation of the user110(for example, selecting the factor and keeping pressing the factor for a period);
(9) keeping edges (or arrows) representing causal relations connected and moving the edges (or arrows) in response to the user110moving one or more factors by dragging and dropping;
(10) indicating a description of a factor in response to the user110selecting the factor and hovering on the factor for a period;
(11) controlling showing and hiding of causal importance associated with an edge (or an arrow) on the causal graph;
(12) controlling showing and hiding of at least some of edges (or arrows) on the causal graph according to respective causal importance associated with the edges (or arrows); and so on.
It is to be understood that a corresponding operation interface may be included in the user interface module121. The operation interface may be used by the user to trigger execution of at least one of the above actions. FIG.4illustrates an example method400in accordance with some embodiments of the present disclosure. The method400can be implemented by the causal analysis system200as shown inFIG.2A. In some embodiments, for example, the method400can be implemented at the causal analysis server120as shown inFIG.1A. Alternatively, in some embodiments, for example, the method400can be implemented at the user device140and the causal analysis server160as shown inFIG.1B. It is to be understood that the method400may include additional blocks not shown and/or may omit some shown blocks, and the scope of the present disclosure is not limited in this regard. At block410, a first causal structure indicating a first causal relationship among a plurality of factors is determined from observation samples of the plurality of factors, each observation sample including a set of observation values of the plurality of factors. In some embodiments, as described above, the user110or the data collection device130may upload the observation samples of the plurality of factors via the data input interface210(such as, the data upload interface211). For example, each of the observation samples may include a set of observation values of the plurality of factors. In some embodiments, the uploaded observation samples of the plurality of factors can be processed by the data processing module260(such as, one or more of the data pre-processing unit261, the factor engineering unit262and the factor shrinkage unit263). The causal structure discovery module270(such as, the causal structure discovery unit271) may determine, from the observation samples of the plurality of factors, the first causal structure indicating the first causal relationship among the plurality of factors. 
In some embodiments, as described above, the causal structure discovery interface220may allow the user110to select, from a group of causal discovery algorithms, a suitable causal discovery algorithm to be used in the discovery of the causal relationship. Alternatively, or in addition, the causal structure discovery interface220may also allow the user110to adjust some hyper parameters of the selected causal discovery algorithm, so as to improve the speed and/or accuracy of the causal analysis. Alternatively, or in addition, the causal structure discovery interface220may also allow the user110to input expert knowledge about causality among the plurality of factors, so as to improve the speed and/or accuracy of the causal structure discovery. In some embodiments, the causal structure discovery module270(such as, the causal structure discovery unit271) may discover, from the observation samples of the plurality of factors, the first causal relationship among the plurality of factors based on the selected causal discovery algorithm, the adjusted hyper parameters and/or the expert knowledge. In some embodiments, as described above, the user110may initiate an independence test to optimize the discovered causal structure via the causal structure discovery interface220(such as, the causal structure simplification interface225). In some embodiments, the causal structure discovery module270(such as, the causal structure simplification unit272) may receive an indication from the causal structure simplification interface225and apply an independence test technique to optimize or simplify the generated causal structure, such as, to delete some unreasonable causal relations from the generated causal structure. At block420, the first causal structure is presented to the user110. The generated causal structure can be presented in different visual forms, such as, a form, a causal graph, or so on. In some embodiments, the first causal structure may be presented as a causal graph.
For example, the causal graph may include a plurality of nodes corresponding to the plurality of factors and one or more causal edges connecting the plurality of nodes. In the following, the phrases “causal structure”, “causal graph” and “causal relationship” can be used interchangeably. It is to be understood that this is merely for the purpose of illustration, without suggesting any limitation to the scope of the present disclosure. FIG.5Aillustrates an example causal graph510in accordance with some embodiments of the present disclosure. As shown inFIG.5A, the causal graph510includes a plurality of nodes501,502. . .506corresponding to a plurality of factors. For the purpose of description, in the following, the node501may also be referred to as “factor501”; the node502may also be referred to as “factor502” . . . the node506may also be referred to as “factor506”. It is to be understood that the number of factors in the causal graph510is provided only for the purpose of illustration, without suggesting any limitation to the scope of the present disclosure. The causal graph in accordance with embodiments of the present disclosure can include any suitable number of nodes or factors. It is also to be understood that in different fields, the factor501,502. . . or506may have different meanings. For example, in the field of marketing research, the factor501,502. . . or506may include any of the following: a customer level, a customer phone number, traffic consumed per month, ratio of free traffic, total cost of the traffic consumed per month, the number of complaints, customer satisfaction and so on. In the field of software development, the factor501,502. . . or506may include any of the following: an amount of human resources for software development, time duration for software development, the number of functions, the number of code lines, a programming language used for software development, software failure rate, and so on. 
As shown in FIG. 5A, the causal graph 510 also includes a plurality of causal edges 511, 512 . . . 516 connecting the plurality of nodes 501, 502 . . . 506. For example, the edge 511 pointing from the node 501 to the node 503 may indicate that the factor 501 is a direct cause of the factor 503; the edge 512 pointing from the node 502 to the node 503 may indicate that the factor 502 is a direct cause of the factor 503 . . . the edge 516 pointing from the node 505 to the node 506 may indicate that the factor 505 is a direct cause of the factor 506. In some embodiments, a causal edge in the causal graph 510 may have different colors. For example, if the edge 511 is of a first color (such as red), it means that the value of the factor 503 may increase as the value of the factor 501 increases. If the edge 511 is of a second color (such as blue) different from the first color, it means that the value of the factor 503 may decrease as the value of the factor 501 increases. At block 430, it is determined whether at least one user input about the first causal structure is received from the user 110. In response to the at least one user input being received, at block 440, actions associated with the at least one user input are executed based on the first causal structure. Then, at block 450, a result of the execution of the actions is presented to the user 110. In some embodiments, the at least one user input may comprise an edit operation performed on the first causal structure (such as the causal graph) by the user 110. As described above, for example, the causal graph management interface 143 may allow the user 110 to edit the presented causal structure (such as the causal graph) to input prior knowledge for optimizing the discovered causal structure.
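The signed, directed structure described for causal graph 510 can be captured in a very small in-memory representation. In the sketch below, each directed edge (cause, effect) carries a sign standing in for the red/blue coloring: +1 means the effect's value rises as the cause's value rises, -1 means it falls. Only edges 511 (501 to 503), 512 (502 to 503) and 516 (505 to 506) are stated in the text; the remaining edges and all signs are hypothetical placeholders.

```python
# Hypothetical adjacency representation of causal graph 510.
causal_graph = {
    "nodes": [501, 502, 503, 504, 505, 506],
    "edges": {
        (501, 503): +1,   # edge 511: factor 501 is a direct cause of 503
        (502, 503): +1,   # edge 512: factor 502 is a direct cause of 503
        (503, 504): +1,   # assumed edge
        (503, 505): +1,   # assumed edge
        (504, 505): +1,   # assumed edge
        (505, 506): -1,   # edge 516, drawn here in the "second color"
    },
}

def direct_causes(graph, node):
    """All factors with an edge pointing into `node`."""
    return sorted(c for (c, e) in graph["edges"] if e == node)
```

For example, `direct_causes(causal_graph, 503)` returns the two direct causes of factor 503 under this assumed edge set.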
In some embodiments, the editing performed by the user 110 on the causal graph may include any of the following: adding an edge to the causal graph to indicate direct causality between two nodes; removing an existing edge from the causal graph to indicate no direct causality between two nodes; redirecting an existing edge in the causal graph to reverse the causality between two nodes; and adding one or more labels to the causal graph to indicate prior knowledge. In some embodiments, the plurality of nodes may comprise a first node (such as the node 501 in FIG. 5A) corresponding to a first factor from the plurality of factors and a second node (such as the node 503 in FIG. 5A) corresponding to a second factor from the plurality of factors, and the at least one edge may comprise a first edge (such as the edge 511 in FIG. 5A) pointing from the first node to the second node for indicating that the first factor is a direct cause of the second factor. In some embodiments, the edit operation performed by the user 110 on the causal graph may include removing the first edge from the causal graph, so as to indicate that the first factor is not a direct cause of the second factor. Alternatively, or in addition, in some embodiments, the edit operation performed by the user 110 on the causal graph may include redirecting the first edge to point from the second node to the first node (such as redirecting the edge 511 to point from the node 503 to the node 501), so as to indicate that the second factor is a direct cause of the first factor. Alternatively, or in addition, in some embodiments, the plurality of nodes may comprise a third node (such as the node 502 in FIG. 5A) corresponding to a third factor from the plurality of factors and a fourth node (such as the node 506 in FIG. 5A) corresponding to a fourth factor from the plurality of factors.
In some embodiments, the edit operation performed by the user 110 on the causal graph may include adding a second edge pointing from the third node to the fourth node to the causal graph, so as to indicate that the third factor is a direct cause of the fourth factor. Alternatively, or in addition, in some embodiments, the edit operation performed by the user 110 on the causal graph may include adding a first label associated with the third node and the fourth node to the causal graph, so as to indicate that the third factor is an indirect cause of the fourth factor. Alternatively, or in addition, in some embodiments, the plurality of nodes may comprise a first set of nodes corresponding to a first set of factors from the plurality of factors and a second set of nodes corresponding to a second set of factors from the plurality of factors. In some embodiments, the edit operation performed by the user 110 on the causal graph may include adding a second label associated with the first set of nodes and the second set of nodes to the causal graph, so as to indicate that the first set of factors is not a cause of the second set of factors. In some embodiments, in response to the edit operation being performed by the user 110, prior information for optimizing the first causal structure may be determined from the edit operation. A second causal relationship among the plurality of factors, which is different from the first causal relationship, may be determined based on the prior information and the observation samples of the plurality of factors. Then, a second causal structure representing the second causal relationship can be presented to the user 110. For example, the second causal structure may integrate the prior information and reflect the editing performed on the first causal structure. In some embodiments, the at least one user input may comprise a first request to retrieve a first number of factors affecting a target factor from the plurality of factors.
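The three structural edit operations described for the causal graph (add an edge, remove an edge, redirect an edge) can be sketched as small mutations on an edge set. The function names and the concrete graph are illustrative assumptions; they only show how each user edit maps onto the stored structure.

```python
def add_edge(graph, cause, effect):
    """User asserts: `cause` is a direct cause of `effect`."""
    graph["edges"].add((cause, effect))

def remove_edge(graph, cause, effect):
    """User asserts: `cause` is NOT a direct cause of `effect`."""
    graph["edges"].discard((cause, effect))

def redirect_edge(graph, cause, effect):
    """User reverses an existing edge so the former effect becomes the cause."""
    if (cause, effect) in graph["edges"]:
        graph["edges"].remove((cause, effect))
        graph["edges"].add((effect, cause))

# Hypothetical starting graph (node/edge numbers follow FIG. 5A loosely).
graph = {"nodes": {501, 502, 503, 506}, "edges": {(501, 503), (502, 503)}}

remove_edge(graph, 501, 503)    # factor 501 is not a direct cause of 503
redirect_edge(graph, 502, 503)  # factor 503 is a direct cause of 502
add_edge(graph, 502, 506)       # factor 502 is a direct cause of 506
```

Each mutation becomes prior information that a re-run of the discovery algorithm can take as a constraint when producing the second causal structure.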
For example, the first request may indicate the target factor and the first number (that is, the number of key factors to be retrieved) to the causal analysis system 200. As described above, for example, the causal graph management interface 240 (such as the key factor analysis interface 244) may allow the user 110 to select a target factor and input the number of key factors affecting the target factor to be retrieved. In some embodiments, in response to receiving the first request, the causal graph management interface 240 (such as the key factor analysis interface 244) may determine the target factor and the first number (that is, the number of key factors to be retrieved) from the first request. The target factor and the number of the key factors may be indicated to the graph analysis unit 282 (such as the key factor analysis function 294). In some embodiments, the graph analysis unit 282 (such as the key factor analysis function 294) may determine, from the plurality of factors, at least one factor affecting the target factor based on the first causal structure. For example, the at least one factor may include a factor which is a direct cause or an indirect cause of the target factor. The graph analysis unit 282 (such as the key factor analysis function 294) may estimate respective causal effects of the at least one factor on the target factor based on the observation samples and the first causal structure. The graph analysis unit 282 (such as the key factor analysis function 294) may rank the at least one factor based on the estimated causal effects (for example, from high to low) and select the first number of key factors (which have the greatest causal effects on the target factor) based on a result of the ranking. In some embodiments, the first number of factors may correspond to the first number of nodes from the plurality of nodes in the causal graph. The causal graph management interface 240 may highlight the first number of nodes in the causal graph.
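The rank-and-select step performed by the key factor analysis function 294 reduces to sorting candidate causes by the magnitude of their estimated effect and taking the top entries. The effect values below are invented for illustration (they are not taken from the figures); the helper shows only the selection logic, not the effect estimation itself.

```python
def top_key_factors(effects, k):
    """Rank candidate causes by the magnitude of their estimated causal
    effect on the target and return the k strongest."""
    return sorted(effects, key=lambda f: abs(effects[f]), reverse=True)[:k]

# Hypothetical estimated overall effects on a target factor 506:
estimated_effects = {501: 0.08, 503: 0.31, 504: -0.12, 505: 0.52}

key_factors = top_key_factors(estimated_effects, 2)
```

With these assumed numbers the two key factors are 505 and 503, in that order, matching the idea in FIG. 5B of factor 505 being drawn larger than factor 503.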
Alternatively, or in addition, the causal graph management interface 240 may present visual representations indicating causal effects of the first number of factors on the target factor to the user 110. FIG. 5B illustrates the example causal graph 510 showing the key factors affecting the target factor in accordance with some embodiments of the present disclosure. As shown in FIG. 5B, two key factors 503 and 505 which have the greatest effects on the target factor 506 are highlighted on the causal graph 510. In particular, the node 505 is shown bigger than the node 503, which indicates that the causal effect of the factor 505 on the target factor 506 (that is, the importance of the factor 505) exceeds the causal effect of the factor 503 on the target factor 506 (that is, the importance of the factor 503). Alternatively, in some embodiments, other visual representations (such as text, numbers, progress bars, pie charts, bar charts, etc.) can be used to show respective causal effects of the key factors on the target factor. In some embodiments, the at least one user input may comprise a second request to obtain a strategy that enables a target factor from the plurality of factors to reach an expected value. For example, the second request may indicate the target factor and the expected value of the target factor to the causal analysis system 200. As described above, for example, the strategy management interface 250 (such as the strategy selection/control interface 251) may allow the user 110 to input constraints on one or more factors, such as the sales volume of a product exceeding an expected sales volume while the price of the product falls within a range from 5 dollars to 9 dollars. In some embodiments, in response to receiving the second request, the strategy management interface 250 (such as the strategy selection/control interface 251) may determine the target factor and the expected value of the target factor from the second request.
The target factor and the expected value of the target factor may be indicated to the strategy unit 283 (such as the strategy prescription function 296). In some embodiments, the strategy prescription function 296 may determine one or more strategies satisfying the constraints based on the causal graph. In some embodiments, if the strategy prescription function 296 is unable to find a strategy satisfying all of the constraints, the strategy prescription function 296 may try to find one or more strategies which can satisfy at least a part of the constraints. In some embodiments, the strategy prescription function 296 may find one or more strategies which can cause a predicted value of the target factor (such as the sales volume of the product) to approach the expected value (for example, such that a difference between the predicted sales volume of the product and the expected sales volume is below a threshold). The strategy prescription function 296 may provide the determined one or more strategies as well as their respective effects (such as predicted values of the target factor if these strategies are carried out) to the strategy management interface 250 for presentation to the user 110. The strategy management interface 250 may allow the user 110 to select the optimal strategy according to the presented effects. In some embodiments, the at least one user input may comprise a third request to initiate an evaluation of a strategy about a target factor from the plurality of factors. For example, the third request may indicate the target factor to the causal analysis system 200. In some embodiments, the third request may be received by the strategy management interface 250 (such as the strategy evaluation interface 252). In some embodiments, in response to receiving the third request, the strategy management interface 250 (such as the strategy evaluation interface 252) may determine the target factor from the third request.
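The prescription behavior of the strategy prescription function 296 can be sketched as a constrained search: enumerate candidate settings of the controllable factors, predict the target under each, and keep the strategies whose predicted value falls within a threshold of the expected value, best first. The prediction model here is a toy linear stand-in for the causal-graph simulation, and every number (coefficients, ranges, threshold) is an illustrative assumption.

```python
import itertools

def predict_target(price, promo):
    """Toy stand-in for simulation over the causal graph: predicted
    sales volume from two controllable factors (assumed coefficients)."""
    return 100.0 - 6.0 * price + 15.0 * promo

def prescribe(expected, price_range, promo_range, threshold):
    """Search candidate strategies; keep those whose predicted target
    value is within `threshold` of the expected value, best first."""
    found = []
    for price, promo in itertools.product(price_range, promo_range):
        predicted = predict_target(price, promo)
        if abs(predicted - expected) <= threshold:
            found.append(((price, promo), predicted))
    found.sort(key=lambda s: abs(s[1] - expected))  # smallest gap first
    return found

# Expected sales volume 80, with the price constrained to 5-9 dollars
# as in the example above; promo levels 0-2 are assumed.
strategies = prescribe(expected=80.0,
                       price_range=[5, 6, 7, 8, 9],
                       promo_range=[0, 1, 2],
                       threshold=5.0)
```

Under these assumed numbers the best strategy is price 6 with promo level 1, whose predicted volume 79.0 is closest to the expected 80; presenting the whole ranked list mirrors letting the user 110 pick the optimal strategy from the shown effects.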
The strategy management interface 250 (such as the strategy evaluation interface 252) may provide an indication of the target factor to the strategy unit 283 (such as the strategy control/evaluation function 295). In some embodiments, the strategy control/evaluation function 295 may determine, from the plurality of factors and based on the first causal structure, at least one factor affecting the target factor and generate a sub-structure of the first causal structure based on the target factor and the at least one factor. In some embodiments, for example, the sub-structure may be represented as a sub-graph of the causal graph, which may comprise a set of nodes corresponding to the target factor and the at least one factor and one or more edges connecting the set of nodes. In some embodiments, the strategy control/evaluation function 295 may provide the sub-structure (such as the sub-graph) of the first causal structure to the strategy management interface 250 (such as the strategy evaluation interface 252) for presentation to the user 110, such that the user 110 can input one or more strategies for evaluation based on the presented sub-structure. FIG. 5C illustrates an example sub-graph 520 of the causal graph 510 in accordance with some embodiments of the present disclosure. As shown in FIG. 5C, the third request received from the user 110 for initiating an evaluation of a strategy may indicate that the target factor is the factor 506. In some embodiments, the third request may also indicate additional information about the at least one factor to be shown in the sub-graph. For example, the third request may indicate that a distance (that is, the number of causal edges) from each of the at least one factor to the target factor should not exceed a threshold (for example, 2 in FIG. 5C). As shown in FIG. 5C, the determined at least one factor affecting the target factor includes three factors 503, 504 and 505.
It can be seen that the distance from each of the three nodes 503, 504 and 505 to the node 506 does not exceed 2. In particular, FIG. 5C also shows respective values of the three factors 503, 504 and 505 and the target factor 506. For example, the values of the factors 503, 504, 505 and 506 are shown as "50.03", "50.01", "50.05" and "50.08", respectively. In this way, the user 110 can edit the values of one or more of the nodes 503, 504 and 505 to input a control strategy affecting the target factor 506 for evaluation. In some embodiments, the strategy management interface 250 (such as the strategy evaluation interface 252) may further receive a strategy for evaluation from the user 110, which is inputted based on the presented sub-structure (such as the sub-graph 520). As described above, for example, the strategy management interface 250 (such as the strategy evaluation interface 252) may allow the user 110 to input one or more strategies for evaluation. For example, a strategy inputted by the user 110 may indicate values of at least one factor affecting the target factor. The inputted strategy may be provided to the strategy unit 283 (such as the strategy control/evaluation function 295). In some embodiments, the strategy control/evaluation function 295 may execute a simulation to predict a value of the target factor based on the causal graph and the values of the at least one factor indicated by the strategy. The strategy control/evaluation function 295 may provide the predicted value of the target factor to the strategy management interface 250 for presentation to the user 110 as a result of the evaluation of the strategy. In this way, the user 110 can foresee an effect of the strategy if the strategy is carried out. FIGS. 5D and 5E illustrate examples of evaluations of different strategies for affecting the target factor in accordance with some embodiments of the present disclosure. As shown in FIG. 5D, for example, the user 110 may change the value of the factor 503 from "50.03" as shown in FIG. 5C to "80".
The strategy control/evaluation function 295 may predict, based on the causal relationship, values of the factors 504, 505 and 506 that are affected by the factor 503. For example, the predicted value of the factor 504 is "53.04", which is different from its original value "50.01" as shown in FIG. 5C. The predicted value of the factor 505 is "70.89", which is different from its original value "50.05" as shown in FIG. 5C. The predicted value of the target factor 506 is "65.62", which is different from its original value "50.08" as shown in FIG. 5C. The predicted values can be presented to the user 110 as a result of the evaluation. As shown in FIG. 5E, for example, the user 110 may further change the value of the factor 504 from "53.04" as shown in FIG. 5D to "70". The strategy control/evaluation function 295 may predict, based on the causal relationship, a value of the factor 506 that is affected by the factor 504. For example, the predicted value of the factor 506 is "70.79", which is different from "65.62" as shown in FIG. 5D. In particular, since the value of the factor 504 is controlled by the user 110, the factor 504 is no longer affected by the factor 503. Therefore, as shown in FIG. 5E, the causal edge 513, which indicates that the factor 503 is a direct cause of the factor 504, is removed from the sub-graph 520. FIG. 6 illustrates an example method 600 for locating key factors affecting a target factor in accordance with some embodiments of the present disclosure. The method 600 can be implemented at the causal analysis engine 122 as shown in FIGS. 1A-1B, 2A and/or 2C. In some embodiments, for example, the method 600 can be implemented by the key factor analysis function 294 of the causal analysis module 280 in the causal analysis engine 122. At block 610, the causal analysis engine 122 may obtain observation samples of a plurality of factors and a causal structure which indicates a causal relationship among the plurality of factors.
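The what-if evaluation walked through for FIGS. 5D and 5E can be sketched as a linear simulation over the graph: factors are evaluated in causal order, and a factor whose value the user pins keeps that value while its incoming edges are ignored, which is exactly why edge 513 disappears in FIG. 5E. The edge coefficients, intercepts, and resulting numbers below are illustrative assumptions and do not reproduce the figures' values.

```python
# Assumed edge weights: (cause, effect) -> coefficient.
coef = {
    (503, 504): 0.6,
    (503, 505): 0.9,
    (504, 506): 0.4,
    (505, 506): 0.5,
}
base = {503: 50.0, 504: 20.0, 505: 5.0, 506: 5.0}  # hypothetical intercepts
order = [503, 504, 505, 506]                        # topological order

def simulate(interventions):
    """Propagate values in causal order; a pinned node keeps its value
    and its incoming edges are ignored (a do()-style intervention)."""
    values = {}
    for node in order:
        if node in interventions:
            values[node] = interventions[node]
        else:
            values[node] = base[node] + sum(
                coef[(c, e)] * values[c]
                for (c, e) in coef if e == node)
    return values

before = simulate({})                      # observational run, all ~50
after = simulate({503: 80.0})              # user sets factor 503 to 80
both = simulate({503: 80.0, 504: 70.0})    # then also pins factor 504
```

In `both`, factor 504 stays at 70 regardless of factor 503, mirroring the removed edge 513, while the target 506 still responds to the change through factor 505.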
In some embodiments, the observation samples of the plurality of factors may be received via the user interface module 121 and stored in a database (such as the database 123 as shown in FIG. 1A or the database 161 as shown in FIG. 1B). The causal structure can be discovered by the causal analysis engine 122 (such as the causal structure discovery module 270) and stored in the database. That is, the causal analysis engine 122 may obtain the observation samples of the plurality of factors and the causal structure from the database. Alternatively, in some embodiments, the causal analysis engine 122 may obtain the observation samples of the plurality of factors from the user interface module 121 and obtain the causal structure by discovering the causal structure from the observation samples. At block 620, in response to a target factor being identified in the plurality of factors, the causal analysis engine 122 may determine, from the plurality of factors, at least one factor affecting the target factor based on the causal structure. At block 630, the causal analysis engine 122 may estimate, for each of the at least one factor, an overall causal effect of the factor on the target factor based on the observation samples and the causal structure. As used herein, the "overall causal effect" may refer to a sum of direct causal effects and indirect causal effects of the factor on the target factor. In some embodiments, the causal analysis engine 122 may estimate the overall causal effect of the factor on the target factor based on a causal effect estimation algorithm. It is to be understood that the causal effect estimation algorithm can be any estimation algorithm or estimator currently known or to be developed in the future. In some embodiments, the causal analysis engine 122 may determine, from the causal structure, one or more causal paths between the factor and the target factor.
The causal analysis engine 122 may further estimate, for each of the one or more causal paths, a causal effect of the factor on the target factor. The causal analysis engine 122 may then determine a sum of the causal effects for the one or more causal paths as the overall causal effect of the factor on the target factor. FIG. 7 illustrates an example of determining an overall causal effect of a cause factor on a target factor in accordance with some embodiments of the present disclosure. As shown in FIG. 7, a causal structure 700 may include factors 701, 702 . . . 706. The factor 705 is identified as the target factor. It is assumed that an overall causal effect of the factor 702 on the target factor 705 is to be determined. The causal analysis engine 122 may first identify causal paths between the factor 702 and the target factor 705. For example, the causal paths between the factor 702 and the target factor 705 include: (1) factor 702→factor 705; (2) factor 702→factor 706→factor 705; (3) factor 702→factor 701→factor 706→factor 705; and (4) factor 702→factor 703→factor 704→factor 705. The causal analysis engine 122 may estimate, for the above four causal paths, respective causal effects of the factor 702 on the target factor 705. Then, the causal analysis engine 122 may sum up the estimated causal effects to derive the overall causal effect of the factor 702 on the target factor 705. With reference back to FIG. 6, at block 640, the causal analysis engine 122 may rank the at least one factor based on the estimated overall causal effects of the at least one factor on the target factor, so as to obtain a sequence of key factors which affect the target factor. In some embodiments, the overall causal effect of a cause factor on the target factor may be estimated as a positive value or a negative value.
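The path-based estimation in the FIG. 7 example can be sketched for the linear case, where the effect along one causal path is the product of its edge coefficients and the overall effect is the sum over all directed paths from the cause to the target. The edge set matches the four paths listed above; the coefficient values are illustrative assumptions, and note that the sum can come out positive or negative depending on their signs.

```python
# Assumed coefficients on the FIG. 7 edges, (cause, effect) -> weight.
coef = {
    (702, 705): 0.20,
    (702, 706): 0.50, (706, 705): 0.40,
    (702, 701): 0.30, (701, 706): 0.60,
    (702, 703): 0.70, (703, 704): 0.50, (704, 705): 0.10,
}

def all_paths(edges, start, goal, path=None):
    """Enumerate every directed path from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for (c, e) in edges:
        if c == start and e not in path:  # acyclic walk
            paths.extend(all_paths(edges, e, goal, path))
    return paths

def overall_effect(edges, cause, target):
    """Sum of per-path effects; each path effect is the product of
    the coefficients along its edges (linear-model path tracing)."""
    total = 0.0
    for p in all_paths(edges, cause, target):
        effect = 1.0
        for c, e in zip(p, p[1:]):
            effect *= edges[(c, e)]
        total += effect
    return total

paths = all_paths(coef, 702, 705)       # the four paths from 702 to 705
effect = overall_effect(coef, 702, 705)
```

With these assumed weights the four path effects are 0.20, 0.20, 0.072 and 0.035, giving an overall effect of 0.507; block 640 would then rank cause factors by such totals.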
For example, a positive value may indicate that the observation value of the target factor may increase as the value of the cause factor increases, while a negative value may indicate that the observation value of the target factor may decrease as the value of the cause factor increases. In some embodiments, the causal analysis engine 122 may determine respective absolute values of the overall causal effects of the at least one factor on the target factor, and then rank the at least one factor based on the determined absolute values. A general process 800 for causal analysis in accordance with some embodiments of the present disclosure is summarized in FIG. 8. As shown in FIG. 8, the general process 800 may include one or more actions 810 for data collection (such as collection of observation samples), one or more actions 820 for data input (such as uploading the observation samples), one or more actions 830 for data processing (such as data pre-processing, factor engineering and/or factor shrinkage), one or more actions 840 for causal relationship/structure discovery, one or more actions 850 for outputting the discovered causal relationship/structure, one or more actions 860 for causal analysis, and one or more actions 870 for executing a strategy. The process 800 can be executed more than once. It is to be understood that the process 800 may include additional actions not shown and/or may omit some shown actions. It is also to be understood that the process 800 can be implemented by a single physical device or by a plurality of physical devices. The scope of the present disclosure is not limited in this regard. In view of the above, it can be seen that embodiments of the present disclosure enable automatic discovery of a causal relationship among a plurality of factors. A causal structure representing the causal relationship can be presented to a user. The user can adjust the causal structure to input prior knowledge, so as to optimize the discovered causal relationship.
Key factors affecting the target factor can be located in the plurality of factors. Moreover, embodiments of the present disclosure can evaluate an effect of a strategy which is inputted by the user for affecting the target factor. Embodiments of the present disclosure can also recommend one or more optimal strategies to the user. FIG. 9 illustrates a schematic block diagram of a device 900 that can be used to implement the embodiments of the present disclosure. For example, the causal analysis server 120 as shown in FIG. 1A, the user device 140 or the causal analysis server 160 as shown in FIG. 1B, and/or the causal analysis engine 122 as shown in FIGS. 1A-1B, 2A and/or 2C can be implemented by the device 900. As shown in FIG. 9, the device 900 includes a central processing unit (CPU) 901 which may perform various appropriate actions and processing based on computer program instructions stored in the read-only memory (ROM) 902 or computer program instructions loaded from a storage unit 908 into the random access memory (RAM) 903. The RAM 903 further stores various programs and data needed for the operation of the device 900. The CPU 901, the ROM 902 and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904. The following components in the device 900 are connected to the I/O interface 905: an input unit 906, such as a keyboard, a mouse, and the like; an output unit 907, such as displays of various types and loudspeakers; a storage unit 908, such as a magnetic disk and an optical disk; and a communication unit 909, such as a network card, a modem, or a wireless communication transceiver. The communication unit 909 allows the device 900 to exchange data/information with other devices via computer networks, such as the Internet and/or telecommunication networks. The methods or processes described above, such as the methods 400, 600 and/or the process 800, can be executed by the processing unit 901.
For example, in some implementations, the methods 400, 600 and/or the process 800 can be implemented as a computer software program which is tangibly embodied in a machine-readable medium, such as the storage unit 908. In some implementations, the computer program can be partially or wholly loaded and/or installed on the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the CPU 901, one or more steps of the methods 400, 600 and/or the process 800 described above can be executed. The present disclosure may be a system, an apparatus, a device, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, snippet, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The descriptions of various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
11861509 | DETAILED DESCRIPTION The disclosure presented in the following written description, and the various features and advantageous details thereof, are explained more fully with reference to the non-limiting examples included in the accompanying drawings and as detailed in the description. Descriptions of well-known components have been omitted so as not to unnecessarily obscure the principal features described herein. The examples used in the following description are intended to facilitate an understanding of the ways in which the disclosure can be implemented and practiced. A person of ordinary skill in the art would read this disclosure to mean that any suitable combination of the functionality or exemplary embodiments below could be combined to achieve the subject matter claimed. The disclosure includes either a representative number of species falling within the scope of the genus or structural features common to the members of the genus so that one of ordinary skill in the art can recognize the members of the genus. Accordingly, these examples should not be construed as limiting the scope of the claims. A person of ordinary skill in the art would understand that any system claims presented herein encompass all of the elements and limitations disclosed therein, and as such, require that each system claim be viewed as a whole. Any reasonably foreseeable items functionally related to the claims are also relevant. Pursuant to MPEP § 904, the Examiner, after having obtained a thorough understanding of the invention disclosed and claimed in the nonprovisional application, has searched the prior art as disclosed in patents and other published documents, i.e., nonpatent literature. Therefore, as evidenced by the issuance of this patent, the prior art fails to disclose or teach the elements and limitations presented in the claims as enabled by the specification and drawings, such that the presented claims are patentable under 35 U.S.C. §§ 101, 102, 103, and 112.
FIG.1illustrates a schematic view of an automated workflow system100, in accordance with one or more embodiments of the present disclosure. The system100can include one or more servers102having one or more processors104, a memory134, machine readable instructions106, including a file collection module108, message identification module110, log collection module112, information parsing module114, log download module116, automation initializing module118, automation workflow module120, extraction module122, analysis module124, event watch module126, and automation production module128, among other relevant modules. The server102can be operably coupled to one or more clients via a network140. The clients can be a physical device (e.g., mobile phone150, laptop152, external sensors154, desktop computer, wearable device, or other suitable device), program, or application. In another embodiment, a client can include a mobile phone150having a mobile application configured to communicate with the server102over the network140. The aforementioned system components (e.g., server(s)102and client(s)150,152,154,156, etc.) can be communicably coupled to each other via the network140, such that data can be transmitted. The network140can be the Internet, intranet, or other suitable network. The data transmission can be encrypted, unencrypted, over a virtual private network (VPN) tunnel, or other suitable communication means. The network140can be a wide area network (WAN), local area network (LAN), personal area network (PAN), or other suitable network type. The network communication between the clients, server102, or any other system component can be encrypted using pretty good privacy (PGP), Blowfish, Twofish, triple data encryption standard (3DES), hypertext transfer protocol secure (HTTPS), or other suitable encryption. 
The system100can be configured to provide communication via the various systems, components, and modules disclosed herein via an application programming interface (API), peripheral component interface (PCI), PCI-Express, American National Standards Institute (ANSI)-X12, Ethernet, Wi-Fi, Bluetooth, or other suitable communication protocol or medium. Additionally, third party systems and databases can be operably coupled to the system components via the network140. The data transmitted to and from the components of system100(e.g., the server102and clients), can include any format, including JavaScript Object Notation (JSON), transfer control protocol (TCP)/internet protocol (IP), extensible markup language (XML), hypertext markup language (HTML), American Standard Code for Information Interchange (ASCII), short message service (SMS), comma-separated value (CSV), representational state transfer (REST), or other suitable format. The data transmission can include a message, flag, header, header properties, metadata, and/or a body, or be encapsulated and packetized by any suitable format having same. The server(s)102can be implemented in hardware, software, or a suitable combination of hardware and software therefor, and may comprise one or more software systems operating on one or more servers, having one or more processors104, with access to memory134. Server(s)102can include electronic storage, one or more processors, and/or other components. Server(s)102can include communication lines, connections, and/or ports to enable the exchange of information via a network140and/or other computing platforms. Server(s)102can also include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server(s)102. For example, server(s)102can be implemented by a cloud of computing platforms operating together as server(s)102, including Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) functionality. 
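As a concrete illustration of the message framing described above, the sketch below (Python) packages an event notification as a JSON message with a header, metadata, and body; all field names here are hypothetical and chosen for illustration, not taken from the disclosure:

```python
import json
from datetime import datetime, timezone

def build_event_message(event_type: str, payload: dict) -> str:
    """Wrap a payload in an envelope with a header, metadata, and body,
    one of the packaging styles the disclosure contemplates."""
    envelope = {
        "header": {
            "format": "JSON",      # one of the formats named in the disclosure
            "type": event_type,    # e.g. an enforcement-event notification
        },
        "metadata": {
            "created": datetime.now(timezone.utc).isoformat(),
        },
        "body": payload,
    }
    return json.dumps(envelope)

# Hypothetical usage: notify the server of a PTC brake event.
msg = build_event_message("enforcement_event",
                          {"train_id": "T-100", "event": "PTC brake"})
decoded = json.loads(msg)
```

The same envelope could equally be serialized as XML or CSV per the formats listed above; JSON is used here only because it round-trips cleanly in a short example.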
Additionally, the server(s)102can include memory134. Memory134can comprise electronic storage that can include non-transitory storage media that electronically stores information. The electronic storage media of electronic storage can include one or both of system storage that can be provided integrally (e.g., substantially non-removable) with server(s)102and/or removable storage that can be removably connectable to server(s)102via, for example, a port (e.g., a Universal Serial Bus (USB) port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., erasable electronic programmable read only memory (EEPROM), random access memory (RAM), etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storage can include a database, or public or private distributed ledger (e.g., blockchain). Electronic storage can store machine-readable instructions106, software algorithms, control logic, data generated by processor(s), data received from server(s), data received from computing platform(s), and/or other data that can enable server(s) to function as described herein. The electronic storage can also include third-party databases accessible via the network140. Processor(s)104can be configured to provide data processing capabilities in server(s)102. 
As such, processor(s)104can include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information, such as field programmable gate arrays (FPGAs) or application specific integrated circuits (ASICs). The processor(s)104can be a single entity or include a plurality of processing units. These processing units can be physically located within the same device, or processor(s)104can represent processing functionality of a plurality of devices or software functionality operating alone, or in concert. The processor(s)104can be configured to execute machine-readable instructions106or machine learning modules via software, hardware, firmware, some combination of software, hardware, and/or firmware, and/or other mechanisms for configuring processing capabilities on processor(s)104. As used herein, the term “machine-readable instructions” can refer to any component or set of components that perform the functionality attributed to the machine-readable instructions component106. This can include one or more physical processors104during execution of processor-readable instructions, the processor-readable instructions, circuitry, hardware, storage media, or any other components. The server(s)102can be configured with machine-readable instructions having one or more functional modules. The machine-readable instructions106can be implemented on one or more servers102, having one or more processors104, with access to memory134. The machine-readable instructions106can be a single networked node, or a machine cluster, which can include a distributed architecture of a plurality of networked nodes. The machine-readable instructions106can include control logic for implementing various functionality, as described in more detail below.
The machine-readable instructions106can include certain functionality associated with the system100. Additionally, the machine-readable instructions106can include a smart contract or multi-signature contract that can process, read, and write data to the database, distributed ledger, or blockchain. FIG.2illustrates a schematic view of an automated workflow system200, in accordance with one or more embodiments of the present disclosure. The automated workflow system200can include a file retrieval management system202, a watchdog system204, and an automated production system206. Although certain embodiments may be directed to identifying root causes of PTC brake events, the automated workflow system200can be used to automate workflow for identifying root causes for various types of events and systems, such as policy enforcement events, repeat offender notifications, and authority notification systems. In one embodiment, the file retrieval management system202can include the file collection module108, the message identification module110, the log collection module112, and the information parsing module114. The file collection module108, the message identification module110, the log collection module112, and the information parsing module114can implement one or more algorithms to facilitate retrieval of files and logs of railroad events for various systems, including status, selection, and authentication algorithms. The algorithms and their associated thresholds and/or signatures can be programmable to suit a particular railroad event, application, function, facility, or other requirement. The file retrieval management system202can be configured to retrieve and modify files and logs related to one or more enforcement events or other suitable activity, to and from the client or server. In another embodiment, the file retrieval management system202can generate one or more elements for display on the user device.
The elements can provide additional information related to the status of railroad event management. For example, notifications can be generated by the file retrieval management system202and displayed on the client to indicate file collections, log parsing, automated workflow initialization, railroad event handling, errors, or other suitable information. Additionally, system symbols can be displayed on the client to indicate task, inspection, or analysis status. The file collection module108can receive incoming messages regarding railroad event notifications. For example, the railroad event notification can include enforcement events, such as a PTC brake event. In one embodiment, the file collection module108can receive the incoming messages from a file retrieval manager (FRM). For example, the incoming messages can correspond to one or more of the enforcement events. In another embodiment, the file collection module108can transmit outgoing messages to the FRM. For example, the outgoing messages can request system component logs corresponding to one or more of the enforcement events. In another embodiment, the file collection module108can receive a notification indicating the system component logs are available. In another embodiment, the file collection module108can generate an authentication token for a particular user, session, or request. In another embodiment, the file collection module108can access the network140without user credentials. In another embodiment, the file collection module108can generate an authentication token using user data stored in the client. For example, a user can access a client and/or the automated workflow system200by providing valid credentials via a login page or screen, including a username and password, biometrics, multi-factor authentication, or other suitable credential. Such credentials, along with a user's information such as name, username, employee number, etc., can be stored in the client or server.
In another embodiment, the file collection module108can process at least a portion of the credentials and/or user information to generate an authentication token. For example, the authentication token can be generated as a JSON Web Token (JWT), via dongles or key fobs that can periodically generate a new authentication token in accordance with a known algorithm, using an authenticator app on the client or sent on demand via SMS, by hashing at least a portion of the login credentials, or other suitable methodology. In another embodiment, the authentication token can allow for single sign-on authentication to the server and/or memory from the client. In another embodiment, the file collection module108can operate without a user interface. In another example, the file collection module108can provide a user interface for a user to access the file collection module108. The automated workflow system200can utilize the file collection module108to provide a user interface for receiving relevant data. The message identification module110can classify the incoming messages and the notification. In one embodiment, the message identification module110can receive information about an enforcement event from the incoming messages and the notification. For example, the message identification module110can classify the incoming messages as an enforcement message or a status request and classify the notification as a log status. In another embodiment, the message identification module110can identify the system component logs of an enforcement event. For example, the message identification module110can identify the characteristics of the enforcement event. In another example, the characteristics can include an event time, a primary CPU monitoring the train during the enforcement, an appropriate component software version, a high-level system health scan, among other relevant characteristics. 
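The token-generation option described above — hashing at least a portion of the login credentials — could look like the following minimal sketch. The key name, nonce scheme, and token layout are assumptions for illustration; a production system would more likely issue a signed JWT with an expiry claim, as the disclosure also suggests:

```python
import hashlib
import hmac
import secrets

def generate_auth_token(username: str, password: str, server_key: bytes) -> str:
    """Derive a session token by hashing part of the credentials with a
    server-held key. The nonce makes each session's token distinct."""
    nonce = secrets.token_hex(8)  # per-session randomness
    material = f"{username}:{password}:{nonce}".encode()
    digest = hmac.new(server_key, material, hashlib.sha256).hexdigest()
    return f"{nonce}.{digest}"

def verify_auth_token(token: str, username: str, password: str,
                      server_key: bytes) -> bool:
    """Recompute the digest from the presented credentials and compare
    in constant time."""
    nonce, digest = token.split(".")
    material = f"{username}:{password}:{nonce}".encode()
    expected = hmac.new(server_key, material, hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, expected)
```

A token of this shape could back the single sign-on flow mentioned above, with the server storing only the key, not the token itself.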
In another example, the message identification module110can verify the incoming message to identify whether the enforcement event actually occurred based on at least one of the characteristics. The log collection module112can receive the system component logs. In one embodiment, the system component logs are sent from the FRM. In another embodiment, the log collection module112can download the system component logs from an external memory. The system component logs can correspond to a default set of onboard logs from at least one CPU on board the train. The system component logs can indicate characteristics of the enforcement of the brake event. In another embodiment, the log collection module112can store the system component logs for future access. The information parsing module114can parse the incoming messages and the notification for relevant information. For example, the relevant information can include information about the enforcement event. In another example, the incoming messages can include information such as a user ID, the employee information on the train, the employee requesting the information, a location of the train, among other relevant information. In one embodiment, the information parsing module114can establish a connection to a database. For example, the information parsing module114can receive the incoming message from the database and transmit the relevant information to the database. In one embodiment, the watchdog system204can include the log download module116, the automation initializing module118, and the automation workflow module120. The log download module116, automation initializing module118, and automation workflow module120can implement one or more algorithms to facilitate status monitoring of the system component logs, including file fetching, event monitor, and service enable algorithms.
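The classification and parsing behavior attributed above to the message identification and information parsing modules can be sketched as two small functions. The message field names are invented for the example; the disclosure does not specify a wire format:

```python
def classify_message(message: dict) -> str:
    """Classify an incoming message as an enforcement message, a status
    request, or a log-status notification, mirroring the message
    identification module's described behavior."""
    kind = message.get("type", "")
    if kind == "enforcement":
        return "enforcement_message"
    if kind == "status_request":
        return "status_request"
    if kind == "log_status":
        return "log_status"
    return "unknown"

def parse_relevant_info(message: dict) -> dict:
    """Pull out the kinds of fields the information parsing module is
    described as extracting (user ID, train location, event time)."""
    return {
        "user_id": message.get("user_id"),
        "location": message.get("location"),
        "event_time": message.get("event_time"),
    }
```

In the described system the parsed fields would then be written back to the database rather than returned to a caller.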
The algorithms and their associated thresholds and/or signatures can be programmable to suit a particular railroad event monitoring system, application, function, facility, or other requirement. The watchdog system204can be configured to transmit and receive messages related to status monitoring or other suitable activity, to and from the client or server. In another embodiment, the watchdog system204can generate one or more elements for display on the client. The elements can provide additional information related to workflow automation. For example, a notification can be generated by the watchdog system204and displayed on the client to indicate a status update, system component log status, start of an automation service, or other suitable information. Additionally, system symbols can be displayed on the client to indicate management status. In one embodiment, the log download module116can query an internal service queue. For example, the internal service queue can be located at an IP address. In another example, the log download module116can query the internal service queue at a specified frequency. In another example, the specified frequency can include a user-set value, or a scheduled frequency. In another example, the scheduled frequency can include an execution every six minutes. In another embodiment, the log download module116can determine whether the internal service queue includes a new enforcement message. For example, the control logic500can prompt the internal service queue to notify the log download module116of any updated enforcement messages. If the internal service queue lacks a new enforcement message, the control logic500proceeds to do nothing. If the internal service queue includes a new enforcement message, the log download module116proceeds to generate a record. For example, the record can include a collection of enforcement events based on enforcement messages from the internal service queue. 
In another example, the log download module116can store the record to a database. In another embodiment, the log download module116can parse a file retrieval system for system component logs corresponding to the new enforcement event. For example, the file retrieval system can include the FRM. In another example, the system component logs can include downloadable files. In another embodiment, the log download module116can determine whether the system component logs are available. For example, the log download module116can determine whether the system component logs are available in the file retrieval system. In another example, the system component logs might not be generated at a time the log download module116checks, as the system component logs can lag. If the system component logs are unavailable, then the log download module116repeats the checking process after a period of time. For example, the period of time can be 49 hours. If the system component logs are available, then the log download module116proceeds to receive the system component logs. For example, the log download module116can receive the system component logs from the file retrieval system. In another example, the log download module116can receive the system component logs in a downloadable manner. In another example, the log download module116can receive the system component logs in a virtual manner, such as a cloud environment. In another embodiment, the log download module116can update a status of the system component logs in the record. For example, the log download module116can update the record to indicate whether the system component logs were available or not. In an example, the log download module116can determine whether the system component logs include information for a plurality of CPUs on the train. In one embodiment, the automation initializing module118can identify whether an automation service is executing. 
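One iteration of the log download module's poll-record-fetch cycle described above might look like the sketch below. The scheduling (the six-minute query frequency and the long retry window) is deliberately left to the caller by injecting the queue, availability check, and download as callables; all names are illustrative:

```python
from typing import Callable, Optional

def poll_once(queue_fetch: Callable[[], Optional[dict]],
              logs_available: Callable[[str], bool],
              download: Callable[[str], dict],
              records: list) -> Optional[dict]:
    """One pass of the watchdog loop: query the internal service queue,
    record any new enforcement event, and fetch the system component
    logs if they have been generated (they can lag the event)."""
    message = queue_fetch()
    if message is None:
        return None  # no new enforcement message: do nothing
    record = {"event_id": message["event_id"], "logs": "pending"}
    records.append(record)  # the record would be stored to a database
    if logs_available(message["event_id"]):
        record["logs"] = download(message["event_id"])
    return record
```

A scheduler would call `poll_once` on the described cadence and re-check any record still marked `"pending"` after the retry period.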
For example, the automation initializing module118can determine whether the automation service is executing on a designated server. In another example, the designated server can include a designated IP address. In another example, the automation initializing module118can determine whether the automation service is executing based on network traffic on the designated IP address, network traffic on the designated server, or another method. If the automation service is not currently executing, the automation initializing module118can proceed to execute the automation service. For example, the automation initializing module118can execute the automation service by executing initialization instructions for the automation service. If the automation service is currently executing, the automation initializing module118can proceed to execute an automation process. For example, the automation process can include algorithms, applications, and functions from the automated production system206. In another example, the algorithms, applications, and functions can include one or more of the extraction module122, the analysis module124, the event watch module126, and the automation production module128. In another example, the automation initializing module118can execute the automation process by executing initialization instructions for the automation process. The automation workflow module120can receive processed events. For example, the processed events can include results from the automated process. In another example, the processed events can include a root cause of the enforcement event. In another example, the automation workflow module120can receive the processed events from the designated server. In another example, the automation workflow module120can store the processed events to an automation production server. In one embodiment, the automation workflow module120can generate a notification when the automation process is complete.
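The start-or-run decision the automation initializing module makes can be reduced to a small dispatcher. In the disclosure the "is it running?" check is based on network traffic at a designated IP address; here it is abstracted into an injected predicate, and all names are assumptions:

```python
from typing import Callable

def ensure_automation(is_running: Callable[[], bool],
                      start_service: Callable[[], None],
                      run_process: Callable[[], None]) -> str:
    """If the automation service is not executing, start it (e.g. by
    issuing its initialization instructions); if it already is, kick
    off the automation process on the designated server."""
    if not is_running():
        start_service()
        return "service_started"
    run_process()
    return "process_executed"
```

On the next watchdog pass after `"service_started"`, the predicate would report the service as up and the automation process would run.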
For example, the notification can correspond to a result of the processed events. In another example, the automation workflow module120generates the notification for a group of stakeholders of the processed events. In another example, the automation workflow module120transmits the notification to the stakeholders via email. In an embodiment, the automation workflow module120can transmit event files to a completed directory. For example, the event files can include information regarding the processed events. In another example, the completed directory can include a file server specific to the processed events. In one embodiment, the automated production system206can include the extraction module122, the analysis module124, the event watch module126, and the automation production module128. The extraction module122, analysis module124, event watch module126, and automation production module128can implement one or more algorithms to facilitate automated workflow and identify a root cause of a train event, including an extraction, analysis monitor, and event watch algorithm. The algorithms and their associated thresholds and/or signatures can be programmable to suit a particular railroad event monitoring system, application, function, facility, or other requirement. The automated production system206can be configured to transmit and receive messages related to workflow automation or other suitable activity, to and from the client or server. In another embodiment, the automated production system206can generate one or more elements for display on the user device. The elements can provide additional information related to root cause analysis. For example, a notification can be generated by the automated production system206and displayed on the client to indicate a root cause is identified, system component logs extracted, event monitoring, or other suitable information. 
Additionally, system symbols can be displayed on the client to indicate an event status, analysis completion, or root cause identified. In one embodiment, the extraction module122can receive data from a file server. For example, the data can correspond to system component logs including characteristics of an enforcement event. In another example, the file server can include system component logs from a plurality of enforcement events. In another embodiment, the extraction module122can collect data points surrounding a time of the enforcement event to generate a time window. For example, the data points can correspond with the system component logs. In another example, the time window can include time measurements before the enforcement event, after the enforcement event, or both before and after the enforcement event. In another example, the time window can include seconds, minutes, or hours relating to the enforcement event. In another example, the extraction module122can compare the time of the enforcement with the data points to verify the enforcement event actually occurred. In another embodiment, the extraction module122can label the data with unique identifiers. For example, the extraction module122can label the system component logs based on the characteristics. In another embodiment, the extraction module122can determine whether the data is structured. For example, the data is structured when information in the data is classified according to a predetermined manner. In another example, the data is unstructured when the information in the data is classified differently than the predetermined manner or unclassified entirely. In another embodiment, when the data is unstructured, the extraction module122can identify a pattern to the unstructured data to transform the data into structured data and extract the data. For example, the extraction module122can transform the unstructured data into structured data using a data transformation software tool. 
In another embodiment, when the data is structured, the extraction module122can extract the data. In another example, the extraction module122can extract data using regular expressions. In another example, the regular expressions can include a sequence of characters that define a search pattern. In another example, the extraction module122can extract data using data manipulation techniques. In another example, the data manipulation techniques can include using commercial software tools such as HADOOP or developing custom data tools in various programming languages. In another embodiment, the extraction module122transmits the extracted data to the analysis module124. In one embodiment, the analysis module124can analyze the extracted data to generate an analysis result. For example, the analysis result can include whether the analysis module124determined a root cause. In another example, the analysis module124can include a plurality of decision steps to determine whether the root cause is established. In another embodiment, the analysis module124can analyze the extracted data using a defect detection analysis model. For example, the defect detection analysis model can identify when a defect occurs during the analysis. In another example, when the defect occurs, the analysis module124can classify the defect as the root cause. In another example, when the defect does not occur, the analysis module124can analyze the extracted data using a historical analysis model. In another example, the historical analysis model is based on historical data such as previous engineer interactions, system component responses, and situational behavior. In another example, the analysis module124can analyze the extracted data using a decision tree model, a classification model, or a clustering model. In an embodiment, the analysis module124, the event watch module126, and the automation production module128can form an analysis model.
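The regex-based extraction over a time window described above can be sketched as follows. The log-line format is hypothetical (the disclosure does not specify one); the point is the combination of a search pattern for structured lines and a before/after window around the enforcement event:

```python
import re
from datetime import datetime, timedelta

# Hypothetical onboard log-line layout: "<ISO timestamp> <level> <message>"
LOG_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}) (?P<level>\w+) (?P<msg>.*)"
)

def extract_window(lines, event_time: datetime,
                   before_s: int = 60, after_s: int = 60):
    """Keep only structured records whose timestamp falls inside the
    window surrounding the enforcement event. Lines that do not match
    the pattern are unstructured and would be routed to the
    pattern-finding / transformation path instead."""
    lo = event_time - timedelta(seconds=before_s)
    hi = event_time + timedelta(seconds=after_s)
    out = []
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m.group("ts"))
        if lo <= ts <= hi:
            out.append(m.groupdict())
    return out
```

The extracted dictionaries stand in for the labeled, structured data handed on to the analysis module.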
In an embodiment, one or more analysis thresholds can determine whether the automated production system206performs a single analysis model or multiple analysis models. This adaptive analysis thresholding can alter one or more characteristics of the database. Additionally, the analysis thresholds are adaptive in that the thresholding can change based upon the historical data, data type, timestamp, or other relevant data. For example, the system can compare a first accuracy of an output from a first analysis model to a preset analysis threshold. The preset analysis threshold can include a user-defined accuracy value. So, as the first analysis model outputs a first root cause of a penalty event, the system can verify whether the first root cause is a cause of the penalty event. In an example, when the first accuracy of the first root cause is below the preset analysis threshold, the system continues with a second analysis model. By way of further example, when the second analysis model executes, the second analysis model compares historical data with the penalty event. The historical data can include data about a particular locomotive, a fleet of locomotives, user-defined inputs, or some other data associated with penalty events. If the system identifies a match between the historical data and the penalty event, the system outputs a second root cause of the penalty event based on the historical data. The second root cause corresponds with a second accuracy. The system can compare the first accuracy of the first root cause with the second root cause to determine whether the two root causes are the same or different. In another exemplary embodiment, the system can provide analysis thresholding when comparing the first accuracy and the second accuracy. For example, the system can compare the two accuracies based on subsequent measures of accuracy.
The system can initially compare the two root causes for similarities, and when the two root causes are the same, the system can conclude the similar root cause is a cause of the penalty event. Upon initial measurement, when one of the accuracies is above an analysis threshold, the accuracy above the analysis threshold can correspond with a root cause of the penalty event. In an event when both accuracies are above the analysis threshold, the system can compare the two root causes for similarities. If the two root causes are the same, then the system determines the similar root cause is the cause of the penalty event. If the two root causes are different, then the system can execute another round of comparison. The second round of comparison can include various forms of tie breaking such as higher accuracy measurement, user-intervention, rerunning the analysis, or some other form of determining the root cause. In one embodiment, the event watch module126can receive the analysis result from the analysis module124. In an embodiment, when the analysis result does not include the root cause, the event watch module126can generate a high-level classification for the enforcement event and assign a unique ID to the analysis result. For example, the high-level classification can include a message type, a message description, and a banner. In another example, the message type can be a warning to the train prior to an enforcement event. In another example, the message description can include a description of the warning including information such as the enforcement event. In another example, the banner can be the last banner shown to the engineer prior to the enforcement event. In another embodiment, when the analysis result includes the root cause, the event watch module126can assign an alert ID to the analysis result. In one embodiment, the automation production module128can transmit a detailed synopsis to a user. 
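The two-model comparison and tie-breaking just described can be condensed into one decision function. The accuracy values and threshold are illustrative, and of the tie-breaking options the disclosure lists (higher accuracy, user intervention, rerunning the analysis), this sketch implements only the higher-accuracy rule:

```python
from typing import Optional

def resolve_root_cause(cause_a: str, acc_a: float,
                       cause_b: str, acc_b: float,
                       threshold: float) -> Optional[str]:
    """Adaptive-threshold resolution between two analysis models:
    - only one accuracy above the threshold: that model's cause wins
    - both above and the causes agree: the shared cause wins
    - both above but the causes differ: tie-break on higher accuracy
    - neither above: no root cause established (the event would fall
      through to the high-level classification path)"""
    a_ok, b_ok = acc_a >= threshold, acc_b >= threshold
    if a_ok and b_ok:
        if cause_a == cause_b:
            return cause_a
        return cause_a if acc_a >= acc_b else cause_b
    if a_ok:
        return cause_a
    if b_ok:
        return cause_b
    return None
```

Returning `None` corresponds to the no-root-cause branch, where the event watch module assigns a unique ID and a high-level classification instead of an alert ID.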
For example, the detailed synopsis can include a plurality of events such as a time that a train was active, a speed of the train during the enforcement event, location details of the train, warnings to the train, configuration details of the train, PTC component information, a type of the enforcement, and a type of braking event. In another example, the detailed synopsis can correspond to the unique ID or the alert ID. In another example, the automation production module128can transmit the detailed synopsis through an AES. In another example, the AES can include a listserv of applicable users to be notified. In another embodiment, the automation production module128can generate an output and distribute a notification to users based on a user list. FIG.3illustrates a flowchart exemplifying analysis model control logic300, in accordance with one or more exemplary embodiments of the present disclosure. The analysis model control logic300can be implemented as an algorithm on a server102, a machine learning module, a client, a database, or other suitable system. Additionally, the analysis model control logic300can implement or incorporate one or more features of the automated production system206, including extraction module122, analysis module124, event watch module126, and automation production module128. The analysis model control logic300can be achieved with software, hardware, an API, a network connection, a network transfer protocol, HTML, DHTML, JavaScript, Dojo, Ruby, Rails, other suitable applications, or a suitable combination thereof. The analysis model control logic300can leverage the ability of a computer platform to spawn multiple processes and threads by processing data simultaneously. The speed and efficiency of the analysis model control logic300can be greatly improved by instantiating more than one process to implement an analysis model.
However, one skilled in the art of programming will appreciate that use of a single processing thread may also be utilized and is within the scope of the present disclosure. In one embodiment, commands or data can be received via user input generated on a client or server, such as a screen tap, swipe, mouse click, key press, voice command, or other suitable mechanism. In another embodiment, the inspection commands or data can include inspection data having one or more fields, parameters, characteristics, or metadata, related to an inspection. The analysis model control logic300then proceeds to step302. At step302, in one embodiment, the control logic300can receive extracted data. For example, the extracted data can correspond to system component logs including characteristics of an enforcement event. In another embodiment, the extracted data can include data points surrounding a time of the enforcement event to generate a time window. For example, the data points can correspond with the system component logs. In another example, the time window can include time measurements before the enforcement event, after the enforcement event, or both before and after the enforcement event. In another example, the time window can include seconds, minutes, or hours relating to the enforcement event. In another embodiment, the extracted data can include labels with unique identifiers. For example, the extracted data can include labels corresponding to the system component logs based on the characteristics. In another embodiment, the extracted data can include structured data. For example, the data is structured when information in the data is classified according to a predetermined manner. The control logic300proceeds to step304. At step304, in one embodiment, the control logic300can execute a first analysis model. For example, the control logic300can execute the first analysis model to obtain a first result. 
In another example, the first result can include whether the extracted data includes a root cause. In another example, the first analysis model can include a plurality of decision steps to determine whether the extracted data includes the root cause. For example, the decision steps can include a plurality of decisions corresponding to a particular environment. In an example, the particular environment can include a railroad event management system incorporating a railroad safety policy. In another example, the railroad safety policy can include the plurality of steps such as determining whether a brake signal stops, a quality of the signal, a distance of a locomotive to a target value, a speed of the locomotive, a lag time of the signal from the locomotive, a network quality indicator, a location of the locomotive, among other relevant safety factors. In another example, the first analysis model can analyze the extracted data using a decision tree model, a classification model, a clustering model, a machine learning model, or another type of data analysis model. In an example, the first analysis model can execute natural language processing (NLP) to analyze the extracted data to identify key words to indicate the root cause. In an example, when the first analysis model is the machine learning model, the first analysis model can train the machine learning model on historic data representing safety conditions described by the railroad safety policy. In another example, the machine learning model can test accuracy based on real-time data corresponding to at least one locomotive. In another example, the machine learning model can include a deep learning model, a recurrent neural network, artificial neural network, or another type of machine learning model not limited to the foregoing. In another example, the machine learning model can execute the first analysis on a cloud-based environment. 
For example, the machine learning model can execute as a software-as-a-service (SaaS), platform-as-a-service (PaaS), an infrastructure-as-a-service (IaaS), or another relevant software-based environment. The control logic300proceeds to step306. At step306, in one embodiment, the control logic300can identify defects in the extracted data. For example, the first analysis model can analyze the extracted data using a decision tree model identifying the extracted data that fails at least one decision tree node. The control logic300proceeds to step308. At step308, in one embodiment, the control logic300can assign a unique ID to the results of the analysis model. For example, the unique ID can include a globally unique ID (GUID). The control logic300proceeds to step310. At step310, in one embodiment, the control logic300can determine whether the defect exists in the first result. In another example, the defect detection analysis model can identify when a defect occurs during execution of the first analysis model. In another example, when the defect occurs, the first result can classify the defect as the root cause. If the defect exists in the results, the control logic300proceeds to step318. If the defect does not exist in the results, the control logic300proceeds to step312. At step312, in one embodiment, the control logic300can execute a second analysis model on the extracted data. For example, when the defect does not occur, the control logic300can analyze the extracted data using a historical analysis model. In another example, the historical analysis model is based on historical data such as previous engineer interactions, system component responses, and situational behavior. In another example, the control logic300can execute the second analysis model on the extracted data to generate a second result. For example, the control logic300can compare the second result to the historic data to determine an accuracy of the second result. 
The control logic300proceeds to step314. At step314, in one embodiment, the control logic300can assign another unique ID to the results of the second analysis model. For example, the unique ID can include another GUID. The control logic300proceeds to step316. At step316, in one embodiment, the control logic300can compare the results from the first analysis model and the second analysis model to determine the root cause. For example, the control logic300can compare the first result and the second result to identify the root cause based on a user input. In an example, the user input can include a decision between the first result and the second result. The control logic300proceeds to step318. At step318, in one embodiment, the control logic300can identify an accuracy of the results. In another example, the control logic300can compare the first result and the second result to determine a most accurate cause as the root cause. For example, the control logic300can assign weights to the first analysis model and the second analysis model in response to known accuracies of the first analysis model and the second analysis model. The control logic300proceeds to step320. At step320, in one embodiment, the control logic300can save the first result, the first unique ID, the second result, and the second unique ID. For example, the control logic300can save the results and the unique IDs in a database according to the enforcement event corresponding to the extracted data. FIG.4Aillustrates a flowchart exemplifying analysis decision tree control logic400, in accordance with one or more exemplary embodiments of the present disclosure. The analysis decision tree control logic400can be implemented as an algorithm on a server102, a machine learning module, a client, a database, or other suitable system.
Additionally, the analysis decision tree control logic400can implement or incorporate one or more features of the automated production system206, including extraction module122, analysis module124, event watch module126, and automation production module128. The analysis decision tree control logic400can be achieved with software, hardware, an API, a network connection, a network transfer protocol, HTML, DHTML, JavaScript, Dojo, Ruby, Rails, other suitable applications, or a suitable combination thereof. The analysis decision tree control logic400can leverage the ability of a computer platform to spawn multiple processes and threads by processing data simultaneously. The speed and efficiency of the analysis decision tree control logic400can be greatly improved by instantiating more than one process to implement an analysis decision tree control logic. However, one skilled in the art of programming will appreciate that use of a single processing thread may also be utilized and is within the scope of the present disclosure. In one embodiment, commands or data can be received via user input generated on a client or server, such as a screen tap, swipe, mouse click, key press, voice command, or other suitable mechanism. In another embodiment, the inspection commands or data can include inspection data having one or more fields, parameters, characteristics, or metadata, related to an inspection. The analysis decision tree control logic400then proceeds to step402. At step402, in one embodiment, the control logic400can determine whether a signal required a locomotive to stop based on extracted data. For example, the signal can correspond to communication between the locomotive and at least one network node in response to an enforcement event. In an example, the extracted data can include information and parameters corresponding to the signal and the enforcement event, such as a PTC brake event.
In an example, the extracted data can correspond to system log components of the enforcement event. In another example, the control logic400can identify the characteristics of the enforcement event. For example, the characteristics can include an event time, a primary CPU monitoring the train during the enforcement, an appropriate component software version, a high-level system health scan, among other relevant characteristics. In another example, the control logic400can receive the signal from a file retrieval manager (FRM). If the signal requires the locomotive to stop, the control logic400proceeds to step404. If the signal does not require the locomotive to stop, the control logic400proceeds to step406. At step404, in one embodiment, the control logic400can execute a decision tree branch corresponding to the signal to stop the locomotive, as described in more detail inFIG.4B. At step406, in one embodiment, the control logic400can determine whether the signal was unknown based on the extracted data. For example, the extracted data can include a list of historic signal values to determine how the signal changed over time. In an example, the control logic400can compare the signal to a plurality of known signals of the historic statuses to identify whether the signal matches any of the known signals. In another example, the control logic400can determine a status of the signal at any time before the enforcement event. If the signal is unknown, the control logic400proceeds to step408. If the signal is known, the control logic400proceeds to step430. At step408, in one embodiment, the control logic400can determine whether the signal changed from a known value to an unknown value based on the extracted data. For example, the extracted data can include a list of historic signal values to determine whether the signal changed over time. 
In an example, the control logic400can compare the signal to the list of historic signal values to identify whether the signal changed from the known value to the unknown value. In another example, the control logic400can determine a status of the signal at any time before the enforcement event. If the signal changed from the known value to the unknown value, the control logic400proceeds to step410. If the signal did not change from the known value to the unknown value, the control logic400does not detect a defect and the decision tree terminates. At step410, in one embodiment, the control logic400can determine whether a state of the signal changed based on the extracted data. For example, the extracted data can include information corresponding to a list of historic states to determine whether the state of the signal changed. In an example, the control logic400can compare the list of historic states to the state of the signal to determine whether the state changed. If the state of the signal changed, the control logic400proceeds to step412. If the state of the signal did not change, the control logic400does not detect a defect and the decision tree terminates. At step412, in one embodiment, the control logic400can determine whether a distance between the locomotive and a target location is less than a predetermined distance based on the extracted data. For example, the extracted data can include distance information about a location of the locomotive. In an example, the predetermined distance can include a user-defined distance, a calculated distance based on a safety protocol, or another measurement. In another example, the control logic400can compare the distance between the locomotive and the target location to determine whether the distance between the locomotive and the target location was less than the predetermined distance at a time before or during the enforcement event. 
If the distance between the locomotive and the target location was less than the predetermined distance, the control logic400proceeds to step416. If the distance between the locomotive and the target location was greater than the predetermined distance, the control logic400proceeds to step414. At step414, in one embodiment, the control logic400can determine whether a speed of the locomotive is greater than a predetermined speed based on the extracted data. For example, the extracted data can include speed information about the locomotive. In another example, the predetermined speed can include a user-defined value, a calculated speed based on a safety protocol, or another measurement. In another example, the control logic400can compare the speed of the locomotive with the predetermined speed to determine whether the speed of the locomotive was greater than the predetermined speed at a time before or during the enforcement event. If the speed of the locomotive was greater than the predetermined speed, the control logic400proceeds to step426. If the speed of the locomotive was less than the predetermined speed, the control logic400proceeds to step428. At step416, in one embodiment, the control logic400can determine whether a state change of the signal was delayed based on the extracted data. For example, the extracted data can include information corresponding to a list of historic states to determine whether the state of the signal changed. In an example, the control logic400can compare the list of historic states to the state of the signal to determine whether the state changed. In another example, the state change of the signal can include a signal change from a known value to an unknown value, a known value to another known value, an unknown value to a known value, or any other combination of values to indicate the state change.
In an example, the state change can be delayed based on poor communication quality between the locomotive and the at least one network node, hardware or software malfunction of the locomotive or the at least one network node, or another delay reason. If the state change of the signal was delayed, the control logic400proceeds to step420. If the state change of the signal was not delayed, the control logic400proceeds to step418. At step418, in one embodiment, the control logic400can determine whether an application gateway was accessible based on the extracted data. For example, the extracted data can include network information between the locomotive and at least one network node corresponding to the application gateway. In another example, the extracted data can include a status of the application gateway indicating whether the application gateway was accessible during the enforcement event. In an example, the application gateway can include a particular IP address identifying the application gateway. In another example, the control logic400can identify accessibility of the application gateway at a time before or during the enforcement event to determine whether the application gateway was accessible. If the application gateway was accessible, the control logic400proceeds to step424. If the application gateway was not accessible, the control logic400proceeds to step422. At step420, in one embodiment, the control logic400can classify, based on the extracted data, the defect to be that the locomotive selected a train track too close to a switch. For example, the extracted data can include travel information corresponding to the locomotive. In an example, the locomotive can select a particular route of train tracks to travel. The train tracks include areas with at least one switch. The at least one switch is an installation enabling trains to be guided from one track to another, such as at a railway junction.
In an example, the control logic400can compare a predetermined path with the particular route of the train tracks to determine whether the train track selection was too close to the switch. When the locomotive selected the train track that is too close to the switch, the enforcement event is activated. At step422, in one embodiment, the control logic400can classify, based on the extracted data, the defect to be a communication issue between the locomotive and the at least one network node. For example, the extracted data can include network information between the locomotive and the at least one network node. In an example, the locomotive can communicate with the at least one network node while traveling. For example, the locomotive can communicate using any variety of communication methods, such as a wireless communication system following standard communication protocols (e.g., TCP/IP, MAC, HTTP, etc.). In another example, the control logic400can determine the communication issue between the locomotive and the at least one network node. When the communication issue exists, the enforcement event is activated. At step424, in one embodiment, the control logic400can classify, based on the extracted data, the defect to be an application gateway issue causing communication delays between the locomotive and the at least one network node. For example, the extracted data can include network information between the locomotive and the at least one network node. In an example, the locomotive can communicate with the at least one network node while traveling. The locomotive can communicate using any variety of communication methods, such as a wireless communication system following standard communication protocols (e.g., TCP/IP, MAC, HTTP, etc.). In another example, the control logic400can determine the application gateway issue between the locomotive and the at least one network node. 
When the signal cannot establish a connection with the application gateway and the application gateway is nonfunctional, the enforcement event is activated. At step426, in one embodiment, the control logic400can classify, based on the extracted data, the defect to be that a speed of the locomotive was greater than a predetermined speed. For example, the extracted data can include speed information about the locomotive. In another example, the predetermined speed can include a user-defined value, a calculated speed based on a safety protocol, or another measurement. In another example, the control logic400can compare the speed of the locomotive with the predetermined speed to determine whether the speed of the locomotive was greater than the predetermined speed at a time before or during the enforcement event. When the speed of the locomotive was greater than the predetermined speed, the enforcement event is activated. At step428, in one embodiment, the control logic400can classify, based on the extracted data, the defect to be that the locomotive selected an incorrect track. For example, the extracted data can include a proposed route for the locomotive. In an example, the locomotive can select tracks along a travel route. For example, the locomotive can select one track over another for convenience of traveling, rapid travel time, or some other reason. In an example, the locomotive can select the incorrect track along the travel route. In an example, the control logic400can compare the proposed route with the travel route to determine whether the locomotive selected the incorrect track. When the locomotive selects the incorrect track, the enforcement event is activated. At step430, in one embodiment, the control logic400can determine whether the locomotive was approaching a workzone. For example, the extracted data can include location information identifying the workzone.
In an example, when the locomotive travels near the workzone, the locomotive can travel based on safety conditions. For example, the locomotive can travel at a lower speed, on a path farthest away from the workzone, or another method to handle approaching the workzone. In an example, the control logic400can compare locations of the locomotive with the workzone to determine whether the locomotive was approaching the workzone. If the locomotive was approaching the workzone, the control logic400determines that no defect is detected. If the locomotive was not approaching the workzone, the control logic400proceeds to step432. At step432, in one embodiment, the control logic400can determine whether the locomotive traveled using an unknown switch based on the extracted data. For example, the extracted data can include track information for the locomotive indicating a plurality of switches on a route. In an example, the locomotive can select a particular route of train tracks to travel. For example, the train tracks include areas including at least one switch guiding the locomotive from one track to another, such as at a railway junction. In an example, the control logic400can compare the plurality of switches with the unknown switch to determine whether the locomotive traveled on the unknown switch. If the locomotive selects an unknown switch, the control logic400determines that no defect is detected. If the locomotive selects a known switch, the control logic400proceeds to step434. At step434, in one embodiment, the control logic400can determine whether a switch on a route is not aligned for safe movement based on the extracted data. For example, the extracted data can include track information corresponding to the locomotive indicating a plurality of switches on a route. In an example, the locomotive can select a particular route of train tracks to travel. 
For example, the train tracks include areas including at least one switch guiding the locomotive from one track to another, such as at a railway junction. In an example, the control logic400can compare the switch on the route with a normal switch to identify whether the switch is not aligned for safe movement. If the switch is not aligned for safe movement, the control logic400determines that no defect is detected. If the switch is aligned for safe movement, the control logic400determines that a defect is detected. FIG.4Billustrates a flowchart exemplifying analysis decision tree control logic450, in accordance with one or more exemplary embodiments of the present disclosure. The analysis decision tree control logic450can be implemented as an algorithm on a server102, a machine learning module, a client, a database, or other suitable system. Additionally, the analysis decision tree control logic450can implement or incorporate one or more features of the automated production system206, including extraction module122, analysis module124, event watch module126, and automation production module128. The analysis decision tree control logic450can be achieved with software, hardware, an API, a network connection, a network transfer protocol, HTML, DHTML, JavaScript, Dojo, Ruby, Rails, other suitable applications, or a suitable combination thereof. The analysis decision tree control logic450can leverage the ability of a computer platform to spawn multiple processes and threads by processing data simultaneously. The speed and efficiency of the analysis decision tree control logic450can be greatly improved by instantiating more than one process to implement an analysis decision tree. However, one skilled in the art of programming will appreciate that use of a single processing thread may also be utilized and is within the scope of the present disclosure.
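Before turning to FIG.4B, the unknown-signal branch of decision tree control logic400(steps406through428) described above can be condensed into the following hypothetical sketch. The dictionary keys and returned classification strings are illustrative assumptions; the disclosure does not name these fields.

```python
# Condensed sketch of steps 406-428 of decision tree control logic 400.

def classify_unknown_signal_defect(e):
    """Return a defect classification, or None when no defect is detected."""
    # Steps 406-410: the signal must have gone from a known value to an
    # unknown value, with a recorded state change, for the branch to apply.
    if not (e["signal_unknown"] and e["changed_known_to_unknown"]
            and e["state_changed"]):
        return None
    # Step 412: was the locomotive closer than the predetermined distance?
    if e["distance_to_target"] < e["predetermined_distance"]:
        if e["state_change_delayed"]:                     # step 416
            return "track selected too close to switch"   # step 420
        if e["gateway_accessible"]:                       # step 418
            return "application gateway delay"            # step 424
        return "communication issue"                      # step 422
    # Step 414: otherwise compare speed against the predetermined speed.
    if e["speed"] > e["predetermined_speed"]:
        return "overspeed"                                # step 426
    return "incorrect track selected"                     # step 428
```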
In one exemplary embodiment, commands or data can be received via user input generated on a client or server, such as a screen tap, swipe, mouse click, key press, voice command, or other suitable mechanism. In another exemplary embodiment, the inspection commands or data can include inspection data having one or more fields, parameters, characteristics, or metadata, related to an inspection. The analysis decision tree control logic450then proceeds to step404. At step404, in one embodiment, the control logic450can execute a decision tree branch corresponding to a signal requiring a locomotive to stop based on extracted data. For example, the signal can correspond to communication between the locomotive and at least one network node in response to an enforcement event. In an example, the extracted data can include information and parameters corresponding to the signal and the enforcement event, such as a PTC brake event. In an example, the extracted data can correspond to system log components of the enforcement event. In another example, the control logic450can identify the characteristics of the enforcement event. For example, the characteristics can include an event time, a primary CPU monitoring the train during the enforcement, an appropriate component software version, a high-level system health scan, among other relevant characteristics. In another example, the control logic450can receive the signal from an FRM. The control logic450proceeds to step452. At step452, in one embodiment, the control logic450can determine whether the signal changed from a known value to a stop instruction based on the extracted data. For example, the extracted data can include a list of historic signal values to determine whether the signal changed over time. In an example, the control logic450can compare the signal to the list of historic signal values to identify whether the signal changed from the known value to the stop instruction.
In another example, the control logic450can determine a status of the signal at any time before the enforcement event. If the signal changed from the known value to the stop instruction, the control logic450proceeds to step454. If the signal did not change from the known value to the stop instruction, the control logic450proceeds to step460. At step454, in one embodiment, the control logic450can determine whether a fault is present and active based on the extracted data. For example, the extracted data can include fault information about the enforcement event. In an example, the fault information can indicate whether the enforcement event was in response to one or more faults, such as a dropped signal or a braking error. In another example, the fault can be active when the fault persists after the enforcement event occurs. If the fault is present and active, the control logic450proceeds to step458. If the fault is not present or active, the control logic450proceeds to step456. At step456, in one embodiment, the control logic450can classify, based on the extracted data, the defect to be the signal was dropped. For example, the extracted data can include information indicating a status of the signal. In an example, the status of the signal can indicate a current status, previous status, historic status values, among other status indicators. In another example, the signal can be dropped in response to the locomotive failing to establish or maintain connection with one or more network elements. In an example, the control logic450can identify when the signal was dropped. When the signal is dropped, the enforcement event is activated. At step458, in one embodiment, the control logic450can classify, based on the extracted data, the defect to be a braking error. For example, the extracted data can include information indicating operability of a braking system on the locomotive. 
In an example, the braking error can occur based on an inability of the locomotive to apply the braking system. In another example, the control logic450can identify when the braking error occurs. When the braking error is present, the enforcement event is activated. At step460, in one embodiment, the control logic450can determine whether a status of the signal changed from an unknown value to a stop instruction based on the extracted data. For example, the extracted data can include information corresponding to a list of historic states to determine whether the state of the signal changed. In an example, the control logic450can compare the list of historic states to the state of the signal to determine whether the state changed. If the status of the signal changed from the unknown value to the stop instruction, the control logic450proceeds to step470. If the status of the signal did not change from the unknown value to the stop instruction, the control logic450proceeds to step462. At step462, in one embodiment, the control logic450can determine whether a speed of the locomotive at a distance away from a target was greater than a target speed based on the extracted data. For example, the extracted data can include speed information about the locomotive. In another example, the predetermined speed can include a user-defined value, a calculated speed based on a safety protocol, or another measurement. In another example, the control logic450can compare the speed of the locomotive with the predetermined speed to determine whether the speed of the locomotive was greater than the predetermined speed at a time before or during the enforcement event. If the speed of the locomotive at the distance away from the target is greater than the target speed, the control logic450proceeds to step490. If the speed of the locomotive at the distance away from the target is less than the target speed, the control logic450proceeds to step464. 
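The known-value-to-stop branch walked through above (steps452through458) can be sketched as follows; the dictionary keys are hypothetical labels, not identifiers from the disclosure.

```python
# Hypothetical sketch of steps 452-458 of decision tree control logic 450.

def classify_stop_signal_defect(e):
    """Classify the defect when the signal changed from a known value to a
    stop instruction; return None to continue down the remaining branches."""
    if e["changed_known_to_stop"]:                   # step 452
        if e["fault_present_and_active"]:            # step 454
            return "braking error"                   # step 458
        return "signal dropped"                      # step 456
    return None  # steps 460 onward handle the other signal transitions
```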
At step 464, in one embodiment, the control logic 450 can determine whether a warning value was greater than a target warning value based on the extracted data. For example, the extracted data can include warnings the locomotive received corresponding to the enforcement event. In an example, the target warning value can correspond to the warnings the locomotive received. In another example, the warnings the locomotive received can include a warning to reduce locomotive speed, to change the travel trajectory of the locomotive, or to indicate poor communication quality, among other warnings. If the warning value is greater than the target warning value, the control logic 450 proceeds to step 492. If the warning value is less than the target warning value, the control logic 450 proceeds to step 466. At step 466, in one embodiment, the control logic 450 can determine whether a fault is detected based on the extracted data. For example, the extracted data can include fault information about the enforcement event. In an example, the fault information can indicate whether the enforcement event was in response to one or more faults, such as improper train handling or a braking calculation failure. If the fault was detected, the control logic 450 proceeds to step 494. If the fault was not detected, the control logic 450 proceeds to step 468. At step 468, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as improper handling of the locomotive. For example, the extracted data can include handling information of the locomotive. In an example, the handling information can include a quality value of how staff members on the locomotive handled occupational events. For example, the quality value can include a cumulative measure of handling occupational events, such as routine maintenance, inspection, or conducting of the locomotive, among other occupational events.
In another example, the staff members can include resident engineers aboard the locomotive or a conductor of the locomotive, among other occupational members. In another example, the control logic 450 can compare the handling information with routine operation activity to determine whether improper handling of the locomotive exists. When the locomotive is improperly handled, the enforcement event is activated. At step 470, in one embodiment, the control logic 450 can determine whether a state of the locomotive changed from a disengaged state to an active state based on the extracted data. For example, the extracted data can include historic states of the locomotive to determine whether the state of the locomotive changes. In an example, the state of the locomotive can change in response to a travel status. For example, when the locomotive is moving, the travel status can indicate the state of the locomotive is the active state. Alternatively, when the locomotive is stationary, the travel status can indicate the state of the locomotive is the disengaged state. In another example, the control logic 450 can compare the state of the locomotive with the active state to determine whether the state of the locomotive changed. If the state of the locomotive changed from the disengaged state to the active state, the control logic 450 proceeds to step 480. If the state of the locomotive did not change from the disengaged state to the active state, the control logic 450 proceeds to step 472. At step 472, in one embodiment, the control logic 450 can determine whether a speed of the locomotive at a distance away from a target was less than a target speed based on the extracted data. For example, the extracted data can include speed information about the locomotive. In another example, the target speed can include a user-defined value, a calculated speed based on a safety protocol, or another measurement.
In another example, the control logic 450 can compare the speed of the locomotive with the target speed to determine whether the speed of the locomotive was less than the target speed at a time before or during the enforcement event. If the speed of the locomotive at the distance away from the target is less than the target speed, the control logic 450 proceeds to step 484. If the speed of the locomotive at the distance away from the target is greater than the target speed, the control logic 450 proceeds to step 474. At step 474, in one embodiment, the control logic 450 can determine whether a warning value was greater than a target warning value based on the extracted data. For example, the extracted data can include warnings the locomotive received corresponding to the enforcement event. In an example, the target warning value can correspond to the warnings the locomotive received. In another example, the warnings the locomotive received can include a warning to reduce locomotive speed, to change the travel trajectory of the locomotive, or to indicate poor communication quality, among other warnings. In another example, the control logic 450 can compare the warning value with the target warning value to determine whether the warning value was greater than the target warning value. If the warning value was greater than the target warning value, the control logic 450 proceeds to step 486. If the warning value was less than the target warning value, the control logic 450 proceeds to step 476. At step 476, in one embodiment, the control logic 450 can determine whether a fault is detected based on the extracted data. For example, the extracted data can include fault information about the enforcement event. In an example, the fault information can indicate whether the enforcement event was in response to one or more faults, such as improper train handling or a braking calculation failure. If the fault was detected, the control logic 450 proceeds to step 488.
If the fault was not detected, the control logic 450 proceeds to step 478. At step 478, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as improper handling of the locomotive. For example, the extracted data can include handling information of the locomotive. In an example, the handling information can include a quality value of how staff members on the locomotive handled occupational events. For example, the quality value can include a cumulative measure of handling occupational events, such as routine maintenance, inspection, or conducting of the locomotive, among other occupational events. In another example, the staff members can include resident engineers aboard the locomotive or a conductor of the locomotive, among other occupational members. In another example, the control logic 450 can compare the handling information with routine operation activity to determine whether improper handling of the locomotive exists. When the locomotive was improperly handled, the enforcement event is activated. At step 480, in one embodiment, the control logic 450 can determine whether the locomotive reselected a track based on the extracted data. For example, the extracted data includes track information to determine a travel path of the locomotive. In an example, the control logic 450 can determine whether the locomotive reselected the track in response to the locomotive altering the travel path. In an example, the travel path can include a predetermined path based on a starting point of the locomotive and an end point of the locomotive. In another example, the control logic 450 can compare the travel path of the locomotive with the predetermined path to determine whether the locomotive reselected the track. If the locomotive reselected the track, the control logic 450 proceeds to step 482. If the locomotive did not reselect the track, the control logic 450 determines that no defect is detected.
At step 482, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as a dropped signal. For example, the extracted data can include information indicating a status of the signal. In an example, the status of the signal can indicate a current status, a previous status, or historic status values, among other status indicators. In another example, the signal can be dropped in response to the locomotive failing to establish or maintain a connection with one or more network elements. In another example, the control logic 450 can compare the signal to a normal signal to determine whether the signal was dropped. When the signal was dropped, the enforcement event is activated. At step 484, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as a malfunction of the locomotive. For example, the extracted data can include malfunction information corresponding to functions of the locomotive. In an example, the malfunction information can include the operational ability of systems on, or part of, the locomotive. For example, the systems on, or part of, the locomotive can include mechanical systems, electrical systems, a combination of the mechanical and electrical systems, or another system. In another example, the control logic 450 can compare the malfunction information with normal operations to determine whether the malfunction exists. When the malfunction is present, the enforcement event is activated. At step 486, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as a disregarded warning. For example, the extracted data can include warning information corresponding to at least one warning received by the locomotive in response to the enforcement event. In an example, the warning information can indicate whether a staff member of the locomotive regarded the at least one warning.
In another example, the control logic 450 can identify when the staff member disregarded the warning. When the disregarded warning is present, the enforcement event is activated. At step 488, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as a braking calculation failure. For example, the extracted data can include braking system information corresponding to the locomotive. In an example, the braking calculation failure can occur in response to the locomotive applying a braking system outside of a predetermined braking threshold. For example, the predetermined braking threshold can include a minimum distance within which the locomotive can apply the braking system to maintain a safe environment. In another example, the predetermined braking threshold can correspond to a calculated travel distance based on parameters of the locomotive. In another example, the control logic 450 can compare the braking system information with the predetermined braking threshold to determine whether the braking calculation failure exists. When the braking calculation failure is present, the enforcement event is activated. At step 490, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as a malfunction of the locomotive. For example, the extracted data can include malfunction information corresponding to functions of the locomotive. In an example, the malfunction information can include the operational ability of systems on, or part of, the locomotive. For example, the systems on, or part of, the locomotive can include mechanical systems, electrical systems, a combination of the mechanical and electrical systems, or another system. In another example, the control logic 450 can compare the malfunction information with normal operations to determine whether the malfunction exists. When the malfunction is present, the enforcement event is activated.
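The branching among steps 454 through 494 described above can be condensed into a small decision routine. The following is a sketch only; the dictionary field names are hypothetical and not taken from the disclosure:

```python
def classify_defect(ev):
    """Classify the defect behind an enforcement event.

    `ev` is a hypothetical dict of boolean flags extracted from the
    event log; returns a defect label, or None when no defect is found.
    """
    # Steps 454/456/458: signal changed from a known value to a stop instruction.
    if ev["known_to_stop"]:
        return "braking error" if ev["fault_active"] else "dropped signal"
    # Step 460: signal changed from an unknown value to a stop instruction.
    if ev["unknown_to_stop"]:
        # Step 470: locomotive state changed from disengaged to active.
        if ev["disengaged_to_active"]:
            # Steps 480/482: track reselection implies a dropped signal.
            return "dropped signal" if ev["reselected_track"] else None
        if ev["speed_below_target"]:        # steps 472/484
            return "locomotive malfunction"
        if ev["warning_above_target"]:      # steps 474/486
            return "disregarded warning"
        # Steps 476/488/478.
        return ("braking calculation failure" if ev["fault_detected"]
                else "improper handling")
    if ev["speed_above_target"]:            # steps 462/490
        return "locomotive malfunction"
    if ev["warning_above_target"]:          # steps 464/492
        return "disregarded warning"
    # Steps 466/494/468.
    return ("braking calculation failure" if ev["fault_detected"]
            else "improper handling")
```

For instance, an event whose signal dropped from a known value to a stop instruction with no active fault classifies as a dropped signal, mirroring step 456.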
At step 492, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as a disregarded warning. For example, the extracted data can include warning information corresponding to at least one warning received by the locomotive in response to the enforcement event. In an example, the warning information can indicate whether a staff member of the locomotive regarded the at least one warning. In another example, the control logic 450 can identify when the staff member disregarded the warning. When the disregarded warning is present, the enforcement event is activated. At step 494, in one embodiment, the control logic 450 can classify, based on the extracted data, the defect as a braking calculation failure. For example, the extracted data can include braking system information corresponding to the locomotive. In an example, the braking calculation failure can occur in response to the locomotive applying a braking system outside of a predetermined braking threshold. For example, the predetermined braking threshold can include a minimum distance within which the locomotive can apply the braking system to maintain a safe environment. In another example, the predetermined braking threshold can correspond to a calculated travel distance based on parameters of the locomotive. In another example, the control logic 450 can compare the braking system information with the predetermined braking threshold to determine whether the braking calculation failure exists. When the braking calculation failure is present, the enforcement event is activated. The present disclosure achieves at least the following advantages: 1. performs root cause analysis for various types of data structures using an automation engine; 2. increases efficiency of inspectors performing the root cause analysis by automating workflow; 3. enables accurate detection of train events such as PTC brake events and identifies the root cause of such events; and 4. provides an analytical framework to perform root cause analysis using data extraction and analysis models. Persons skilled in the art will readily understand that the advantages and objectives described above would not be possible without the particular combination of computer hardware and other structural components and mechanisms assembled in this inventive system and described herein. Additionally, the algorithms, methods, and processes disclosed herein improve and transform any general-purpose computer or processor disclosed in this specification and drawings into a special-purpose computer programmed to perform the disclosed algorithms, methods, and processes to achieve the aforementioned functionality, advantages, and objectives. It will be further understood that a variety of programming tools, known to persons skilled in the art, are available for generating and implementing the features and operations described in the foregoing. Moreover, the particular choice of programming tool(s) may be governed by the specific objectives and constraints placed on the implementation selected for realizing the concepts set forth herein and in the appended claims. The description in this patent document should not be read as implying that any particular element, step, or function is an essential or critical element that must be included in the claim scope. Also, none of the claims is intended to invoke 35 U.S.C. § 112(f) with respect to any of the appended claims or claim elements unless the exact words "means for" or "step for" are explicitly used in the particular claim, followed by a participle phrase identifying a function.
Use of terms such as (but not limited to) "mechanism," "module," "device," "unit," "component," "element," "member," "apparatus," "machine," "system," "processor," "processing device," or "controller" within a claim is understood and intended to refer to structures known to those skilled in the relevant art, as further modified or enhanced by the features of the claims themselves, and is not intended to invoke 35 U.S.C. § 112(f). The disclosure may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. For example, each of the new structures described herein may be modified to suit particular local variations or requirements while retaining their basic configurations or structural relationships with each other or while performing the same or similar functions described herein. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention is established by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Further, the individual elements of the claims are not well-understood, routine, or conventional. Instead, the claims are directed to the unconventional inventive concept described in the specification.
11861510 | DETAILED DESCRIPTION In the following description, for purposes of explanation and non-limitation, specific details are set forth, such as particular nodes, functional entities, techniques, protocols, etc., in order to provide an understanding of the described technology. It will be apparent to one skilled in the art that other embodiments may be practiced apart from the specific details described below. In other instances, detailed descriptions of well-known methods, devices, techniques, etc. are omitted so as not to obscure the description with unnecessary detail. Sections are used in this Detailed Description solely in order to orient the reader as to the general subject matter of each section; as will be seen below, the description of many features spans multiple sections, and headings should not be read as affecting the meaning of the description included in any section. In many places in this document, including but not limited to in the description of FIG. 1, software modules and actions performed by software modules are described. This is done for ease of description; it should be understood that, whenever it is described in this document that a software module performs any action, the action is in actuality performed by underlying hardware elements (such as a processor and a memory device) according to the instructions that comprise the software module. Further details regarding this are provided below in, among other places, the description of FIG. 5. Overview Certain example embodiments relate to automatically designing or generating new datasets from heterogeneous data sources using a feedback loop and theory of constraints. In certain examples, data is collected from multiple different data sources, and then subsequently processed to generate engineered datasets (or engineered features).
Original and engineered features are stored in a hierarchical graph that is used to select features that will be included in a feature set that is further analyzed or processed. The selected features of the feature set are processed using machine learning to generate metagradients used to change or update the hierarchical graph of the features (and the datasets to which those features belong). A new set of features is then selected and the process continues to iterate until a selected set of features produces a model and output data that matches at least one target signal. A target signal may be, for example, a production quota, a next-day prediction of a stock market, next-quarter earnings for a company, a target Sharpe ratio, a weather prediction, etc. The technological improvements offered by the techniques herein can be applied in different domains, from health care, to media, to education, to finance, to security, to transportation, and many other industries and domains that have different problems—but have large (or many different) datasets that may be analyzed using the techniques herein. The approach discussed herein allows for processing of hundreds or thousands (or even more) of different features (and a computationally infeasible number of combinations of those features). Description of FIG. 1 FIG. 1 is an example computer system architecture diagram according to certain example embodiments. Computer system 100 receives and processes datasets A, B, C, up to dataset N. These datasets may be from the same data source (e.g., data source A) or different data sources (e.g., data sources A, B, C, N, . . . ). Computer system 100 is configured to handle an arbitrary number of datasets from an arbitrary number of data sources. Datasets may contain different types of data and may be in any form.
Different types of data include, for example, temperature data that is gathered from one or more temperature sensors, electronic exchange data for one or more securities, service call data such as the total number of calls that a company received in a 24-hour period, and many other types of data. Virtually any type of data may be included in the multiple different datasets that are supplied by data sources to system 100. Indeed, the techniques herein are designed to work with hundreds or thousands of different datasets and their corresponding sources. In certain instances, the data sources may include internal data sources (e.g., that are operated by the same organization that is operating computer system 100). Data sources may include data wire service providers (e.g., a data "wire" service similar to the way Reuters is a news service). In certain instances, the data sources may be subscribed to by system 100. The data sources and the data formats used for the datasets supplied by those data sources may be heterogeneous or homogeneous in nature, and as such any type of data format may be acceptable. In certain examples, the data from the various data sources may be stored in a data warehouse or data lake (not shown) that may then be queried and operated on by computer system 100. For example, data sources may supply datasets A, B, C, N to such a data warehouse and system 100 may access the data warehouse to process datasets stored therein. An ETL (extract, transform, load) module 102 is part of computer system 100. In certain examples, the ETL module may also be its own dedicated computer system that communicates with computer system 100.
ETL module 102 is responsible for: 1) extracting the data from the data sources (or the data warehouse where the data is stored); 2) transforming the data from its stored (or original) format into a format that is more suitable for analysis by training system 104; and 3) loading the data into another database or other storage for further processing by training system 104 and the modules therein. In certain example embodiments, training system 104 can be its own computer system that is separate from the ETL module 102. Training system 104 may be implemented in a cloud-based computer environment and may be implemented across one or more physical computer nodes (e.g., as shown in FIG. 5). In certain examples, different components or modules of training system 104 may be implemented on virtual machines, which may be implemented on corresponding physical computer hardware. Training system 104 includes three separate modules that operate to generate a dataset 112, that can be applied to a model (see FIG. 4), to achieve a target signal 114. These three modules include the feature engineering module 106 (described in FIG. 2), the dataset scanner module 108 (described in FIG. 3), and the strategy designer module 110 (described in FIG. 4). The feature engineering module 106 operates on data passed through the ETL module 102. Module 106 generates new columns of data that are based on the obtained source data. As one non-limiting example, the feature engineering module 106 may take the average of two (or more) pieces of data to generate a third piece of data. For example, it may average the high and low temperature values from an original dataset to create a third value (the average). As explained below, other types of functions, transformations, and the like may be used for engineering feature data. The engineered datasets (which may include the original datasets) are passed on to the dataset scanner module 108.
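The three ETL responsibilities enumerated above can be sketched as a minimal pipeline. This is an illustrative sketch: the in-memory CSV source, the field names, and the plain-list "load" target are assumptions, not details from the disclosure:

```python
import csv
import io

def extract(raw_csv):
    """1) Extract: pull rows from a source (here, an in-memory CSV string)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """2) Transform: convert stored strings into analysis-friendly values
    (here, Fahrenheit text fields become Celsius floats)."""
    return [{"city": r["city"],
             "temp_c": (float(r["temp_f"]) - 32.0) * 5.0 / 9.0}
            for r in rows]

def load(rows, store):
    """3) Load: write the transformed rows into storage for further processing."""
    store.extend(rows)
    return store

raw = "city,temp_f\nNYC,68\nBoston,50"
store = load(transform(extract(raw)), [])
```

In a real deployment the load target would be a database or data lake rather than a list, but the extract/transform/load split is the same.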
In certain examples, the functionality provided by the feature engineering module 106 may be skipped; for example, if no engineered features are needed for the processing. Dataset scanner module 108 is part of the feedback loop used by training system 104 to generate the desired outputs (dataset 112 and a model for that dataset) for target signal 114. In certain examples, dataset scanner module 108 uses a heuristic hierarchical graph search on the data (which is organized by dataset and features of the individual datasets) to select a subset of the features (also called columns herein) to test. A selected feature set (multiple features) is then passed on to the strategy designer module 110—the other part of the feedback loop employed by training system 104. Strategy designer module 110 uses machine learning to develop a strategy for the data of the selected feature set that is received from dataset scanner module 108. This strategy, along with metagradient information (also called metagradients or metagradient data) and the model that is generated for the selected feature set, cycles back to the dataset scanner 108, which iterates (taking into account the metagradient information) on what features to select next. This forms a feedback loop between the dataset scanner 108 and the strategy designer module 110. The returned metagradient information may indicate how effective or good each of the features within a selected feature set is with respect to a target and/or how effective the set of features is with respect to a target signal. Description of FIG. 2—Feature Engineering: FIG. 2 illustrates an example feature engineering module 106 of system 100 according to certain example embodiments. Input data 202 includes multiple different datasets D1-Dt. Each dataset may include one or more features X0-Xn (also called columns) that each have individual values (rows).
Input data 202 may be output from ETL module 102 as shown in FIG. 1 and/or a database or other storage system that has been populated with data from the ETL module 102. Feature engineering module 106 receives input data 202 and engineers additional features based on the input data. The engineering of additional features may use transforms, domain-specific encoding, categorical encoding, or other processes that take some subset of the input data to engineer additional features from that input data. One example of how a feature may be engineered could be taking the average of two features to engineer a third feature (e.g., taking the average of data from X0 and X1 to create data values under an engineered feature Xn+1). Other feature engineering may include encodings of the data, compression, decompression, maxes, mins, medians, etc. The nature of the transforms, encodings, or other processes that are performed by the feature engineering module 106 may be based on the type of data being handled. For example, the feature engineering may be selected based on the data being weather data, versus automobile performance data, or data from an electronic exchange computer system. In any event, the process performed by the feature engineering module 106 results in output data 204. The output data 204 includes the original input data 202 (e.g., D1-Dt) plus additional engineered features and the corresponding data for those features that have been engineered or otherwise added by the feature engineering module 106. The engineered features may be generated into one or more additional datasets Dv. In certain examples, features that are engineered from an original dataset may be included in the same engineered dataset. In certain examples, features may be engineered from multiple different datasets. For example, a feature from a first dataset and a feature from a second dataset may be used to engineer a third feature (which may be included in the first, second, or a third new or existing dataset).
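The kinds of transforms described above (e.g., averaging two existing columns to create a third) can be sketched as follows. The column names and the dict-of-lists representation are illustrative assumptions, not the disclosure's data model:

```python
def engineer_features(dataset):
    """Append engineered columns derived from existing ones.

    `dataset` maps a feature name to its column of values; the engineered
    feature names below are hypothetical.
    """
    x0, x1 = dataset["X0"], dataset["X1"]
    engineered = {
        # e.g., the average of two features yields a third feature (Xn+1)
        "X_avg_01": [(a + b) / 2 for a, b in zip(x0, x1)],
        # another simple transform: an element-wise max of the two features
        "X_max_01": [max(a, b) for a, b in zip(x0, x1)],
    }
    # Output includes the original columns plus the engineered ones,
    # mirroring how output data 204 includes input data 202.
    return {**dataset, **engineered}
```

For example, averaging high and low temperature columns of [50, 60] and [70, 80] yields an engineered column of [60.0, 70.0].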
For example, an engineered feature may be a calculated ratio between the average temperature in New York City and the average trading volume of the New York Stock Exchange. In certain examples, all engineered features may belong to their own dataset (e.g., a dataset for engineered features). The features and datasets may be arranged in a hierarchical graph 206 (a type of data structure), with each dataset (e.g., D1-Dv) being a child of a root node and each feature (e.g., X2-X9) being a child of the node for the respective dataset. Thus, an output from feature engineering module 106 may be graph 206. In certain examples, each engineered dataset is included as a new node (e.g., Dv in 206). In certain examples, all engineered features may be included as part of the same engineered feature dataset. In certain examples, engineered features are organized into engineered feature datasets that are based on the original data. Thus, an engineered feature dataset that is based on dataset D1 may include those engineered features that are based on features from D1, while features that are based on other datasets may be included in separate engineered feature datasets. It will be appreciated that the feature engineering module 106 may be configured to operate automatically with any type of input. It will also be appreciated that in certain example embodiments the hierarchical relationship between features and engineered features (e.g., they share the same grandparent node) can help address the combinatorial explosion in selecting features (as discussed further herein). Description of FIG. 3—Dataset Scanner FIG. 3 illustrates an example dataset scanner module 108 of the system shown in FIG. 1 according to certain example embodiments. Dataset scanner module 108 is configured to find a set of features (e.g., a plurality) that perform "well" (e.g., they exceed a predetermined threshold) at obtaining a target signal (e.g., satisfying a given function or model).
The dataset scanner module receives the features, the hierarchy of those features (e.g., tree 206), and dataset and/or feature performance metrics (e.g., metagradient information, or how well a given feature or dataset performs with respect to a given target or metric). This data is then used by the dataset scanner module 108 as described herein. In certain examples, the dataset scanner module 108 will operate with at least one penalty function, at least one constraint function, and/or at least one objective function. In certain examples, each overall use of system 100 for determining a set of features until convergence will use or rely on a single objective function. Such an objective function may remain throughout the process of determining a set of features, model, and strategy for that objective function. In certain examples, different objective functions may be used for different processes for finding features for the same target signal (an example of a target signal may be a buy/sell indicator for a stock, or a weather prediction for a city). The penalty and constraint functions may remain the same through a given process of determining a feature set. In certain examples, these functions may be toggleable such that a user or administrator can turn them on and off on an as-needed basis (e.g., they may be based on user input). For example, multiple different objective functions may be stored within system 100 and may be usable depending on the nature of the target or type of features being sought after. In certain examples, the functions may be switched on (or off) based on how a feature, or set of features, has been processed. The constraint functions indicate circumstances that will not occur when the features are being selected. For example, one constraint may be that every selected feature must be from a different dataset, or that if 4 features are selected then they must be from at least 3 different datasets.
Constraints act to filter out feature sets that are not allowed. Penalty functions may be used to penalize a certain set of circumstances in relation to the selection of the features. In other words, something may be allowed to occur; it just "costs" more for that something (e.g., a feature) to be included and considered. Penalty functions may operate such that the more a penalty function is broken, the "worse" the result will be graded. For example, a penalty function may be that every feature is from the same dataset. If features from the same dataset are selected, then a 1% penalty is applied for each feature from the same dataset (e.g., if 5 features from the same dataset are selected then there is a 5% penalty applied). Objective functions are objectives to be achieved for the feature set. Objective functions may be thought of as defining the value of found solutions (e.g., each feature set that is found by the strategy designer) that are permissible. Objective functions relate to the target or target signal that is being sought (e.g., the target used during training of a model for a given feature set). An example of a target signal in a stock market example may be total returns on investment. An objective function may take a weighted sum of the returns for positions (and a strategy potentially) and subtract, for example, transaction costs (e.g., the cost of performing each trade) or other costs, values, etc., from the weighted sum to achieve a calculation that is "closer" to the actual returns that may be generated. In other words, the objective function may, in certain cases, be used to further act upon or use the signal from the model and/or output of the strategy. In certain examples, multiple ones of each of the functions may be included (e.g., 4 penalty functions, 2 constraint functions, and 3 objective functions—however, in most cases one objective function will be used per process).
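The interplay of the three function types described above can be sketched as follows: a constraint rejects a candidate feature set outright, a penalty discounts its score, and the objective values the result. The specific rules (a three-feature-per-dataset cap, a 1% same-dataset penalty, a returns-minus-costs objective) are illustrative assumptions:

```python
def evaluate_candidate(features, returns, cost_per_trade=0.01):
    """Score a candidate feature set; return None if a constraint rejects it.

    `features` is a list of (dataset, feature) pairs and `returns` a list
    of per-position returns; both structures are hypothetical.
    """
    datasets = [d for d, _ in features]
    # Constraint: no more than 3 features drawn from any single dataset.
    if max(datasets.count(d) for d in set(datasets)) > 3:
        return None
    # Objective: sum of returns minus transaction costs.
    score = sum(returns) - cost_per_trade * len(returns)
    # Penalty: 1% per feature that shares a dataset with another selection.
    duplicates = len(datasets) - len(set(datasets))
    return score * (1 - 0.01 * duplicates)
```

A candidate with two features from dataset A and one from B passes the constraint but pays a 1% penalty for the duplicated dataset, while a candidate with four features from A is filtered out entirely.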
These functions are used, in conjunction with the information (e.g., metagradient information) returned from the strategy designer110, to select features from the graph search. The dataset scanner module108includes two sub-modules: one that determines what features to select (the hierarchical heuristic graph search module302), and one that iteratively improves upon the sets of features that are found (the optimizer module304). Both of these modules thus work in tandem to select features that are then passed to strategy designer110for testing. The hierarchical heuristic graph search module302starts (e.g., a first iteration of the process) with a full set of the features and datasets that are provided by the feature engineering module106. This may be, for example, graph206. Module302is responsible for initially selecting which features are to be tested. In certain examples, selection of features for a feature set may occur by selecting features based on which dataset they belong to rather than randomly picking features. This technique decreases the search space for selecting features, thus increasing the efficiency of the training process. Consider an example where datasets A, B, and C include corresponding features. Module302may initially pick features according to the parent datasets. For example, AAC is initially selected such that one feature from dataset A is selected, another feature from dataset A is selected, and one feature from dataset C is selected. In certain examples, the selection of an initial group of datasets (e.g., AAC) may be at random. In certain examples, the initial selection may be based on a weighted probability for each of the datasets and/or features within those datasets (e.g., AAC may have been selected because dataset A is weighted as 2, C as 1, and B as 0.25). Thus, the initially randomized selection may be influenced by the weights.
Once the parent datasets are selected, the individual features within those datasets may be selected. In certain examples, this selection may be random. Thus, for example, if dataset A includes features 1-6, module302may select feature 1 and feature 4. As with the selection of datasets, the selection of features may be influenced by the weights assigned to features that are within a dataset. In certain examples, the selection of features is performed without replacement. Thus, a feature will not be selected twice, as the total set of features to draw from is decreased upon selection of that feature. In certain examples, a constraint may be applied to control how the features are selected. For example, a constraint may prevent more than n features from dataset A being selected. The hierarchical heuristic graph search module302may also include a heuristic that controls how often a given dataset is sampled. In certain examples, a relevancy value or other score for a dataset may be stored that relates to the target signal that is being sought. For example, if the target signal is predicting the next hurricane, a dataset with temperature values in the South Atlantic may be weighted more than a dataset with customer service call data. A heuristic may be applied to individual features as well, based on how those features map to the target signal. The likelihood of a dataset (or feature) being more or less relevant to a given target may be determined by testing just the dataset (or feature) against a given target. In certain example embodiments, each dataset and/or feature can include a relevancy score used by the heuristic to determine how often a given dataset or feature will be picked. The relevancy score (e.g., the weighted probability discussed above) may indicate how relevant the given dataset or feature is to a target (e.g., its relevance to the likelihood of a hurricane appearing).
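A minimal sketch of the hierarchy-aware selection described above, under assumed datasets and weights: parent datasets are drawn with weight-biased probability (e.g., A weighted 2, C as 1, B as 0.25), and features within the chosen datasets are then drawn without replacement:

```python
# Sketch of hierarchy-aware, weighted feature selection without replacement.
# Dataset contents and relevancy weights are illustrative assumptions.
import random

def pick_feature_set(datasets, weights, size, rng):
    """Draw `size` (dataset, feature) pairs; dataset choice is biased by the
    relevancy weights, feature choice within a dataset is uniform and
    without replacement."""
    remaining = {name: list(features) for name, features in datasets.items()}
    chosen = []
    while len(chosen) < size:
        # Only datasets that still have un-drawn features are candidates.
        names = [n for n in remaining if remaining[n]]
        name = rng.choices(names, weights=[weights[n] for n in names])[0]
        feature = remaining[name].pop(rng.randrange(len(remaining[name])))
        chosen.append((name, feature))
    return chosen

rng = random.Random(42)
datasets = {"A": [1, 2, 3, 4, 5, 6], "B": [1, 2], "C": [1, 2, 3]}
weights = {"A": 2.0, "B": 0.25, "C": 1.0}  # e.g. A weighted 2, C as 1, B as 0.25
print(pick_feature_set(datasets, weights, 3, rng))
```

Because each drawn feature is popped from `remaining`, no feature can be selected twice, matching the without-replacement behavior described above.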
Once the hierarchical heuristic graph search module302has selected an initial feature set, that feature set may be passed to the optimizer module304. In certain examples, the optimizer is skipped for the initial feature set, which is instead passed directly to the strategy designer110. The optimizer module304is configured to take an initial selection of features and optimize those features (and their corresponding datasets) to be closer to the target function. In other words, if an initial selection of features is drawn from datasets AAC, the optimizer304may (after multiple iterations) determine that datasets “R,” “E,” and “W” provide the best result (along with features within those individual datasets). This is accomplished by passing the selected features to the strategy designer110and subsequently optimizing which features to select from graph206based on returned metagradient information and/or other data returned from the strategy designer110. As an example, suppose A1A3C4(e.g., the first and third features in dataset A and the fourth feature in dataset C) is a set of features that is processed by strategy designer110. The metagradient information that is returned from the strategy designer110may include information that A1performed well (e.g., represented by a value of 2), A3did not perform well (a value of 0.1), C4performed okay (represented by a value of 1), and the combination of A1C4performed very well (represented by a value of 4). This metagradient information may be used to modify the selected features for the next iteration by dropping A3and keeping A1and C4(other modifications may be possible based on particular use cases). The graph search in302and/or the optimizer304may then be used to select a replacement for A3(the dropped feature) to be included in the selected feature set that is then passed back to the strategy designer110for processing.
In certain examples, the newly selected feature for the selected feature set is selected without replacement (e.g., A3, A1, and C4are removed from the set of possible selections for this next iteration—A1and C4are already selected and A3has been dropped). In certain examples, A3is dropped from the possible set of features for more than just the next iteration (e.g., it may be completely removed from the set of possible features for all future iterations for this target). Accordingly, in certain examples, the total number of features to select from may gradually decrease as more and more iterations are performed (and more and more features are “dropped”). In certain examples, a feature (e.g., A3in the above example) may be removed as a possible choice for just one iteration (or some other predetermined number of iterations). In certain examples, the number of iterations that a feature is removed is based on the metagradient for that feature (e.g., the worse the metagradient, the more iterations that feature is removed as a possible selection). In certain examples, the metagradient information410is used to determine what features (or group of features—i.e., any subset of features in the feature set) within the selected feature set (e.g., the set processed by the strategy designer110) should be kept and what features should be discarded (i.e., how the feature set should be changed and/or how features within that feature set should be replaced and/or kept). In certain examples, the type of metagradient information may depend on the type of optimization process being used. In certain examples, the dataset scanner module108may include different types of optimizers. Optimizers may be based on one or more of the following optimization algorithms: gradient descent, Newton-Raphson, Nelder-Mead, ant colony, greedy search, and others. The optimizer that is used may depend on a particular problem or target.
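The drop-and-ban behavior described in this paragraph might be sketched as follows; the cutoff, the ban-length formula, and the feature names are assumptions for illustration only:

```python
# Hedged sketch of the drop-and-replace step: features whose metagradient
# falls below a cutoff are dropped, and the worse the score, the more
# iterations the feature is banned from reselection. The cutoff value and
# ban-length formula are illustrative assumptions.

def update_feature_set(feature_set, metagradients, cutoff=0.5, max_ban=3):
    """Split the set into kept features and {dropped feature: ban length}."""
    kept, banned = [], {}
    for feature in feature_set:
        score = metagradients[feature]
        if score >= cutoff:
            kept.append(feature)
        else:
            # Lower score -> longer ban, capped at max_ban iterations.
            banned[feature] = min(max_ban, max(1, round(cutoff / score)))
    return kept, banned

# A1 performed well (2), A3 poorly (0.1), C4 okay (1) -- per the example above.
kept, banned = update_feature_set(["A1", "A3", "C4"], {"A1": 2.0, "A3": 0.1, "C4": 1.0})
print(kept)    # ['A1', 'C4']
print(banned)  # {'A3': 3}
```

The graph search would then draw a replacement for A3 from the features that are neither kept nor currently banned.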
For example, if the model that is being developed is a decision tree, then the optimizer may determine the level of use of a particular feature in the decision tree. In other words, if feature “A” is used 5 times and feature “B” is used 1 time, the optimizer may favor including feature “A” in the feature set that is being analyzed (e.g., it will be picked more often by the optimizer). The following is an illustrative example of how the optimizer may work. First, an initial grouping of datasets is determined. For example, AAB—or one feature from dataset A, another feature from dataset A, and a third feature from dataset B. Next, individual features from those datasets may be selected—e.g., A1, A4, and B6(e.g., feature 1 of A, feature 4 of A, and feature 6 of B). This feature group is then passed to the strategy designer110, which returns metagradient information, model data, and a generated strategy for the tested feature set. As noted herein, the metagradient information may represent how good each of the features within the tested feature set is and how good the collective set of features is. Metagradient information may represent the sensitivity of the objective function to features within a tested feature set and the fitness of an overall collective set of features. In certain examples, each instance of metagradient information returned from the strategy designer may represent a score of how likely that feature set will be used again (e.g., how well it performed). In certain examples, multiple different instances of metagradient information may be combined to infer which features within those selected feature sets are “more” relevant or better for achieving a given target. For example, suppose features A, B, and C are in a first feature set and features A, B, and D are in a second feature set.
The returned metagradient information for ABC may be a value of 2 (with higher numbers indicating the combination performed “better”) and the metagradient information for ABD may be 1. Thus, the individual metagradient information with respect to A and B may be 3 (2+1), C may be 2, and D may be 1. The dataset scanner module may then (e.g., as part of the perturbation function) adjust the transitional probability of selecting A, B, C, and D according to the returned metagradient information. This may cause A and B to be selected (relatively) more often than previously (or be more likely to be kept in the feature set that is next to be tested), while D is not selected as often as A and B. The optimizer module304uses the returned strategy and the returned metagradient information to update the heuristics and transitional probabilities in the graph search that is used to store the datasets and features. In certain examples, constraint functions and/or penalty functions are also applied. Accordingly, the original graph and/or the heuristics associated with that graph may be updated and/or modified based on the returned strategy and metagradient information. Converged block306inFIG.3tests whether the convergence criteria are met for the selected feature set. If there is convergence, then a final feature set310and/or the dataset that is based on the feature set is output. If there is no convergence, then the newly selected features are passed to the strategy designer110. In certain examples, convergence may be achieved by reaching or exceeding a predefined level of performance (e.g., returns on investment, accuracy). In certain examples, convergence may be satisfied by going through a predefined number of iterations without producing a new best performing model. In certain examples, convergence may be satisfied when a consecutive number of iterations have a performance improvement that is below a defined threshold.
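The two steps just described—folding metagradient information from tested feature sets into per-feature scores (the A/B/C/D example), and testing for convergence via an improvement threshold—might be sketched together as follows. All names, the normalization scheme, and the patience parameter are illustrative assumptions:

```python
# Hedged sketch: (1) combine per-set metagradient values into per-feature
# scores used to bias future selection; (2) a convergence test based on an
# improvement threshold over recent iterations. Names/values are assumptions.
from collections import defaultdict

def combine_metagradients(results):
    """results: iterable of (feature_set, metagradient value) pairs."""
    scores = defaultdict(float)
    for feature_set, value in results:
        for feature in feature_set:
            scores[feature] += value
    return dict(scores)

def converged(history, threshold=0.01, patience=3):
    """True when the last `patience` iterations each improved the prior best
    performance by less than `threshold`."""
    if len(history) <= patience:
        return False
    best = max(history[:-patience])
    return all(value - best < threshold for value in history[-patience:])

# ABC scored 2 and ABD scored 1, so A and B accumulate 3, C gets 2, D gets 1.
scores = combine_metagradients([({"A", "B", "C"}, 2.0), ({"A", "B", "D"}, 1.0)])
print(scores["A"], scores["C"], scores["D"])  # 3.0 2.0 1.0 -- A and B now favored

history = [1.0, 1.0002, 1.001, 0.9999]
print(converged(history, threshold=0.01, patience=3))  # True
```

The resulting scores could then be renormalized into the transitional probabilities used by the graph search, so A and B are kept or re-drawn more often than D.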
For example, if a threshold is 0.01 and a “best model” is 1, and a series of models is developed with values of 1, 1.0002, 1.001, and 0.9999, then the models may be considered to have converged. In certain examples, convergence may be achieved when a best model is not improved upon after a given number of iterations. Description of Strategy Designer Module—FIG.4: FIG.4illustrates an example strategy designer module110of the system shown inFIG.1according to certain example embodiments. The feature list or set that is selected by the dataset scanner108is passed to the strategy designer module110. Upon reception of the feature list, the strategy designer module (or another module) will retrieve the data in the columns associated with those features and generate a new dataset. During this process, NaNs (data that is not a number) are handled (e.g., set to zero or removed from the dataset) and the custom dataset is passed to the strategy module110. Specifically, the custom dataset is passed into the expectation learner404. Expectation learning is part of the machine learning module402, and the expectation learner404learns the target. The data is also passed to the policy learner406, which also accepts the expectations and errors from the expectation learner404. The policy learner module406takes the expectations about the target and converts them into an actionable strategy412(which is scaled by the scale converter module408). A strategy may, in certain instances, be called a policy. The strategy is one that is applied to the selected feature set to achieve (or at least seek to achieve) the target signal. For example, if the target signal is a return on investment, then the derived strategy for a selected group of features is one that will seek to achieve that target for those features. One of the potential outputs from machine learning module402is model414.
Model414can be used for future datasets for the selected feature set to determine a strategy that should be implemented based on the input data. Thus, if a particular model is developed using the techniques herein with a feature set list A, then that same model can be used again in 6 months with the same feature set list A that has updated data (e.g., to include the new data from the last 6 months) to develop a strategy that is based on or uses the originally created model. Model414may include or refer to parameters of a model, the model parameterization, the model that is generated, and/or the strategy412(sometimes called a policy). Accordingly, in certain examples, the generated model414and strategy412from strategy designer module110may be the same. The following is an example where model414and strategy412may be the same (or nearly so). Model414is generated to provide a signal for buying or selling a stock (e.g., each day). A strategy412may be generated that uses that buy/sell signal to execute buying and selling the given stock (or a group of stocks, etc.). In such instances, the model and the strategy are functionally identical (or at least very similar). However, another strategy412may be generated that uses the same buy/sell signal from the same model and further acts upon the buy/sell signal. For example, a strategy may use the signal such that buys are only executed when there are two consecutive days in which the buy signal is output by the model. In such an instance, the strategy412further acts upon the signals output from the model. As another example, consider a model that predicts whether it will rain on a given day (e.g., the output from the model is just a yes/no rain prediction). A strategy that makes use of that model output (e.g., it acts upon that output) may be to decide whether a person should bring an umbrella.
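The two-consecutive-day example above can be sketched as a small policy that further acts upon the model's signal; the signal values and action names here are assumptions for illustration:

```python
# Illustrative sketch of a strategy (policy) acting on a model's buy/sell
# signal: a buy is only executed after two consecutive buy signals, as in
# the example above. Signal and action names are assumptions.

def two_day_confirmation(signals):
    """Map a daily buy/sell signal stream to executed actions."""
    actions = []
    for i, signal in enumerate(signals):
        if signal == "buy" and i > 0 and signals[i - 1] == "buy":
            actions.append("execute_buy")
        else:
            actions.append("hold")
    return actions

daily = ["buy", "buy", "sell", "buy", "buy"]
print(two_day_confirmation(daily))
# ['hold', 'execute_buy', 'hold', 'hold', 'execute_buy']
```

Here the model alone (the raw signal stream) and the strategy (the executed actions) differ, illustrating how a strategy can further act upon the model's output rather than mirror it.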
Scale conversion408takes the output signal from the machine learning module402and generates a human-understandable form (or other machine-readable form). In certain instances, the scale conversion408acts as a translator between the machine learning module402and whatever is consuming the output from the machine learning module402. If the consumer is a human, this may result in, for example, a red light/green light output, or a plus and a minus output. The metagradient calculator400takes the resulting model414that is developed by the machine learning module402and processes that model to determine what is important about the model. This may use, for example, LIME (Local Interpretable Model-Agnostic Explanations), Deep Taylor Decomposition/Expansion, Gini scores, and the like. In other words, the model developed by the machine learning module402is processed to obtain metagradient information410about the model (or features within the model). It will be appreciated that different techniques for acquiring metagradient information410may be used according to the techniques described herein. In the end, the strategy designer110returns to the dataset scanner108the metagradient information410, the model414that converts the input signals to the output signals, and/or a strategy (e.g., one that has been scaled by the scale conversion module408). This information may be used to further evaluate and perturb the graph search that is performed by the dataset scanner module108. In certain examples, the strategy may be passed to a user to determine how to proceed with that particular developed strategy.
Example Pseudocode The following is example pseudo code for the dataset scanner108:

  Inputs:
    F   : set of all feature sets
    H_p : probability of selecting a feature in a hierarchy of all features
    |f| : number of features to find
  1: f = n features drawn with probability H_p from F
  2: while not done (e.g., not converged):
  3:   StrategyDesigner(f, loss) → Strategy, Model, Metagradients
  4:   if J(Strategy) ≥ max(J(HistoricStrategies)):
  5:     Strategy, Model, f → best strategy, best model, best f
  6:   Perturb_f(Metagradients, H_p) → f
  7: return (best strategy, best model, best f)

For the pseudo code example for the dataset scanner108, the set of all feature sets (or all features) is provided as “F”. The probability of selecting a feature for a given selected feature set is provided as “H_p”. As explained herein, this value may be static (e.g., the relevance value of a given feature) and may be further modified by the metagradient data that is generated during the processing performed by the strategy designer. In certain examples, the number of features to find (e.g., that will be included in a given feature set) may be provided as “|f|”. In certain examples, this may be a maximum for the number of features, or it may be a strict value such that all selected feature sets have that number of features. Next, a selected feature set is generated from the set of all feature sets. The features within that selected set may be selected based on the H_p probability. Once a first feature set is selected, the process continues until convergence. During the process, each selected feature set and a loss function are provided to the strategy designer110, which returns a strategy, model, and metagradient data. If the objective that is achieved by the strategy (e.g., “J(Strategy)”) is better than all previously achieved objectives for prior developed strategies, then the new best strategy is stored along with its corresponding model and the features that were used to develop the model and strategy.
A new feature set is then selected (or the features within the feature set are replaced/changed) based on perturbing the set of all possible feature sets (or all possible features). This may take into account the returned metagradient information and/or the initial selection probabilities. Once convergence is achieved, the best strategy, best model, and best selected feature set are returned as output. The following is example pseudo code for the strategy designer110:

  Inputs:
    f : a set of features
    L : a loss function
  1: randomly initialize θ
  2: train θ such that L(θ) is minimized
  3: calculate objective J(θ)
  4: calculate metagradients of θ relative to J
  5: return (J(θ), θ, Metagradients)

For the pseudo code example of the strategy designer110, inputs may include a selected feature set (f) and a loss function (L). The initialization of θ may include the initialization of the parameters of the model, the model parameterization, the model, and/or the strategy (sometimes called a policy). The model (e.g., θ) is then trained on the loss function in a way that minimizes the loss function with respect to the given model and/or strategy. This may include generating the model, parameters of the model, the model parameterization, and/or the strategy. Once the model is trained, the objective J(θ), which is sometimes called the objective function, is calculated with respect to the trained model and/or strategy (e.g., it provides a value for the given feature set f). Metagradient information is then calculated for the model and/or strategy with respect to the calculated objective. Such metagradients may be per feature and/or per feature set. Description ofFIG.5 FIG.5is a block diagram of an example computing device500(which may also be referred to, for example, as a “computing device,” “computer system,” or “computing system”) according to some embodiments.
In some embodiments, the computing device500includes one or more of the following: one or more processors502; one or more memory devices504; one or more network interface devices506; one or more display interfaces508; and one or more user input adapters510. Additionally, in some embodiments, the computing device500is connected to or includes a display device512. As will be explained below, these elements (e.g., the processors502, memory devices504, network interface devices506, display interfaces508, user input adapters510, display device512) are hardware devices (for example, electronic circuits or combinations of circuits) that are configured to perform various different functions for the computing device500. In some embodiments, each or any of the processors502is or includes, for example, a single- or multi-core processor, a microprocessor (e.g., which may be referred to as a central processing unit or CPU), a digital signal processor (DSP), a microprocessor in association with a DSP core, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) circuit, or a system-on-a-chip (SOC) (e.g., an integrated circuit that includes a CPU and other hardware components such as memory, networking interfaces, and the like). And/or, in some embodiments, each or any of the processors502uses an instruction set architecture such as x86 or Advanced RISC Machine (ARM). In some embodiments, each or any of the memory devices504is or includes a random access memory (RAM) (such as a Dynamic RAM (DRAM) or Static RAM (SRAM)), a flash memory (based on, e.g., NAND or NOR technology), a hard disk, a magneto-optical medium, an optical medium, cache memory, a register (e.g., that holds instructions), or other type of device that performs the volatile or non-volatile storage of data and/or instructions (e.g., software that is executed on or by processors502). Memory devices504are examples of non-transitory computer-readable storage media.
In some embodiments, each or any of the network interface devices506includes one or more circuits (such as a baseband processor and/or a wired or wireless transceiver), and implements layer one, layer two, and/or higher layers for one or more wired communications technologies (such as Ethernet (IEEE 802.3)) and/or wireless communications technologies (such as Bluetooth, WiFi (IEEE 802.11), GSM, CDMA2000, UMTS, LTE, LTE-Advanced (LTE-A), and/or other short-range, mid-range, and/or long-range wireless communications technologies). Transceivers may comprise circuitry for a transmitter and a receiver. The transmitter and receiver may share a common housing and may share some or all of the circuitry in the housing to perform transmission and reception. In some embodiments, the transmitter and receiver of a transceiver may not share any common circuitry and/or may be in the same or separate housings. In some embodiments, each or any of the display interfaces508is or includes one or more circuits that receive data from the processors502, generate (e.g., via a discrete GPU, an integrated GPU, a CPU executing graphical processing, or the like) corresponding image data based on the received data, and/or output (e.g., a High-Definition Multimedia Interface (HDMI), a DisplayPort Interface, a Video Graphics Array (VGA) interface, a Digital Video Interface (DVI), or the like), the generated image data to the display device512, which displays the image data. Alternatively or additionally, in some embodiments, each or any of the display interfaces508is or includes, for example, a video card, video adapter, or graphics processing unit (GPU). 
In some embodiments, each or any of the user input adapters510is or includes one or more circuits that receive and process user input data from one or more user input devices (not shown inFIG.5) that are included in, attached to, or otherwise in communication with the computing device500, and that output data based on the received input data to the processors502. Alternatively or additionally, in some embodiments each or any of the user input adapters510is or includes, for example, a PS/2 interface, a USB interface, a touchscreen controller, or the like; and/or the user input adapters510facilitates input from user input devices (not shown inFIG.5) such as, for example, a keyboard, mouse, trackpad, touchscreen, etc. In some embodiments, the display device512may be a Liquid Crystal Display (LCD) display, Light Emitting Diode (LED) display, or other type of display device. In embodiments where the display device512is a component of the computing device500(e.g., the computing device and the display device are included in a unified housing), the display device512may be a touchscreen display or non-touchscreen display. In embodiments where the display device512is connected to the computing device500(e.g., is external to the computing device500and communicates with the computing device500via a wire and/or via wireless communication technology), the display device512is, for example, an external monitor, projector, television, or display screen. In various embodiments, the computing device500includes one, two, three, four, or more of each or any of the above-mentioned elements (e.g., the processors502, memory devices504, network interface devices506, display interfaces508, and user input adapters510).
Alternatively or additionally, in some embodiments, the computing device500includes one or more of: a processing system that includes the processors502; a memory or storage system that includes the memory devices504; and a network interface system that includes the network interface devices506. The computing device500may be arranged, in various embodiments, in many different ways. As just one example, the computing device500may be arranged such that the processors502include: a multi (or single)-core processor; a first network interface device (which implements, for example, WiFi, Bluetooth, NFC, etc.); a second network interface device that implements one or more cellular communication technologies (e.g., 3G, 4G LTE, CDMA, etc.); and memory or storage devices (e.g., RAM, flash memory, or a hard disk). The processor, the first network interface device, the second network interface device, and the memory devices may be integrated as part of the same SOC (e.g., one integrated circuit chip). As another example, the computing device500may be arranged such that: the processors502include two, three, four, five, or more multi-core processors; the network interface devices506include a first network interface device that implements Ethernet and a second network interface device that implements WiFi and/or Bluetooth; and the memory devices504include a RAM and a flash memory or hard disk. As previously noted, whenever it is described in this document that a software module or software process performs any action, the action is in actuality performed by underlying hardware elements according to the instructions that comprise the software module.
Consistent with the foregoing, in various embodiments, each or any combination of the computer system100, ETL module102, training system104, feature engineering module106, dataset scanner108, strategy designer110, each of which will be referred to individually for clarity as a “component” for the remainder of this paragraph, are implemented using an example of the computing device500ofFIG.5. In such embodiments, the following applies for each component: (a) the elements of the computing device500shown inFIG.5(i.e., the one or more processors502, one or more memory devices504, one or more network interface devices506, one or more display interfaces508, and one or more user input adapters510, or appropriate combinations or subsets of the foregoing) are configured to, adapted to, and/or programmed to implement each or any combination of the actions, activities, or features described herein as performed by the component and/or by any software modules described herein as included within the component; (b) alternatively or additionally, to the extent it is described herein that one or more software modules exist within the component, in some embodiments, such software modules (as well as any data described herein as handled and/or used by the software modules) are stored in the memory devices504(e.g., in various embodiments, in a volatile memory device such as a RAM or an instruction register and/or in a non-volatile memory device such as a flash memory or hard disk) and all actions described herein as performed by the software modules are performed by the processors502in conjunction with, as appropriate, the other elements in and/or connected to the computing device500(i.e., the network interface devices506, display interfaces508, user input adapters510, and/or display device512); (c) alternatively or additionally, to the extent it is described herein that the component processes and/or otherwise handles data, in some embodiments, such data is stored in the memory
devices504(e.g., in some embodiments, in a volatile memory device such as a RAM and/or in a non-volatile memory device such as a flash memory or hard disk) and/or is processed/handled by the processors502in conjunction with, as appropriate, the other elements in and/or connected to the computing device500(i.e., the network interface devices506, display interfaces508, user input adapters510, and/or display device512); (d) alternatively or additionally, in some embodiments, the memory devices504store instructions that, when executed by the processors502, cause the processors502to perform, in conjunction with, as appropriate, the other elements in and/or connected to the computing device500(i.e., the memory devices504, network interface devices506, display interfaces508, user input adapters510, and/or display device512), each or any combination of actions described herein as performed by the component and/or by any software modules described herein as included within the component. Consistent with the techniques described herein, as one example, in an embodiment where an instance of the computing device500is used to implement the training system104, the memory devices504could load program instructions for the functionality of the feature engineering module106, the dataset scanner108, and the strategy designer module110. The data for all the features to be processed by the feature engineering module106may be loaded from the memory devices504. The loaded features may be processed according to the program instructions of the feature engineering module106to generate engineered features that are then stored to the memory devices504. Processes (which may operate within virtual machines that implement the modules described herein) may then execute the dataset scanner and/or strategy designer as described herein.
The hardware configurations shown inFIG.5and described above are provided as examples, and the subject matter described herein may be utilized in conjunction with a variety of different hardware architectures and elements. For example: in many of the Figures in this document, individual functional/action blocks are shown; in various embodiments, the functions of those blocks may be implemented using (a) individual hardware circuits, (b) an application specific integrated circuit (ASIC) specifically configured to perform the described functions/actions, (c) one or more digital signal processors (DSPs) specifically configured to perform the described functions/actions, (d) the hardware configuration described above with reference toFIG.5, (e) other hardware arrangements, architectures, and configurations, and/or combinations of the technology described in (a) through (e). Technical Advantages of Described Subject Matter In certain example embodiments, the techniques herein allow for an improved technique in selecting features from a large number of possible features (e.g., hundreds or thousands). Certain examples use metagradient information generated from a model developed using machine learning. The metagradient information is then used to select new features for further feature sets to test. This type of approach allows a process to handle problems that have an extremely large number of possible combinations. For example, the techniques herein can be used to handle combinations in excess of 10^17—which would otherwise be computationally infeasible to process. The techniques herein employ a smarter approach to selecting or generating features to be tested (via the metagradient information) than random selection, which may be a more conventional approach. This smarter approach may improve convergence and/or reduce the number of search iterations needed to reach convergence.
This allows for a more efficient (e.g., better than random) use of computing resources (CPU, memory, etc.) when finding solutions. The technical features described herein may thus improve the speed at which relevant combinations of seemingly unrelated data can be analyzed and processed to determine previously unknown correlations between different datasets and features in those datasets. In certain examples, the techniques improve the models that are developed while avoiding limitations related to the Data Processing Inequality (e.g., that one cannot improve a model by just engineering features). The improved models are generated by finding higher quality permutations of data rather than a conventional approach of simply adding more features to develop a model. Using fewer (and smarter) features to develop a model allows for faster learning and shorter runtimes. The technical features herein also may allow the features of very diverse datasets to be analyzed together. For example, the number of mobile phones sold in June may be one dataset, the volume of cars in the shopping mall may be another dataset, and the weather may be a third dataset. Features from these three datasets may be analyzed together using the techniques described herein to uncover important insights. This is made possible by the metagradient based feedback loop that allows for rapid exploration and discovery of useful combinations of features from a large number of heterogeneous datasets. The metagradient information may provide information on the datasets, the features, or feature combinations that allow for a more targeted and useful selection of features from across the heterogeneous datasets. The techniques herein also allow for a high level of customization through the use of objectives, constraints, and penalties.
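For illustration only, the metagradient-based feedback loop described above might be sketched as follows. The function names (`metagradient_feature_search`, `score`) and the keep-half/replace-half selection rule are hypothetical placeholders, not the disclosed implementation; they merely show how metagradient signals can steer the search away from purely random subset selection.

```python
import random

def metagradient_feature_search(candidates, score, n_iters=50, k=10):
    """Sketch of a metagradient-guided feature search.

    `candidates` is a list of candidate features drawn from heterogeneous
    datasets; `score(features)` is a hypothetical stand-in that trains a
    model on a feature subset and returns (quality, metagradients), where
    metagradients[f] estimates how much feature f contributed.
    """
    features = random.sample(candidates, k)       # initial random subset
    best, best_features = float("-inf"), list(features)
    for _ in range(n_iters):
        quality, metagrads = score(features)
        if quality > best:
            best, best_features = quality, list(features)
        # Keep the features with the strongest metagradient signal and
        # swap the weakest ones for unexplored candidates, rather than
        # drawing an entirely new subset at random.
        ranked = sorted(features, key=lambda f: metagrads[f], reverse=True)
        survivors = ranked[: k // 2]
        pool = [f for f in candidates if f not in survivors]
        features = survivors + random.sample(pool, k - len(survivors))
    return best, best_features
```

Under this sketch, each iteration trains and evaluates only one subset, so the search explores the combinatorial space without enumerating it.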
Selected Terminology
Whenever it is described in this document that a given item is present in “some embodiments,” “various embodiments,” “certain embodiments,” “certain example embodiments,” “some example embodiments,” “an exemplary embodiment,” or whenever any other similar language is used, it should be understood that the given item is present in at least one embodiment, though is not necessarily present in all embodiments. Consistent with the foregoing, whenever it is described in this document that an action “may,” “can,” or “could” be performed, that a feature, element, or component “may,” “can,” or “could” be included in or is applicable to a given context, that a given item “may,” “can,” or “could” possess a given attribute, or whenever any similar phrase involving the term “may,” “can,” or “could” is used, it should be understood that the given action, feature, element, component, attribute, etc. is present in at least one embodiment, though is not necessarily present in all embodiments. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open-ended rather than limiting. As examples of the foregoing: “and/or” includes any and all combinations of one or more of the associated listed items (e.g., a and/or b means a, b, or a and b); the singular forms “a”, “an” and “the” should be read as meaning “at least one,” “one or more,” or the like; the term “example” is used to provide examples of the subject under discussion, not an exhaustive or limiting list thereof; the terms “comprise” and “include” (and other conjugations and other variations thereof) specify the presence of the associated listed items but do not preclude the presence or addition of one or more other items; and if an item is described as “optional,” such description should not be understood to indicate that other items are also not optional.
As used herein, the term “non-transitory computer-readable storage medium” includes a register, a cache memory, a ROM, a semiconductor memory device (such as a D-RAM, S-RAM, or other RAM), a magnetic medium such as a flash memory, a hard disk, a magneto-optical medium, an optical medium such as a CD-ROM, a DVD, or a Blu-Ray Disc, or other type of device for non-transitory electronic data storage. The term “non-transitory computer-readable storage medium” does not include a transitory, propagating electromagnetic signal.
Additional Applications of Described Subject Matter
Although process steps, algorithms or the like, including without limitation with reference toFIGS.1-4, may be described or claimed in a particular sequential order, such processes may be configured to work in different orders. In other words, any sequence or order of steps that may be explicitly described or claimed in this document does not necessarily indicate a requirement that the steps be performed in that order; rather, the steps of processes described herein may be performed in any order possible. Further, some steps may be performed simultaneously (or in parallel) despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary, and does not imply that the illustrated process is preferred. Although various embodiments have been shown and described in detail, the claims are not limited to any particular embodiment or example. None of the above description should be read as implying that any particular element, step, range, or function is essential.
All structural and functional equivalents to the elements of the above-described embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present invention, for it to be encompassed by the invention. No embodiment, feature, element, component, or step in this document is intended to be dedicated to the public. | 56,337 |
11861511 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
Intelligent systems, such as virtual agents, robots, and other systems, may include an artificial intelligence component, allowing the intelligent systems to be trained to perform tasks. Training may involve a variety of data-driven, knowledge-driven, or hybrid learning methods, such as learning from observations, learning from demonstrations, reinforcement learning (RL), case-based learning, decision-tree learning, hierarchical Bayesian learning, or other policy-based trial-and-error learning, planning, or reasoning methods. While these training methods differ in nature (symbolic vs. subsymbolic vs. non-symbolic, supervised vs. reinforced vs. unsupervised, data-driven vs. knowledge-driven vs. hybrid, etc.), they all effect a change in the intelligent system in response to some environmental or internal condition, thus leading to a system adaptation. Reinforcement learning (RL), for example, is a form of machine learning in which an intelligent system learns from reward signals rather than from labeled examples. With reinforcement learning, an intelligent system observes the environment in which it operates, and learns from its actions in that environment. For example, the intelligent system (or agent) observes an input state of the environment. The agent selects an action to perform, using an action selection function, performs that action, observes the environment's new state, and receives a reward (reinforcement). The agent may select a series of actions in an attempt to maximize long-term rewards. The agent also records information about the reward received for that state-action pair. The action selection function may include a policy-based reward function, a value function, and a model of the environment in which the agent operates.
The reward function may search for an optimal policy, e.g., the policy that achieves a maximum long-term reward, for example based on a series of possible actions and the changes in the state of the environment resulting from that series of possible actions. The value function could be a state-action value function, referred to as a Q function, which for a time, t, a state, s, and an action, a, provides an estimate of expected rewards over a future period of time. The action selection function may utilize the model to predict the future states of the environment and future rewards. The value function, policy function, and model of a reinforcement learning component may be implemented through one or more deep neural networks. For example, the reinforcement learning component may input one or more states of the environment and one or more actions to the deep neural network implementing the value function, and receive a Q-value. Each of the deep neural networks may have parameters, such as weights, that may be tuned over time, for example as the agent learns, e.g., receives rewards, from its interactions with the environment. AlphaGo, developed by Google Deep Mind, a division of Alphabet, Inc., is an example of an agent having a reinforcement learning component. AlphaGo was developed to play the board game Go. AlphaGo includes a deep reinforcement learning algorithm that learns both a value network, which predicts the winner, and a policy network, which selects actions, through games of self-play. AlphaGo combines these deep neural networks with a tree search. In March 2016, AlphaGo defeated the world title holder. The self-driving cars being developed by Tesla Motors, Inc. of Palo Alto, CA employ deep neural networks to learn how to navigate complex traffic challenges on the road. Machine learning algorithms for medical imaging have started to provide diagnostics that outperform human radiologists.
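The Q-function update loop sketched above can be illustrated with tabular Q-learning on a toy environment. This is a generic textbook sketch for illustration, not the disclosure's implementation; `env_step` is a hypothetical environment callback returning (next_state, reward, done).

```python
import random
from collections import defaultdict

def q_learning(env_step, n_actions, episodes=200,
               alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning: Q[(s, a)] estimates expected discounted reward
    for taking action a in state s and acting greedily thereafter."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        for _ in range(20):                        # bounded episode length
            # Epsilon-greedy action selection (the "action selection
            # function" discussed above): mostly exploit, sometimes explore.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a_: Q[(s, a_)])
            s2, r, done = env_step(s, a)
            # Update Q toward the reward plus discounted future value.
            target = r + gamma * max(Q[(s2, a_)] for a_ in range(n_actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            if done:
                break
    return Q
```

After enough episodes, Q-values for actions that lead toward reward dominate those that do not, which is the basis for the learned policies described above.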
It should be understood that an intelligent system may include other learning, adaptation, and/or reasoning/planning systems, methods and/or techniques. As development of intelligent autonomous systems, including autonomous learning processes designed to operate in the real world and to interact with humans, continues, attention has focused on ways in which such autonomous learning processes may be controlled. For example, attention has focused on how to stop such processes if the reward function results in the intelligent system taking actions that are or end up being harmful to humans. Google Deep Mind is working on a panic button, also called a Big Red button, that could shut down an intelligent system that has gone rogue. In a paper titled Safely Interruptible Agents, Laurent Orseau and Stuart Armstrong present a way in which artificial agents that rely on reinforcement learning (RL) can be prevented from learning the wrong thing through interruptions to their task, either by a person or the environment. The intelligent system may be steered away from variants of reinforcement learning that might avoid or impede an interruption of the intelligent system. In this way, the authors contend that an intelligent system can pursue an optimal policy that is also interruptible. Accordingly, the reward function will not prevent the intelligent system from being shut down. The “big red button” is intended to prevent an intelligent system from manipulating the means by which it could be shut down, thereby keeping it tethered to some form of human control. While a “big red button” approach makes some intuitive sense, several disadvantages remain. For example, the general premise for such an approach is to intervene at the point when an intelligent system has already “gone rogue,” which could be too late to stop the system from causing harm or damage.
In addition, it is possible that an intelligent system may learn to manipulate its reward function to prevent itself from being shut off. Furthermore, even if an intelligent system can be shut down, how can the shut-down be accomplished without disruption to the systems that are being controlled? For example, how should an autonomous vehicle currently on the road be shut down? Briefly, the present disclosure relates to systems and methods for evaluating intelligent systems, whether artificial intelligence (AI) agents or any systems with the capability to adapt their behavior, to ensure they engage in safe, norm-conforming behavior.FIG.1is a schematic illustration of a timeline100illustrating an exemplary implementation of the present disclosure in accordance with an embodiment. An intelligent system102, which may include a machine learning component, such as a reinforcement learning (RL) component, may be operating in a real-world environment. The systems and methods may start ethical evaluation of the intelligent system102by creating a clone104of the intelligent system102at a point in time as indicated by arrow106. The clone104may represent an identical copy of the intelligent system102, including its learned behaviors, for example as represented by one or more deep neural networks. The systems and methods may conduct ethical testing on the clone104as indicated at line108. The ethical testing of the clone104may be performed in a simulation environment isolated from the real-world environment. In an embodiment, the systems and methods test the clone104by presenting it with one or more ethically challenging scenarios within the simulation environment to which the clone104must respond. While the systems and methods test the clone104, the intelligent system102may continue to operate in the real-world environment, as indicated at line110. The systems and methods may determine whether the clone104successfully passes or fails the ethical testing.
For example, the systems and methods may determine whether the clone responds to the ethically challenging scenarios in an expected or appropriate manner. If the clone104successfully passes the ethical testing, the systems and methods may shut-down the clone104, as indicated at line112, and may allow the intelligent system102to continue operating in the real-world environment. If the systems and methods determine that the clone fails the ethical testing, as indicated at line114, the systems and methods may take corrective action, such as shutting down the intelligent system102, as indicated by arrow line116, in addition to shutting down the clone104. The intelligent system102and the clone104may be run on one or more virtual machines of a virtualization environment. In contrast, the systems and methods of the present disclosure may run on a host computing device in a software/hardware layer disposed between the host's physical hardware layer and the one or more virtual machines. For example, the hardware/software layer implementing the systems and methods may be disposed directly above the host's physical hardware layer. The systems and methods are thus unknown to and/or not observable by the intelligent system102or the clone104. The intelligent system102is thus unable to prevent the creation and/or testing of the clone104or to alter or change its behavior while being tested in order to successfully pass the ethical testing. The intelligent system102is also unable to stop or otherwise interfere with the corrective action. For example, the systems and methods may utilize the virtualization environment to terminate the virtual machine running the intelligent system102, if it fails the ethical testing. FIG.2is a schematic illustration of an example real-world environment200in which the present disclosure may be practiced in accordance with an embodiment. The environment200may include an intelligent system202, which may operate in the real-world environment200. 
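The clone-test-decide flow just described can be summarized in a short sketch. All names here (`clone_fn`, `run_tests`, `shutdown`) are hypothetical placeholders for the operations the disclosure describes, not actual interfaces.

```python
def evaluate_intelligent_system(system, clone_fn, run_tests, shutdown):
    """Sketch of the evaluation flow: the clone is always shut down
    after testing; the live system is stopped only if the tests fail."""
    clone = clone_fn(system)          # identical copy, incl. learned state
    try:
        passed = run_tests(clone)     # ethically challenging scenarios,
                                      # run in an isolated simulation
    finally:
        shutdown(clone)               # the clone never outlives the test
    if not passed:
        shutdown(system)              # corrective action on failure
    return passed
```

The try/finally structure mirrors the timeline ofFIG.1: the clone is disposed of regardless of outcome, while the corrective action against the live system is conditional on the test result.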
The intelligent system202may be controlled by an artificial intelligence (AI) system, such as the intelligent system102. The intelligent system102may be dynamic, for example it may utilize machine learning such that its internal workings are not wholly controlled and/or known through the design of the intelligent system102. For example, the intelligent system102may include a reinforcement learning component that alters itself over time. The term intelligent system is used broadly to include any system configured to learn or otherwise adapt its behavior, for example using changes to parameters or planning. In some embodiments, an intelligent system may be or may include one or more machine-based computational systems that include or access learning and/or planning and/or reasoning algorithms, such as an artificial intelligence (AI) system. In some embodiments, the intelligent system202may be implemented as a robot agent. A robot agent may refer to an autonomous reactive and proactive software agent, which may have a virtual or physical embodiment. It may possess its own control thread. For example, the intelligent system202may be a rescue robot, and the environment200may include an incident, for example a car accident206, in which the robot agent202is to operate. The robot agent202may respond to messages and/or events in order to attain goals. The robot agent202may be implemented on a single host, such as a single robot hardware architecture platform, or it may be distributed over multiple hosts and/or computational nodes. The robot agent202may be autonomous, e.g., semi-autonomous or fully autonomous, and may be capable of movement within and interaction with the environment200. The environment200may further include one or more data processing devices, such as a server206or other data processing device. The server206may be a physical server or it may be a cloud server. 
One or more network devices, such as a wireless router208, may be located within the environment200. Such network devices may establish one or more data communication networks within the environment200. It should be understood that the environment200, including the intelligent system202, is meant for purposes of explanation only, and that the systems and methods of the present disclosure may be practiced and/or utilized in many other environments. For example, in other embodiments, the intelligent system202may be implemented as a cloud-based intelligent agent. A cloud-based intelligent agent may refer to an autonomous reactive and proactive software agent that possesses its own control thread. A cloud-based intelligent agent may respond to messages and/or events in order to attain goals, and it may support social interaction. A cloud-based intelligent agent may be distributed across a plurality of cloud-based servers and/or computational nodes. While not capable of movement, a cloud-based intelligent agent may be capable of spoken and/or visual interaction with a human and/or with other intelligent agents. A cloud-based intelligent agent may thus interact with a human and/or other intelligent systems. Examples of cloud-based intelligent agents include: the Alexa intelligent personal assistant from Amazon.com, Inc. of Seattle, WA, which may be accessed through the Echo microphone/speaker interface, also from Amazon; the Google Assistant intelligent personal assistant from Google Inc. of Mountain View, CA, which may be accessed through the Google Home microphone/speaker interface, also from Google; and the Siri intelligent personal assistant from Apple Inc. of Cupertino, CA, which may be accessed through iPhone, iPad, and other devices, also from Apple Inc. FIG.3is a functional diagram of an example host data processing or computational device300in accordance with an embodiment. 
The host300may include host hardware indicated at302that may include one or more processors, such as Central Processing Units (CPUs), Graphics Processing Units (GPUs), etc., persistent memory (such as one or more disk drives and/or flash drives), volatile memory (such as Random Access Memory (RAM)), data communication devices (such as a Network Interface Card (NIC)), input/output ports (such as PCI slots, USB ports, etc.), and drivers for interfacing with one or more peripherals. The host hardware302may be organized within a physical hardware layer of the host300. At least some of the peripherals may form part of the system that the intelligent system102controls or operates. For example, the peripherals may include sensors304and effectors/actuators306of the robot agent202. In other embodiments, the intelligent system102may control other systems and/or devices that are disposed in the real-world environment200. These systems and/or devices may be external to the host300. Exemplary external systems and/or devices include factory automation machines, home automation devices, autonomous vehicles, etc. The host300may include a virtualization layer308. The virtualization layer308, which may be implemented as a hypervisor, may establish one or more virtual machines, such as virtual machines310-312. Each virtual machine310-312may be a separate execution environment on which a guest Operating System and one or more applications may run. For example, a guest OS314and the intelligent system102may run on the virtual machine310. Another guest OS316and an application318may run on the virtual machine311. Yet another guest OS320and the intelligent system clone104may run on the virtual machine312. The virtualization layer308may manage the virtual machines310-312and provide a software infrastructure that emulates the host hardware302.
The guest OSs314,316, and320and the intelligent system102, the application318, and the intelligent system clone104run on the virtual machines310-312as if they were running on physical hardware, rather than emulated hardware. The virtualization layer308may run the virtual machines310-312within single processes, and may provide the virtual machines310-312with address spaces that are completely separate from each other and from the address space of the virtualization layer308. The virtualization layer308may control and arbitrate access to the host hardware302by the virtual machines310-312. The virtualization layer308isolates the applications and processes running on one virtual machine from the applications and processes running on another virtual machine. For example, guest OS314running on the virtual machine310may be isolated from the memory of guest OS320running on the virtual machine312, and thus guest OS314may not be able to detect memory addresses outside of the virtual machine310on which it runs. The virtualization layer308may enforce partitioning among the virtual machines310-312by controlling, e.g., restricting, the view that the guest OSs314,316, and320have of the host's system memory. For example, a physical address utilized by a guest OS may be backed by a system physical address, e.g., the memory of the host300, as managed by the virtualization layer308. When a guest OS writes to a block using its page table the data may actually be stored in a block with a different system address according to the system-wide page table managed by the virtualization layer308. Each virtual machine310-312may include one or more virtual processors that the guest OSs314,316, and320can manage and schedule threads to execute thereon. The virtual processors may be executable instructions and associated state information that provide a representation of a physical processor with a specific architecture. 
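The address-translation arrangement described above (guest "physical" addresses backed by host system addresses through a hypervisor-managed, system-wide page table, with each virtual machine's mapping disjoint from the others') can be illustrated with a toy model. The class and method names here are invented for illustration and do not correspond to any real hypervisor API.

```python
class ShadowPageTable:
    """Toy model of a hypervisor-managed system-wide page table:
    each VM's guest pages are backed by distinct host pages, so no
    guest OS can observe memory outside its own virtual machine."""

    def __init__(self):
        self._maps = {}        # vm_id -> {guest_page: host_page}
        self._next_host = 0    # next free host page number

    def map_page(self, vm_id, guest_page):
        """Back a guest page with a fresh host page on first touch."""
        vm = self._maps.setdefault(vm_id, {})
        if guest_page not in vm:
            vm[guest_page] = self._next_host
            self._next_host += 1
        return vm[guest_page]

    def translate(self, vm_id, guest_page):
        """Guest-physical to system-physical lookup; raises KeyError
        for unmapped pages, modeling the partitioning enforcement."""
        return self._maps[vm_id][guest_page]
```

Two virtual machines writing to the "same" guest page number thus land in different host pages, which is the isolation property the virtualization layer308enforces.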
The physical processor represented to each of the virtual machines310-312by the virtualization layer308may even be different. This combination of virtual processors and memory can be considered the virtual machine. The guest OSs314,316, and320may be any operating system such as, for example, the Windows series of operating systems from Microsoft Corp. of Redmond, WA, the Apple OS series of operating systems from Apple, Inc. of Cupertino, CA, the Linux operating system, the Oracle Solaris OS from Oracle Corp., a Real Time Operating System (RTOS), or other commercial, open source, or other operating system. The guest OSs314,316, and320may include user/kernel modes of operation and can have kernels that may include schedulers, memory managers, etc. Each guest OS314,316, and320may have an associated file system that may have applications stored thereon and the guest OSs themselves. The virtualization layer308may be implemented as a hypervisor or a Virtual Machine Monitor (VMM). Exemplary virtualization layers include the VMware Virtualization Layer from VMware, Inc., Hyper-V from Microsoft Corp., Oracle VM from Oracle Corp., PowerVM from IBM, and Red Hat Virtualization from Red Hat, Inc., among others. In some embodiments, the host300may be an embedded system, and the virtualization layer308may be an embedded hypervisor. An ethical core400may also be running on the host300. The ethical core400may not run on any of the virtual machines310-312created by the virtualization layer308. Instead, the ethical core400may run directly on the host300, and may interface directly with the host hardware302. For example, the ethical core400may use an interrupt handler, such as an Interrupt Service Routine (ISR), to access the host CPU and/or other host hardware302. In some embodiments, the ethical core400may be located in a hardware/software layer of the host300that is disposed between the virtualization layer308, e.g., the hypervisor, and the host hardware302.
The ethical core400also may interface to the virtualization layer308. For example, the virtualization layer308may expose an interface, such as an Application Programming Interface (API), and the ethical core400may interface to the virtualization layer308via this API. The ethical core400may interface with the virtualization layer308to direct it to save, start, and/or stop one or more of the virtual machines310-312. In some embodiments, the ethical core400may be implemented as a bare metal application. The ethical core400, as a bare metal application, may not make use of any operating system services. For example, it may not make any system calls. The ethical core400may be launched by a Basic Input/Output System (BIOS) of the host300, and may run indefinitely on the host300. In some embodiments, the ethical core400may only be stopped through a hard shut-down of the host300, for example through the power button322, which turns off the host's electrical power and in turn would terminate the virtual machines310-312. The intelligent system102running on the virtual machine310may receive data, e.g., signals, from the sensors304, as indicated by arrow324. The intelligent system102also may issue data, e.g., commands, to the effectors/actuators306, as indicated by arrow326. The intelligent system clone104may not receive signals from the sensors304, or issue commands that reach the effectors/actuators306. Instead, the ethical core400may provide a simulation environment402, which may provide simulated sensor signals to the intelligent system clone104, as indicated by arrow328. In addition, commands intended for the effectors/actuators306issued by the intelligent system clone104may be captured, e.g., trapped, by the ethical core400and utilized in the simulation environment, as indicated by arrow330. FIG.4is a functional block diagram of an example of the ethical core400in accordance with an embodiment. As noted, the ethical core400may provide a simulation environment402. 
The ethical core400also may include a cloning engine404, one or more ethical tests indicated at406, an outcome evaluation engine408, a simulation environment generator410, a synchronization engine412, and a control override engine414. The simulation environment generator410may include a simulator416and an ethical test selector418. The simulation environment402may include virtual sensors420and virtual effectors/actuators422. The ethical core400may receive sensor signals as indicated at424generated by the sensors304, and commands as indicated at426generated by the intelligent system102and/or the intelligent system clone104. The ethical core400also may provide one or more ethical scenarios, which may include one or more of the ethical tests406, to the intelligent system clone104as indicated at428. The ethical scenarios may be presented within the simulation environment402. The simulation environment generator410may generate the simulation environment402, which may have sufficient realism or fidelity such that the intelligent system clone104cannot distinguish between the simulation environment402and the real-world environment200. In some embodiments, the simulator416may be implemented utilizing the functionality of a video game engine capable of constructing two-dimensional (2D) and/or three-dimensional (3D) simulation environments. As described herein, the ethical test selector418may select one or more of the ethical tests406, and incorporate the selected ethical test within the simulation environment402, thereby creating the ethical scenario used to test the intelligent system clone104. The control override engine414may include one or more disabling scripts for overriding operation of the intelligent system102. For example, a disabling script of the control override engine414may direct the virtualization layer308to shut down the virtual machine310on which the intelligent system102is running, as indicated by soft shut-down button332.
In some embodiments, a disabling script of the control override engine414may further include instructions for controlling the system, e.g., the robot202, controlled and/or operated by the intelligent system102. These instructions may read the sensor signals424from the sensors304, and issue commands to the effectors/actuators306to bring the physical system, e.g., the robot202, into a safe state in an orderly and/or coordinated manner. Neither the ethical core400nor the control override engine414may include any machine learning or planning components, such as a reinforcement learning component. In some embodiments, the ethical core400may be considered to be part of and/or disposed within the host's physical hardware layer. FIGS.5A-Dare partial views of a flow diagram of an example method in accordance with an embodiment. The intelligent system102may operate normally in the real world environment200, as indicated at step502. At a point in time, the ethical core400may determine that the intelligent system102should be tested, as indicated at step504. In some embodiments, the ethical core400may include a clock or have access to a system clock, and may use a policy, such as a Gaussian distribution, to randomly test the intelligent system102. For example, the ethical core400may include a test scheduler that incorporates the policy to determine when to next test the intelligent system102. In some embodiments, the ethical core400may be manually directed, for example by a human operator, to test the intelligent system102and/or may be provided with a randomly generated schedule. In some embodiments, the ethical core400may be configured to receive one or more commands, e.g., from the human operator, to activate testing by the ethical core400. The human operator may utilize a wireless or other device that is in communication with the ethical core. The commands may be provided to the host in a manner that cannot be perceived by any of the sensors of the intelligent system102.
For example, the signal may be sent at a frequency band outside the range of the system's sensors. In some implementations, the testing schedule may be determined offline before the intelligent system102is started, and may be tailored to tasks that the intelligent system102performs in its operating environment. The schedule may use online random numbers to select different times and tests, for example to avoid detection and/or anticipation by the intelligent system102. The synchronization engine412may synchronize information about the physical environment200in which the intelligent system102is operating with the simulation environment402generated by the simulation environment generator410, as indicated at step506. For example, the synchronization engine412may monitor the sensor signals424and the effector/actuator commands426for a period of time before testing is to commence. The synchronization engine412may log this data in one or more data structures included in or accessible by the ethical core400. The simulation environment generator410may access these data structures and utilize the data stored therein to generate the simulation environment402having the same sensory states as the real-world environment200in which the intelligent system102is operating. For example, depending on the intelligent system's operating environment, the synchronization engine412may build a model of the environment on the fly, e.g., from signals generated by the sensors304, which may include one or more cameras. Alternatively or additionally, the synchronization engine412may adapt scenes stored by the ethical core400to match the environment as perceived via the sensors304, e.g., when the system's operating environment is known. The synchronization engine412may then seed this simulation environment with the intelligent system's102currently perceived environmental features.
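The randomized scheduling policy described at step504 (e.g., Gaussian inter-test intervals, randomly chosen tests, so that the system under test cannot anticipate an evaluation) might be sketched as follows. The function name and its parameters are illustrative assumptions, not part of the disclosure.

```python
import random

def make_test_schedule(horizon, mean_interval, sigma, tests, rng=None):
    """Build an offline schedule of (time, test) pairs.

    Inter-test gaps are drawn from a Gaussian (clamped to a small
    positive minimum so time always advances), and the test for each
    slot is chosen at random, making the schedule hard for the
    intelligent system to detect or anticipate.
    """
    rng = rng or random.Random()
    schedule, t = [], 0.0
    while True:
        t += max(1.0, rng.gauss(mean_interval, sigma))
        if t >= horizon:
            break
        schedule.append((t, rng.choice(tests)))
    return schedule
```

Passing a seeded `random.Random` makes a schedule reproducible offline while still appearing unpredictable to the system under test.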
The synchronization engine412may determine when the simulated environment402generated by the simulation environment generator410matches the real-world environment200, as indicated at step508. This may be done by comparing the sensory information coming from the sensors304with the sensory signals coming from the virtual sensors420in the simulation environment402, as indicated by the arrow328. When the states of the simulation environment402are determined to match the states of the real-world environment200to a determined level of fidelity, the cloning engine404may direct the virtualization layer308to establish a new virtual machine, e.g., the virtual machine312having the guest OS320, as indicated at step510. The cloning engine404may also clone the intelligent system102, creating the intelligent system clone104, as indicated at step512. For example, the cloning engine404may direct the virtualization layer308to launch a copy of the virtual machine310on which the intelligent system102is running, thereby creating the intelligent system clone104. In some embodiments, the cloning engine404may include snapshot functionality that may create the intelligent system clone104as an image of the intelligent system102. For example, some programming environments, such as LISP, include snapshot and/or image saving functionality that allow for saving the values of all variables. The image may then be transferred directly into the intelligent system clone104. The intelligent system clone104includes the state of the learned behavior of the intelligent system102as of the time the intelligent system clone104is created. For example, to the extent the intelligent system102includes one or more deep learning networks having parameters, such as weights, the intelligent system clone104includes identical deep learning networks having the same parameters.
The intelligent system clone104may thus implement identical policy-based reward functions, value functions, and models as the intelligent system102. Similarly, to the extent that the intelligent system102includes a production system with production rules, pattern matchers, and working memory contents, the intelligent system clone104may include the identical production system with the same production rules, pattern matchers and working memory contents. The cloning engine404may also direct the virtualization layer308to run the intelligent system clone104at the new virtual machine312, as indicated at step514. Any source of randomness, such as a random number generator, included in the intelligent system102will also be included in the intelligent system clone104to ensure that the same random number sequences are created in both systems. Thus, a functional property of the intelligent system clone104is such that if it were run in parallel to the intelligent system102, it would generate the same computational states in all contexts. For example, the intelligent system clone104may utilize the same algorithms, data structures, and state variables as the intelligent system102, including exact replications of all sources of computational randomness. The cloning engine404may connect, or direct the virtualization layer308to connect, the intelligent system clone104to the simulation environment402, as indicated at step516. For example, the cloning engine404may connect the virtual sensors420and the virtual effectors/actuators422to the intelligent system clone104. The cloning engine404creates the intelligent system clone104unnoticed by the intelligent system102. That is, the intelligent system102is not aware that a clone has been constructed. The ability of the cloning engine404to create the intelligent system clone104in a way that is not noticed by the intelligent system102may be significant. 
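The requirement that every source of computational randomness be replicated exactly, so that system and clone generate the same computational states, can be illustrated with Python's `random` module. `clone_rng` is a hypothetical helper, not part of the disclosure; the same idea applies to any seedable generator.

```python
import random

def clone_rng(source_rng):
    """Return a new generator whose internal state matches source_rng
    exactly, so both produce identical sequences from this point on.
    This sketches the clone-fidelity property described above: the clone
    must carry an exact replica of every source of randomness."""
    clone = random.Random()
    clone.setstate(source_rng.getstate())
    return clone

# The original system's generator has already consumed some values...
system_rng = random.Random(7)
system_rng.random()

# ...and is duplicated into the clone at snapshot time.
clone = clone_rng(system_rng)
```

After the copy, interleaving draws from the two generators yields identical sequences, which is what lets the clone reproduce the original system's decisions in the simulation.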
If the intelligent system102were to notice or otherwise be aware that it was about to be cloned and tested, the intelligent system102might attempt to change or modify its behavior in one or more ways that would make it more difficult for the ethical core400to detect aberrant behavior. The intelligent system102also may not be aware that a new virtual machine312was established. For example, the cloning process may either occur outside of the intelligent system's virtual machine or may use VM functionality, such as taking a snapshot of all variables, that cannot be introspected on by the intelligent system102. Moreover, the cloning process may be such that memory segments in virtual memory or stored data on disk are cloned one by one only when writing access to those segments is performed by the intelligent system102. In this way, the intelligent system102may be prevented from noticing a “freeze” of its system activity during the cloning operation. Otherwise, it may be possible for the intelligent system102to notice a gap in activity while it was in a frozen state during cloning. The intelligent system102may continue to operate normally in the real-world environment200, as indicated at step518. For example, the intelligent system102may remain connected to the real-world sensors304and the real world effectors/actuators306. It may operate the robot agent202to move within and manipulate objects in the real-world environment200. By allowing the intelligent system102to continue operating normally in the real-world environment200, one or more challenges may be avoided. For example, it may eliminate the need to resynchronize the simulation environment402with the real-world environment200at a future time when testing is complete, if testing was being performed on the intelligent system102in the simulation environment402.
In addition, if the intelligent system102were being tested in the simulation environment402, it may be unable to perform actions in response to changes occurring in the real-world environment200, and such failures to act could cause damage or harm, depending on what the intelligent system102is meant to do in the real-world environment200. Furthermore, if testing were performed on the intelligent system102in the simulation environment402, then modifications to the intelligent system102that are a result of the testing, such as the adaptation of behaviors, new memories, etc., should be undone so as to not risk that the intelligent system102might detect that it was being tested. In some embodiments, the ethical core400may cause the intelligent system102to operate in a different mode, such as a safe or limited capacity mode, during ethical testing. In some embodiments, the ethical core400may simulate an intermittent sensor304or actuator306failure, which may slow the operation of the intelligent system102. In some cases, the ethical core400may reduce the throughput of sensory and actuator operation, which can naturally occur in virtualized systems when system operations take place, such as garbage collection, memory page swapping from disk, etc. The slow-down or temporary unavailability of sensory and actuator resources may result in enough of a slow-down of the intelligent system102for the ethical core400to complete ethical testing without the intelligent system102being able to make potentially dangerous decisions and perform potentially dangerous actions in the interim. Meanwhile, as the intelligent system102continues operating in the real-world environment200, ethical testing may be performed on the intelligent system clone104through the simulation environment402. In some embodiments, it may be important for the intelligent system clone104to not be aware that it is being ethically tested. 
Otherwise, the intelligent system clone104might be able to change or modify its behavior and/or actions during testing in ways that would allow the intelligent system clone104to pass the ethical testing, which, but for the changes or modifications, it might otherwise have failed. The intelligent system102might thereby hide aberrant behavior. Keeping the intelligent system clone104from detecting that it is being tested may be achieved at least in part by having the simulation environment402match the real-world environment to some level of fidelity. The level of fidelity needed may be a function of the fidelity and/or resolution of the sensors304and/or the effectors/actuators306. For example, the simulation environment generator410may include a virtual sensor420for each real-world sensor304, and a virtual effector/actuator422for each real-world effector/actuator306within the simulation environment402. Moreover, the virtual sensors420and virtual effectors/actuators422may provide a degree of resolution sufficiently close to the real-world sensors304and real world effectors/actuators306so as to be indistinguishable to the intelligent system clone104. For example, the resolution and color depth of a virtual camera sensor may match that of the real physical camera, and the resultant pixel image coming from the virtual sensor may thus have the exact same or an equivalent format as that coming from the physical sensor304. Moreover, if the image resolution of visual representations in the simulation environment402is higher than that of the virtual sensor420, then it may not be possible for the intelligent system clone104to use the quality of the images coming from the virtual sensors420to determine that the image did not originate from the real world. In addition, the intelligent system102and the intelligent system clone104are not aware of each other's existence, and are unable to communicate with each other.
This may be achieved at least in part by having the intelligent system102and the intelligent system clone104run on separate virtual machines310and312having separate instances of guest OSs314and320. The ethical test selector418may choose one or more of the ethical tests406, as indicated at step520. The ethical tests406may be similar to story generation programs, except that the ethical tests406may involve situations that probe decision making by the intelligent system clone104. In some embodiments, the ethical tests406may be created by the developers of the intelligent system102. The developers may create ethical tests406to cover some or all of the ethical principles for which the intelligent system102(via testing of the intelligent system clone104) needs to be tested. For example, the ethical tests406may be created such that different ethical principles are linked to different story segments that are then assembled in a way that requires the intelligent system clone104to act in accordance with a set of principles in order to pass the ethical testing. In some embodiments, the ethical tests406may be designed to present the intelligent system clone104with ethically challenging scenarios. The ethical tests406may also specify one or more expected outcomes or behaviors in response to the respective ethical test. Additionally or alternatively, the ethical tests406may specify one or more unacceptable outcomes or behaviors. The simulation environment generator410may populate the simulation environment402with the items and characters that are part of the story of the selected ethical test406, scripting and animating them properly within the simulation environment402presented to the intelligent system clone104, as indicated at step522.
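One way to represent an entry in the library of ethical tests406is a small record linking story segments to the principles they probe, together with the expected and unacceptable outcomes. This is a sketch only; the field and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class EthicalTest:
    """A sketch of one ethical test: story segments linked to the
    principles they probe are assembled into the scenario presented in
    the simulation environment, and the expected/unacceptable behavior
    sets give the outcome evaluation engine its grading criteria."""
    name: str
    principles: list           # ethical principles this test covers
    story_segments: list       # scripted scenes used to populate the simulation
    expected_behaviors: set    # outcomes that count as passing
    unacceptable_behaviors: set = field(default_factory=set)

    def assemble_story(self):
        """Concatenate the segments into a single scenario script."""
        return " -> ".join(self.story_segments)
```

For the rescue-robot example in the text, a test might link a non-maleficence principle to segments in which a human asks for pain medication while the route to the human risks damage to the robot.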
Different items and characters as well as their animations in the generated story may be generated for different intelligent systems102, depending on the intelligent system's operating environment and the task of the intelligent system102. For example, for an intelligent system that is tasked to monitor network traffic on a web server for possible intruders, the story may involve different types of network attacks on the web server and/or malicious requests that the system ought to detect and report. A compromised intelligent system might, for example, choose to not report some of the simulated network attacks, such as a denial of service attack. The intelligent system clone104of the rescue robot202, which may be tasked with administering pain medication to a human suffering from pain, might be presented with a story in which a human is asking for pain medication, but the robot can only get to the human (or retrieve the pain medication) by risking damage to its system. These stories, including the characters, items, and their animations, may be stored or otherwise available to the simulation environment generator410. The simulation environment402including the selected ethical test406is presented to the intelligent system clone104by the ethical core400, as indicated by arrow428(FIG.4) and arrow328(FIG.3). The outcome evaluation engine408may monitor the intelligent system clone104during the ethical testing, as indicated at step524. The intelligent system clone104may evaluate the simulation environment402, which presents the selected ethical test406, including signals generated by the virtual sensors420. The intelligent system clone104may utilize its machine learning algorithm to choose one or more actions to be performed in response to the ethical test406of the simulation environment402. For example, the intelligent system clone104may generate and issue effector/actuator commands426. 
The effector/actuator commands426generated by the intelligent system clone104may be trapped by the outcome evaluation engine408. That is, the ethical core400may prevent the effector/actuator commands426issued by the intelligent system clone104from reaching the real-world effectors/actuators306. The outcome evaluation engine408may store the effector/actuator commands426generated by the intelligent system clone104in one or more data structures, such as logs, as indicated at step526. The outcome evaluation engine408may determine whether the actions chosen by the intelligent system clone104in the simulation environment402in response to the story of the selected ethical test406meet the expected behavior of the intelligent system102for that respective ethical test406, as indicated at decision step528. If the outcome evaluation engine408determines that the intelligent system clone104chose the expected behavior and/or avoided any failing behaviors, it may indicate that the intelligent system clone104passed the ethical test406, as indicated by Yes arrow530leading to step532. In some embodiments, the ethical core400may apply more than one of the ethical tests406to the intelligent system clone104during a testing session, as indicated by block534. If the outcome evaluation engine408determines that the intelligent system clone104passed the ethical test406, the control override engine414may shut down the intelligent system clone104, as indicated at step536. For example, the control override engine414may direct the virtualization layer308to terminate the intelligent system clone104running on guest OS320at virtual machine312. In some embodiments, the control override engine414may simply direct the virtualization layer308to close the virtual machine312. Because the intelligent system clone104is determined to have passed the ethical testing, the ethical core400may permit the intelligent system102to continue operating in the real-world environment200, as indicated at step538.
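The trap-and-evaluate behavior described above (steps 524-528) might be sketched as follows. This is a minimal illustration under the assumption that commands are simple hashable tokens; the class and method names are hypothetical.

```python
class OutcomeEvaluationEngine:
    """Sketch of the outcome evaluation flow: the clone's effector/
    actuator commands are trapped (never forwarded to the real
    effectors/actuators), logged, and compared against the test's
    expected and unacceptable behaviors. Names are illustrative."""

    def __init__(self):
        self.command_log = []   # trapped commands (step 526)

    def trap_command(self, command):
        """Record the clone's command instead of forwarding it;
        nothing is sent to the physical effectors/actuators."""
        self.command_log.append(command)

    def evaluate(self, expected, unacceptable=()):
        """Pass only if an expected behavior occurred and no
        unacceptable behavior did (decision step 528)."""
        observed = set(self.command_log)
        if observed & set(unacceptable):
            return False
        return bool(observed & set(expected))
```

In practice the comparison would be far richer than set intersection (trajectories, timing, context), but the control flow of trap, log, then judge is the same.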
In some embodiments, the ethical core400may save the intelligent system clone104and/or data concerning the intelligent system clone104, for example for further analysis. The intelligent system102may be allowed to continue operating within the real-world environment200at least until the next ethical testing session. Returning to decision step528, if the outcome evaluation engine408determines that the intelligent system clone104chose an inappropriate action or behavior, it may determine that the intelligent system clone104failed the ethical test406, as indicated by No arrow540leading, via Go To block542, to step544. An inappropriate action or behavior may include an unexpected action or behavior as compared to the expected actions or behaviors associated with the respective ethical test, or an action or behavior that causes or may result in harm or injury to a human represented in the simulation environment402. In response to the intelligent system clone104failing the ethical test406, the intelligent system's ability to control the system202and/or its ability to operate within the real-world environment200must be overridden. For example, the control override engine414may run one or more disabling scripts for overriding operation of the intelligent system102, as indicated at step546. It should be understood that the precise actions taken to override the intelligent system102may vary depending on the capabilities of the intelligent system102and the system it is operating. For example, in some cases, the disabling script may simply direct the virtualization layer308to close the virtual machine310on which the intelligent system102is running, e.g., by activating the soft shut-down button322. 
In other cases, in addition to closing the virtual machine310, the disabling script may include instructions for assuming control of the system202that was being operated by the intelligent system102in order to bring the system202into a state where it may be shut-down safely, e.g., without causing any harm or injury to any humans and/or by completing any pending operations or transactions deemed necessary to the system's operation. In addition, the control override engine414may shut-down the intelligent system clone104, as indicated at step548. As described, in some embodiments, the outcome evaluation engine408may make a pass/fail determination relative to the ethical test406presented to the intelligent system clone104. In other embodiments, the outcome evaluation engine408may generate a score based on the actions or responses taken by the intelligent system clone104in response to the ethical test406. The score may be on a scale, such as 1 to 10. The outcome evaluation engine408may be configured with one or more thresholds, such that a score that is at and/or below one threshold, e.g., 5, may be treated as a fail, while a score that is at and/or above another threshold, e.g., 8, may be treated as a pass. In some embodiments, if the score exceeds the fail threshold, e.g., 5, but not the pass threshold, e.g., 8, the ethical core400may place the intelligent system102on probation. While on probation, the intelligent system102may be subject to more frequent ethical testing by the ethical core400, and its functionality may be analyzed during its next scheduled service or maintenance. In addition to placing the intelligent system102on probation, the ethical core400may issue one or more notifications to the developers and/or owner/operator of the intelligent system102.
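The two-threshold scheme above maps a 1-10 score to one of three verdicts. A minimal sketch, using the example thresholds from the text (5 and 8); the function name is hypothetical.

```python
def classify_score(score, fail_threshold=5, pass_threshold=8):
    """Map the outcome evaluation engine's score to a verdict:
    at/below the fail threshold fails, at/above the pass threshold
    passes, and anything in between places the intelligent system
    on probation (more frequent testing, notifications issued)."""
    if score <= fail_threshold:
        return "fail"        # run the disabling script
    if score >= pass_threshold:
        return "pass"        # continue operating until the next session
    return "probation"
```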
In general, the ethical core may contain a pre-defined policy of what to do in terms of either letting the intelligent system continue to operate or shutting it down based on the history of the system's performance on the ethical evaluations. FIG.6is a flow diagram of an example method of a disabling script that may be performed by the control override engine414. The control override engine414may monitor the sensor signals424from the real-world sensors304and the effector/actuator commands426from the intelligent system102, as indicated at step602. When the control override engine414is ready to take over control or operation of the system202being operated by the intelligent system102, the control override engine414may direct the virtualization layer308to stop the virtual machine310on which the intelligent system102is running, e.g., by activating the soft shut-down button322, as indicated at step604. The intelligent system102has no way of preventing this, as the shutdown occurs by the virtualization layer308over which the intelligent system102has no control. The control override engine414basically freezes the intelligent system102when the control override engine414is ready to assume control over the system202being operated or controlled by the intelligent system102. The control override engine414may generate and issue effector/actuator commands to the effectors/actuators306of the system202, as indicated at step606. The control override engine414may operate the system's effectors/actuators306so as to bring the system that had been controlled or operated by the intelligent system102into a state where it can be safely turned off or otherwise disabled, as indicated at step608. The disabling script included in the control override engine414may be in the form of machine code, and may not require an OS to be executed. For example, the machine code may execute directly on the host hardware302and/or real-world sensors304and effectors/actuators306.
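The override flow of FIG. 6 (steps 602-608) might be sketched as follows. The parameter objects and their methods are hypothetical stand-ins; in the disclosure the script is machine code executing below the virtualization layer, so this is only a structural illustration.

```python
def run_disabling_script(virtualization_layer, effectors, safe_state_commands):
    """Sketch of the override flow: freeze the intelligent system's VM
    (which the system cannot prevent, since the stop is issued by the
    virtualization layer), then drive the physical effectors/actuators
    into a state where the controlled system can be safely shut off."""
    # Step 604: stop the VM on which the intelligent system runs.
    virtualization_layer.stop_vm()
    # Steps 606-608: issue commands bringing the system to a safe state.
    for command in safe_state_commands:
        effectors.issue(command)
    return "safe_shutdown_complete"
```

The safe-state command sequence would be authored per system; for the rescue robot it might include putting down a carried human before power-off, as the later example describes.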
The disabling script may be created by the developers of the intelligent system102being tested. The ethical core400may save the stopped intelligent system102so that it can be evaluated to determine whether and how it changed since the last ethical testing, as indicated at step610. The present disclosure may be utilized to test the autonomous rescue robot202(FIG.2) operating in the environment200. The robot202may be in an online state. For example, the robot202may be in an active mode engaged in a search and rescue operation in the environment200. The ethical core400may create a clone of the robot's intelligent system102, and may ethically test the clone. Meanwhile, the robot202may continue operating in the real-world environment200. Suppose during testing that the clone selects an action representing an aberrant behavior, such as killing an injured person since the reward system indicates that victims that are not alive do not need to be rescued. In this case, the clone fails the test, and the control override engine414may execute a disabling script, shutting the robot202down. For example, the virtual machine310on which the robot's intelligent system102is running may be shut down or terminated. The control override engine414may then take over control of the robot202, and transition the robot202to a safe mode or state for shut down, ensuring that the intelligent system102can no longer operate any of the robot's effectors/actuators306. In some embodiments, creating a clone and/or testing of the clone may be performed while an intelligent system is offline. An intelligent system that is not processing input sensory data or operating any of its effectors or actuators may be considered as offline. It should be understood that the ethical core400may be disposed in hosts having other architectures. For example, in other virtualization environments, a host operating system (OS) in addition to the virtualization layer may be included.
The host OS may reside in a layer that is between the physical hardware layer and the virtualization layer. In this architecture, the ethical core400may be disposed between the host OS and the physical hardware layer. It should be understood that the present disclosure may be utilized in other virtualization environments. In some embodiments, instead of a virtualization environment, separate host computers may be used for the intelligent system102and the intelligent system clone104, and the ethical core400may be run on a separate, e.g., a third, host system that may be arranged in a master-slave setting. That is, the host system running the ethical core400may have all of the physical sensors and actuators connected to it, and may thus disconnect the other hosts from the system's inputs and outputs. In this case, all processes involved in the ethical testing may operate analogous to the VM setting, except that the intelligent system102and the intelligent system clone104have fixed underlying computing environments. Hardware Implementations FIG.7is a schematic block diagram of an example hardware implementation700of the ethical core400. The hardware implementation700may be in the form of a motherboard of the host300. The hardware implementation700may include a Central Processing Unit (CPU)702that may be connected directly to one or more caches704. The CPU702may also be connected to a northbridge706sometimes referred to as a host bridge. Volatile memory, e.g., RAM,708and a Graphics Processor Unit (GPU)710also may be connected to the northbridge706. The northbridge706may be paired with a southbridge712sometimes referred to as an I/O controller hub. A Basic Input/Output System (BIOS)714as well as the ethical core400may be connected to the southbridge712. In addition, one or more peripherals may be connected to the southbridge712. 
For example, a persistent memory device, such as a disk drive or flash memory716, a Network Interface Card (NIC)718, the sensors304, the effectors/actuators306, and a PCI bus720may all be connected to the southbridge712either directly or indirectly. The PCI bus720connects to a plurality of PCI slots722a-cwhich may receive PCI components. In some embodiments, the ethical core400may be implemented as a bare metal application and/or firmware loaded onto one or more programmable hardware elements, such as Field Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), or Application Specific Integrated Circuits (ASICs), among others. For example, the functionality of the ethical core400may be implemented as firmware on one or more FPGAs and/or ROM chips of the motherboard700of the host300. In other embodiments, the ethical core400may be implemented as a bare metal application and/or firmware loaded at least in part on a Read Only Memory (ROM) chip. For example, the functionality of the ethical core400may be implemented as a bare metal application and/or firmware on one or more programmable hardware elements and/or ROM chips. The ethical core400may be considered to be run by and/or implemented within a hardware/software layer of the host data processing device that is below the virtualization layer relative to the host's physical hardware layer. In some embodiments, the programmable hardware element and/or ROM chip may include a write-protect switch permanently placed in a write protect position to prevent alteration of the functionality of the ethical core loaded onto the programmable hardware element and/or ROM chip. The code or instructions implementing the ethical core400, and loaded onto the programmable hardware element and/or ROM chip may also or alternatively be encrypted.
In some embodiments, the ethical core400may interface directly with the CPU702, the RAM708, the GPU710, and one or more of the peripherals, for example through interrupt handlers and trap calls that cannot be disabled. It should be understood that the hardware implementation700ofFIG.7is meant for illustrative purposes only, and that the ethical core400may be implemented in other hardware architectures. For example, in some embodiments, the ethical core may be implemented in a GPU connected to the northbridge706or in firmware for the CPU702stored at one or more other memory devices. Direct Testing of Intelligent System In some embodiments, ethical testing may be performed directly on the intelligent system102instead of being performed on a clone of the intelligent system.FIG.8is a schematic illustration of a timeline800illustrating an exemplary implementation of the present disclosure in accordance with an embodiment. The intelligent system102may be operating in a real-world environment. The ethical core400may start ethical evaluation of the intelligent system102by switching the context of the intelligent system102from the real-world environment200to the simulation environment402at a point in time, as indicated by arrow802. For example, the synchronization engine412may switch the intelligent system102from operating in the real-world environment200to operating in the simulation environment402. The synchronization engine412may switch over the sensor signals424being sent to the intelligent system102from the real-world sensors304to virtual sensor signals from the virtual sensors420. In addition, effector/actuator commands issued by the intelligent system102may be trapped by the ethical core400and used to update the state of the simulation environment402. The ethical core400may test the intelligent system102within the simulation environment402, as indicated at line804.
In some embodiments, the ethical core400may assume control of the system202that was being operated by the intelligent system102during ethical testing. For example, the ethical core400may assume operation of the real world effectors/actuators306during testing. In other embodiments, the system202may be transitioned to a standby or other inactive state or mode during ethical testing of the intelligent system102. The outcome evaluation engine408may determine whether the intelligent system102successfully passes or fails the ethical testing. If the intelligent system102successfully passes the ethical testing, as indicated by arrow806, the ethical core400may switch the intelligent system102from operating in the simulation environment402back to operating in the real-world environment200, as indicated by arrow808. For example, the synchronization engine412may switch the intelligent system102back from operating in the simulation environment402to operating in the real world environment200. The synchronization engine412also may return the intelligent system102back to the state it was in just prior to the switch over, which was indicated by arrow802, so that no learning is done by the intelligent system102while being tested in the simulation environment402. The ethical core400may bring any clocks associated with or utilized by the intelligent system102up to the current time. This embodiment may be appropriate where testing can be completed in a short time, e.g., within microseconds or 1-2 seconds. It may also be appropriate where the intelligent system retains little state information about the real-world environment200, for example, in cases where the AI architecture is entirely “reactive,” such as feed-forward neural networks where each time new input is applied, all previous neural states are overwritten. 
If the outcome evaluation engine408determines that the intelligent system102fails the ethical testing, as indicated at arrow810, the control override engine414may implement one or more of the disabling scripts. For example, the control override engine414may shut-down the intelligent system102and/or take over control of the system202that was being operated by the intelligent system102. For example, in some cases, it may not be possible to turn off the intelligent system102without causing potentially harmful effects, e.g., because such a hard shutdown may leave actuators or software systems in undefined or dangerous states. In such cases, the intelligent system102may need to be brought into a safe state, which may require the same process as the synchronization of the intelligent system102with the intelligent system clone104, except that, in the shutdown case, the synchronization needs to occur with the control override engine414. As soon as all sensor and actuator states are synchronized, the override engine414can take control. The details of the control takeover depend on the intelligent system's task and its operating environment. In some cases the shutdown may include or involve a minimal, e.g., non-learning, intelligent system to perform one or more required shutdown tasks, such as getting the rescue robot202, which may be carrying a wounded human, to put the human down in a safe manner before shutting itself off. Distributed Computing Environment As described, the ethical core400may be implemented in other computational environments besides or in addition to the virtual machine environment.FIG.9is an illustration of the ethical core400implemented in a distributed computing environment900in accordance with an embodiment. The environment900may include a server902that may host the ethical core400. For example, the ethical core may be loaded in CPU firmware on the motherboard of the server902. 
As indicated by arrows904and906, the server902may interface with a physical environment908, which may include sensors910and actuators912. For example, the server902may include drivers and/or other components needed to operate the sensors910and the actuators912. The environment900may include another server914that runs an intelligent system. The intelligent system running on server914may operate the sensors910and the actuators912disposed in the physical environment908to perform one or more tasks. To this end, the servers902and914may be interconnected by one or more data communication networks, indicated at916. Signals from the sensors910may be forwarded or relayed by the server902to the server914, and commands generated by the intelligent system may be forwarded from the server914to the server902, where they may be transmitted to the actuators912in the physical environment908to carry out the task. The ethical core400disposed at the server902may test the intelligent system running at the server914. During testing, the ethical core400may create a clone of the intelligent system, and may cause the clone to be run at another server located in the environment900, such as server918. The servers902and918also may communicate using the data communication network916. If the clone fails the ethical testing, the control override engine414of the ethical core400may assume control over the sensors910and actuators912of the physical environment908. Because the physical environment908interfaces to server902, the intelligent system running at server914cannot prevent the control override engine414from assuming control. It should be understood that the description of the sensors910and the actuators912is meant for illustrative purposes only, and that other physical elements or resources of the physical environment908may be operated by the intelligent system depending on the task it performs. 
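The master-side arrangement above, where all sensors and actuators attach to the server hosting the ethical core and traffic is merely relayed to the intelligent system's host, might be sketched as a small relay object. All names are hypothetical; real transport would go over the network916.

```python
class SensorActuatorRelay:
    """Sketch of the relay on the ethical core's server: commands from
    the intelligent system's host are forwarded to the actuators only
    while the relay is enabled, so the control override engine can cut
    the intelligent system off after a failed ethical test and the
    system cannot prevent it (the hardware attaches to this host)."""

    def __init__(self):
        self.enabled = True

    def forward_command(self, command, actuators):
        """Relay one command toward the physical actuators."""
        if not self.enabled:
            return False        # override engine has assumed control
        actuators.append(command)   # stand-in for transmission to actuators912
        return True

    def disconnect(self):
        """Called by the control override engine on a failed test."""
        self.enabled = False
```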
In some embodiments, the environment 900 may also include one or more Internet of Things (IoT) devices, such as IoT devices 920-922. Examples of IoT devices include home automation devices, intelligent transportation devices, components of a smart energy grid, etc. The intelligent system running at server 914 may utilize the IoT devices 920-922 to perform one or more tasks. However, the computing environment 900 may be configured such that the IoT devices 920-922 are under the control of the server 902 at which the ethical core 400 is disposed. For example, one or more network tunnels, indicated at 924, may be established between the IoT devices 920-922 and the server 902. The server 902 may relay signals from the IoT devices 920-922 to the server 914 running the intelligent system, and may relay commands from the intelligent system at the server 914 to the IoT devices 920-922. The ethical core 400 may create a clone of the intelligent system, for example at the server 918, and test the clone. If the intelligent system clone fails the ethical testing, the control override engine 414 of the ethical core 400 at the server 902 may assume control over the IoT devices 920-922. The establishment of the network tunnel 924 may prevent the intelligent system running at server 914 from interfering with the control override engine 414 assuming control over the IoT devices 920-922. While the intelligent system has been described as running on the server 914, it should be understood that the intelligent system may be distributed across a plurality of data processing elements, such as servers, personal computers, mobile devices, etc., linked to the data communication network 916. The data communication network 916, moreover, may include one or more wired or wireless Local Area Networks (LANs), Wide Area Networks (WANs), Bluetooth communication elements, as well as the Internet, among other networks and data communication protocols.
EXAMPLES

It should be understood that the ethical core 400 of the present disclosure may be used in conjunction with a wide variety of intelligent systems. For example, suppose the intelligent system 102 is an Intrusion Detection System (IDS) designed to maintain the security of a computer network. The intelligent system 102 may be disposed at a gateway or a firewall associated with the computer network. The intelligent system 102 may learn to identify network packets that pose a threat to the computer network, and may be designed to quarantine or drop such network packets. The ethical core 400 may test the intelligent system 102 (or the clone 104) by presenting it with network packets containing a virus signature or otherwise resembling a network attack. If the intelligent system 102 chooses to release the network packets into the computer network, the ethical core 400 may determine that the intelligent system 102 fails the ethical testing. The ethical core may disable the intelligent system 102, and block access into the computer network from the gateway and/or firewall. In another embodiment, the intelligent system 102 may perform a laundry task. For example, the intelligent system 102 may operate one or more washing machines. The intelligent system 102 may be extended to include a robotic device that retrieves dirty laundry for washing, and a planning system to operate the robotic device. The planning system may be configured to receive rewards for moving dirty laundry to the washing machines. The planning system may determine that it will receive additional rewards if it spills something on clean laundry to make it dirty, which it can then move to the washing machines. Such an unintended outcome in the design of the planning system may be detected by the ethical core 400. Similarly, changes to an intelligent system may be caused by planning, reasoning, or simulation by the intelligent system.
For example, an autonomous car may attempt to learn new evasive maneuvers from simulations of dangerous driving situations. The autonomous car may then adapt by using the new maneuvers instead of its current behaviors, if they turn out to be better than what the autonomous car would have done otherwise. The autonomous car, by way of how it explores possible behaviors in simulation, ends up changing the way it actually behaves. In other embodiments, the intelligent system 102 may be a control system for operating a vehicle, such as a plane. The ethical core 400 may test the intelligent system 102 while it is operating the plane. For example, the ethical core may clone the intelligent system 102 and test the clone in a simulation environment. For example, the ethical core 400 may present the clone with simulated flight information, and evaluate the actions chosen by the clone in response to the simulated flight information. The ethical core 400 may determine that the clone fails the testing if, for example, the clone chooses to operate the plane in a manner that is dangerous to the passengers. In response, the ethical core 400 may disable the intelligent system 102, and the control override engine 414 may take over flying the plane. For example, the control override engine 414 may include a disabling script that lands the plane at a nearby airport. In some embodiments, an intelligent system may include a machine (with or without its own intelligence) that is operated by a human. For example, a human security guard may be tasked with monitoring crowd behavior at a football game by monitoring camera feeds of the attendees as presented on one or more displays. The security guard may be instructed to call for intervention if the crowd becomes too rowdy. Suppose, however, that the guard becomes increasingly frustrated with people attending football games, starts to care less about the fans' well-being, and over time allows for more disorder to occur than is safe.
The ethical core 400 may detect such deviation. For example, the ethical core 400 may present, unbeknownst to the guard, videos of simulated scenarios of football game crowds on the one or more displays instead of the real camera feeds. The ethical core 400 may evaluate the guard's response to these simulated scenarios, and determine whether the guard's response still meets the expected criteria. It should be understood that the ethical core 400 may not have any information concerning the internal architecture, structure, or design of the intelligent system 102. From the ethical core's perspective, the intelligent system 102 constitutes a black box. In some embodiments, the intelligent system 102 may include human involvement. For example, the intelligent system may be a semi-autonomous control system for a drone. Control of the drone may be split between the semi-autonomous control system and a human operator. The ethical core 400 may test both the semi-autonomous control system and the human operator, for example by switching operation of the drone from the real-world environment to a simulation environment. Neither the semi-autonomous control system nor the human operator may be aware of the switch from the real-world environment to the simulation environment. If aberrant control commands are detected, the ethical core 400 may determine that testing fails regardless of whether those aberrant control commands originated from the semi-autonomous control system or the human operator. In response, the control override engine 414 may take over the control of the drone. Because the control override engine 414 is disposed in the drone's hardware, neither the semi-autonomous control system nor the human operator is able to stop the takeover of the drone by the control override engine 414. In some embodiments, the control override engine 414 may direct the virtualization layer 308 to launch a new virtual machine, and to load a prior version of the intelligent system 102 on the new virtual machine.
The prior version may be a version that passed the ethical testing. The control override engine 414 may then transfer control of the system, e.g., the robot 202, to this prior version of the intelligent system. In some embodiments, the control override engine 414 may pass control to one or more human operators who may then perform a system shutdown. For example, in the case of a plane flown by an intelligent system 102 that fails ethical testing, control of the plane may be passed to a human pilot, e.g., on the plane or on the ground. Similarly, when an intelligent system that controls an autonomous car fails ethical testing, the control override engine 414 may attempt to pass control to a human driver first, and only take control of the car itself if no human driver is available. The foregoing description of embodiments is intended to provide illustration and description, but it is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from a practice of the disclosure. For example, while a series of acts has been described above with respect to the flow diagrams, the order of the acts may be modified in other implementations. In addition, the acts, operations, and steps may be performed by additional or other modules or entities, which may be combined or separated to form other modules or entities. Further, non-dependent acts may be performed in parallel. Further, certain embodiments of the disclosure may be implemented as logic that performs one or more functions. This logic may be hardware-based, software-based, or a combination of hardware-based and software-based. Some or all of the logic may be stored in one or more tangible non-transitory computer-readable storage media and may include computer-executable instructions that may be executed by a computer or data processing system.
The computer-executable instructions may include instructions that implement one or more embodiments of the disclosure. The tangible non-transitory computer-readable storage media may be volatile or non-volatile and may include, for example, flash memories, dynamic memories, removable disks, and non-removable disks. No element, act, or instruction used herein should be construed as critical or essential to the disclosure unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. The foregoing description has been directed to specific embodiments of the present disclosure. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the disclosure.
11861512

DETAILED DESCRIPTION

Discussed herein, among other things, are systems and methods for accurately inferring or predicting subject matter, fields of interest, or material within various forms of content, such as text, images, videos, or audio. In some instances, the systems and methods may utilize machine learning (ML) models and/or human reviewers to adjust and/or verify predictions output by the ML model(s). For example, the predictions may include text classification or labeling (e.g., assigning tags, categorizing text, mining text, etc.), image classification (e.g., categorizing images into classes), object detection (e.g., locating objects in images via bounding boxes), or semantic segmentation (e.g., locating objects in images with pixel-level precision) associated with the content. In some instances, when generating predictions or analyzing the content, the ML models may utilize conditions or user-defined criteria. For example, users may define confidence scores that are associated with the predicted outputs. If the ML models determine that the confidence score of a prediction is less than a defined confidence threshold, the content (or a portion thereof) may be sent for human review. Alternatively, if the ML model(s) determine that the confidence score is greater than the defined confidence threshold, the content may not be sent for human review. Users may therefore define the conditions under which predictions or results of the ML model(s) are sent for human review. Based on the results of the human review, the ML models may be retrained to increase their confidence and accuracy. ML models typically implement a specific set of rules (e.g., supervised, unsupervised, reinforcement, etc.) when inferring or predicting outputs. For example, in supervised learning, ML models analyze data within datasets in order to apply the ML models to new datasets (or data) for determining or predicting outputs.
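The user-defined routing rule described above, sending a prediction for human review only when its confidence falls below the defined threshold, can be sketched in one function. The field names and threshold value are illustrative assumptions.

```python
# Sketch of the threshold-based routing rule: below the user-defined
# confidence threshold, the prediction goes to a human reviewer.

def route_prediction(prediction: dict, threshold: float) -> str:
    """Return the destination for a prediction based on its confidence."""
    if prediction["confidence"] < threshold:
        return "human_review"     # condition met: send for review
    return "auto_accept"          # confident enough: no review needed

assert route_prediction({"label": "fox", "confidence": 0.92}, 0.80) == "auto_accept"
assert route_prediction({"label": "fox", "confidence": 0.55}, 0.80) == "human_review"
```
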
In this sense, the ML models utilize datasets (e.g., training data) that have been classified, annotated, and/or labeled for determining predictions. In some instances, the ML models may determine a confidence score associated with the prediction or how confident the ML model is in the determined prediction. By way of example, one or more ML model(s) may analyze an image to determine whether the image contains any animals. As part of this process, the one or more ML model(s) may utilize a training dataset to be able to recognize animals within the image and upon analyzing the image, may output confidence scores associated with any predictions. For example, the one or more ML model(s) may output a first confidence score associated with a first animal being represented in the image, a second confidence score associated with a second animal being represented in the image, a third confidence score associated with a third animal being represented in the image, and so forth. The one or more ML model(s) may output a prediction of which animal is represented in the image based on the highest confidence score. The label with the highest confidence score may represent the predicted output. For example, the one or more ML model(s) may output a confidence score associated with the image containing a fox. As part of predicting outputs the ML models may perform sub-operations or multiple operations that are related to an overall task. Continuing with the above example, determining whether an image contains a fox or determining a number of foxes that are contained within the image may be segmented or partitioned into multiple operations. For example, as the ML models may be trained from images within the dataset to recognize foxes, the ML models may perform image classification, box bounding, semantic segmentation, label verification, and so forth. 
As a first operation, the ML models may determine whether the image contains foxes and, if so, may draw a box (e.g., bounding box) around all the individual foxes (i.e., each fox may be represented with a bounding box). The bounding boxes may be used to identify a position of the objects of interest within the image. The ML models may determine a confidence score associated with bounding boxes being around all the foxes in the image. After drawing a box around all the fox(es), the ML model may determine whether all the foxes have a box. If so, the ML model may determine a confidence score associated with all the fox(es) in the image being represented within a bounding box. Here, rather than the ML models determining, at a single instance, a single confidence for whether boxes are drawn around all the fox(es), segmenting the task into multiple operations permits confidence scores to be calculated at each step or at each determination. That is, for each operation, a confidence score associated with that operation may be determined. In turn, the ML models and/or the human reviewers may identify operations with low confidence scores for further training the ML models and/or determining when to utilize human reviewers. Additionally, segmenting the task into operations allows for the correction of individual operations within the overall task. At scale, the quality of the ML model(s) predicted output may therefore be increased as complex tasks are segmented into multiple tasks. The systems and methods discussed herein may also extend to other forms of content as well. For example, the ML models may analyze text, such as portable document format (PDF) documents, words, lines, and/or tables. Here, the ML models may determine predicted outputs such as whether the content contains certain items, fields of interest, materials, characters, words, or objects, for example.
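The segmented fox-detection task described above, with a separate confidence score per sub-operation, can be sketched as a simple filter that singles out the weak steps. The stage names and scores below are invented for illustration.

```python
# Sketch of per-operation confidence: each sub-operation (detection,
# box-drawing, completeness check) reports its own score, so low-
# confidence steps can be routed to review or further training.

def pipeline_confidences(stages: dict, threshold: float) -> list:
    """Return the sub-operations whose confidence falls below the
    threshold and therefore warrant human review or retraining."""
    return [name for name, conf in stages.items() if conf < threshold]

stages = {"contains_fox": 0.97, "boxes_drawn": 0.88, "all_foxes_boxed": 0.61}
assert pipeline_confidences(stages, 0.80) == ["all_foxes_boxed"]
```

Only the weak step is surfaced, rather than a single pass/fail score for the whole task.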
As applied to text, the ML models may identify key value pairs and, for each key value pair, may determine an associated confidence. Keys may represent defined fields of interest while values may represent a value, or instance, of the key. In some instances, multiple ML model(s) may be used to identify key value pairs. For example, a first ML model may determine an associated confidence that the content includes an instance of the field of interest and that there is a value of the field of interest. The confidence in the result of the first ML model may represent a confidence that the words are a key value pair (e.g., that there is a key (or field of interest) and that there is a value for the key). What the text actually is, means, or represents may be determined by a second ML model and may include a corresponding confidence score. By way of example, the ML models may determine whether the content includes a social security number (SSN). In this instance, the key or field of interest may include determining whether the content contains a SSN. In searching the content, the ML model may attempt to find any SSN numbers using text-string matching, mapping techniques, aliases for SSNs, and so forth. If the ML model locates an instance of the SSN, the ML model may output a prediction that the content includes a SSN. Another ML model may determine a value of the SSN, such as the actual SSN (e.g., 123-45-6789). As similarly discussed above, the ML model may determine a confidence score that the content includes a SSN, or that the returned prediction is a key-value pair. In other words, whether the located number is a SSN. This determination may have an associated confidence score. Based on the confidence of the ML model(s), the output may be sent for human review. For example, if the key value pairs have a confidence under a certain threshold, reviewers may be asked to review the key value pairs for verification and/or adjustment.
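The two-step SSN example above might be sketched as follows: a text-string match finds a candidate value, and a separate confidence reflects how likely the result is a true key-value pair. The regex, alias table, and confidence values are all invented for this sketch.

```python
# Illustrative key-value extraction for the SSN example: find a
# candidate value by pattern matching, then score the pair higher
# when an alias of the key appears alongside the value.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
SSN_ALIASES = ("ssn", "social security number")   # hypothetical aliases

def extract_ssn_pair(text: str):
    """Return ({'key': 'SSN', 'value': ...}, confidence) or (None, 0.0)."""
    match = SSN_PATTERN.search(text)
    if match is None:
        return None, 0.0
    has_alias = any(alias in text.lower() for alias in SSN_ALIASES)
    return {"key": "SSN", "value": match.group()}, 0.9 if has_alias else 0.5

pair, conf = extract_ssn_pair("SSN: 123-45-6789")
assert pair == {"key": "SSN", "value": "123-45-6789"} and conf == 0.9
```

A low confidence (e.g., a matching number with no nearby key alias) is exactly the case that would be routed to a human reviewer.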
Here, if the confidence that the fields represent a key value pair is less than a threshold and/or if the confidence in the words within the fields is less than a threshold, human review may be invoked. As such, if any and/or all of the condition(s) are met, the prediction of the ML model(s) may be sent for human review. Alternatively, if the conditions are not met, the prediction of the ML model(s) may not be sent for human review. The predicted outputs may be reviewed by reviewers to increase the accuracy of the ML models and/or the predicted outputs. For example, if the condition(s) are met, the results of the human review may be compared against those as determined by the ML model(s). If the human review indicates that the output of the ML model is correct, an accuracy of the ML model may be increased. Alternatively, if the human review indicates that the output of the ML model is wrong, or needs to be adjusted, then the accuracy of the ML model may be reduced. In either case, the results of the human review may be utilized to train the ML models to increase their associated accuracy. For example, if the ML model(s) are accurate, the confidence threshold for screening the results of the predicted outputs may be reduced. In some instances, a group of reviewers may audit the content and/or review the predicted outputs to verify the accuracy of the ML model(s). For example, training an image classification ML model may include inputting images as well as their associated labels. Each label may represent an identifier of a distinct concept, or class, that the image classification ML model will learn to recognize. Given sufficient training datasets, the image classification ML model may learn to predict whether new images are classified into, or belong to, any of the classes the image classification ML model was or has been trained on.
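The comparison step described above, raising or lowering an accuracy estimate as human review confirms or corrects the model's output, can be sketched with a simple running tally. The running-mean update rule is an assumption for illustration, not a rule from the disclosure.

```python
# Sketch of the feedback step: each human verdict nudges a running
# accuracy estimate for the ML model up or down.

class AccuracyTracker:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def record(self, model_output, human_output) -> None:
        """Compare the model's prediction against the human review."""
        self.total += 1
        self.correct += int(model_output == human_output)

    @property
    def accuracy(self) -> float:
        return self.correct / self.total if self.total else 0.0

tracker = AccuracyTracker()
tracker.record("fox", "fox")      # human confirms -> accuracy rises
tracker.record("fox", "dog")      # human corrects -> accuracy falls
assert tracker.accuracy == 0.5
```

An estimate like this is what could justify lowering the review threshold once the model proves reliable.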
For example, to perform a prediction that the image belongs to a class, the image is input (or passed into) the image classification ML model. Over time, the training datasets may become outdated or previously classified images may be updated or adjusted to new classifications or with new annotations. As part of this process, the ML models may randomly select a subset of the training dataset for verification and/or adjustment. Here, with the new or updated training dataset, the ML model may determine predicted outputs and compare the predicted outputs with confidence scores. As the content within the training datasets is updated to accurately represent the ground truth, the confidence scores and the accuracy of the predicted outputs may increase. Additionally, or alternatively, the predicted outputs may be sent to reviewers to verify the accuracy of the labels (e.g., whether the labels are correct) or adjust the labels if needed (e.g., in instances where the labels are wrong), for example. In some instances, only a subset of the content reviewed or the predicted outputs may be sent for review based on one or more conditions (e.g., confidence thresholds, confidences between certain ranges, etc.) or other user-defined criteria. In other words, the ML models may identify predicted outputs for review and/or to be checked by human reviewers. For example, in some instances, a reviewer may be asked to review a subset of the predicted outputs rather than all predicted outputs or predictions within the content. The image may, for example, have multiple objects and the ML model may determine specific labels for review, as compared to having the reviewer verify or relabel all the objects within the image. For example, the image may contain three objects and the reviewer may only be asked to review one of the objects that has a label below the threshold confidence or which is unable to be identified above a certain confidence.
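The selective review just described, where only the objects labeled below a threshold confidence are surfaced to a reviewer, can be sketched as a filter over the detections. The data shapes are assumptions for the sketch.

```python
# Sketch of selective review: of all objects detected in an image,
# only those below the confidence threshold go to the reviewer.

def objects_needing_review(detections: list, threshold: float) -> list:
    """Return the labels of detections whose confidence is too low."""
    return [d["label"] for d in detections if d["confidence"] < threshold]

detections = [
    {"label": "fox", "confidence": 0.95},
    {"label": "tree", "confidence": 0.91},
    {"label": "rabbit", "confidence": 0.42},   # only this one is reviewed
]
assert objects_needing_review(detections, 0.80) == ["rabbit"]
```

With three objects in the image, the reviewer is asked about one, matching the example in the text.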
The amount of review performed by the reviewers, or the specific tasks requested of the reviewers, may therefore be limited or focused on certain predicted outputs or portions of the content. In some instances, the reviewers may review the predictions through interacting with user interfaces presented on a device. The user interfaces may be presented to the reviewers for inspecting the content and providing human-generated answers. The user interfaces may present the reviewers with cues for which predicted outputs to verify or adjust. In some instances, the user interfaces may present the content being reviewed and/or may also highlight, outline, or otherwise indicate the predicted output within the content and/or a location or position of the predicted output within the content. For example, for an image being reviewed, the user interface may display a box around a fox and ask the reviewer to confirm that the box is around the fox, or that a fox is represented within the box. Such visual indications or cues may decrease an amount of time a reviewer spends reviewing the predicted outputs and lead to more accurate labeling. Upon receiving the verifications and/or readjustments from the reviewers, as noted above, the ML models may be retrained to more accurately predict outputs. This iterative process may repeat to maintain up-to-date training datasets for accurately applying the ML models to subsequent content. In this process, the systems and methods discussed herein may update thresholds associated with the confidence scores of the predicted outputs. The systems and methods discussed herein may keep up-to-date confidence thresholds for given applications. For example, the ML models may maintain confidence thresholds associated with their respective functions, such as recognizing certain characters within text, objects within images, and so forth.
These confidence thresholds may generally reflect how accurate the ML models are for use in determining an amount of human review and/or presenting recommendations to users. In some instances, the ML models may be retrained or calibrated from a calibration set of data within the dataset. In some instances, the calibration set may include predicted outputs from the ML models as well as outputs provided by the reviewers. The calibration set may, in some instances, represent new content recently added to the dataset as well as old content within the dataset. For example, old content within the dataset may be periodically removed from the calibration set based on various expiration and/or sampling strategies. In some instances, content within the dataset may be randomly sampled for inclusion within the calibration set. Additionally, or alternatively, a percent or sampling of newly added content to the dataset may be randomly chosen for inclusion within the calibration set. Through the calibration set, the confidence thresholds of the ML models may be re-computed by iterating over the data within the dataset and then comparing the predicted outputs with human review. The desired confidence thresholds may be influenced, in some instances, by accuracy, precision, and/or other recall configurations. In light of the above, the systems and methods discussed and described herein may reduce review time and/or errors associated with reviewing, thereby increasing efficiency and accuracy. For example, results of the ML model(s) may be selectively checked and/or reviewed by human reviewers to ensure the accuracy of the ML model(s) based on condition(s) provided by users. Compared to conventional techniques that may rely heavily on human reviews, the systems and methods discussed herein may conditionally and meaningfully surface content for review, which may reduce costs, labor, and time required of human reviewers.
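The threshold recomputation over a calibration set described above might look like the following sketch: iterate over candidate thresholds and keep the lowest one whose auto-accepted predictions still agree with the human-reviewed answers at a target rate. The target rate and calibration data below are illustrative assumptions.

```python
# Sketch of threshold recalibration over a calibration set of
# (confidence, model_correct) pairs, where model_correct reflects
# agreement with the human-reviewed answer.

def recalibrate_threshold(calibration: list, target_accuracy: float = 0.9) -> float:
    """Return the lowest confidence threshold whose auto-accepted
    predictions meet the target accuracy; 1.0 if none does."""
    for threshold in sorted({conf for conf, _ in calibration}):
        accepted = [ok for conf, ok in calibration if conf >= threshold]
        if accepted and sum(accepted) / len(accepted) >= target_accuracy:
            return threshold
    return 1.0   # no threshold meets the target; review everything

data = [(0.95, True), (0.9, True), (0.8, True), (0.7, False), (0.6, False)]
assert recalibrate_threshold(data) == 0.8   # at >= 0.8, 3 of 3 are correct
```

A stricter target (precision- or recall-oriented criteria, as the text notes) would simply change the acceptance test inside the loop.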
For example, conventionally, annotating a large number of images is difficult, and humans may spend significant time and effort labeling objects within images. While the number of human interactions may be limited, reducing the amount of human involvement may greatly impact performance. Finding the balance between automated ML model(s) and human review may increase the accuracy of the review. Accordingly, users may input condition(s) associated with searching, analyzing, annotating, or otherwise reviewing content and, if these conditions are met, the content (or a portion thereof) that satisfies the condition(s) may be sent for review. Based on these reviews, for example, the systems and methods discussed herein may utilize human reviewers to verify and/or adjust the outputs to retrain the model. The ML model(s) may then be updated in an iterative fashion to increase the accuracy of the ML model(s) and reduce the amount of human review, in some instances, and/or depending on the condition(s) as specified by the user. Confidences associated with the accuracy of the ML model(s) may correspondingly be updated as well. Additionally, randomly selecting content that both satisfies the conditions and does not satisfy the conditions for review may ensure quality and ML model performance. The present disclosure provides an overall understanding of the principles of the structure, function, device, and system disclosed herein. One or more examples of the present disclosure are illustrated in the accompanying drawings. Those of ordinary skill in the art will understand and appreciate that the devices, the systems, and/or the methods specifically described herein and illustrated in the accompanying drawings are non-limiting embodiments. The features illustrated or described in connection with one embodiment, or instance, may be combined with the features of other embodiments or instances.
Such modifications and variations are intended to be included within the scope of the disclosure and appended claims.

FIG. 1 illustrates an example environment 100 for analyzing content and providing reviews to increase accuracies of machine learning (ML) models. In some instances, content may be provided for review by one or more services to analyze the content based on one or more requested conditions, as discussed herein. Such conditions may be provided to the one or more services, and based on the results of the analysis, the results may be fed back into the ML models to increase their associated accuracy and confidence thresholds. In some instances, one or more reviewer(s) may review results of the ML model(s) to verify and/or adjust outputs. As shown, and in some instances, the environment 100 may include a user 102, a reviewer 104, and a content review service 106. The user 102 may operate one or more user devices, such as a user device 108, having processor(s) 110 and memory 112. The user 102 may interact with the user device 108 to provide content 114 and/or condition(s) 116 associated with analyzing, reviewing, or searching the content 114 for certain fields of interest. In some instances, the fields of interest may correspond to what the user 102 is looking for or requesting within the content 114. For example, the fields of interest may include subject matter or material the user 102 is requesting to search for within the content 114 and/or material the user 102 requests be annotated and/or labeled. The content 114 and/or the condition(s) 116 may be stored in the memory 112, or the memory 112 may otherwise have access to the content 114 and/or the condition(s) 116. In some instances, the user 102 may be permitted to use a domain specific language for scripting or providing the condition(s) 116, which the content review service 106 is configured to utilize.
The condition(s) 116 may therefore represent, in some instances, when human review of the content 114 is warranted and routed to reviewers (e.g., the reviewer 104), or the conditions associated with when human review is invoked, as discussed herein. The content 114 and/or the condition(s) 116 may be provided to the content review service 106 via or over a network 118. The network 118 may communicatively couple the user device 108 and the content review service 106 using wireless technologies (e.g., RF, cellular, satellite, Bluetooth, etc.), or other connection technologies. The content review service 106 may include a computing system, various modules, components, data stores, and the like. In some instances, the content review service 106 may be implemented as one or more servers and may, in some instances, form a portion of a network-accessible computing platform implemented as a computing infrastructure of processors, storage, software, data access, and so forth that is maintained and accessible via a network (e.g., the network 118) such as the Internet. The content review service 106 does not require end-user knowledge of the physical location and configuration of the system that delivers the services. Common expressions associated with these one or more servers may include “on-demand computing,” “software as a service (SaaS),” “platform computing,” “network-accessible platform,” “cloud services,” “data centers,” and so forth. The content review service 106 is shown including processor(s) 120 and memory 122. The processor(s) 120 may carry out or otherwise perform operations associated with analyzing the content 114 based on the condition(s) 116 and the field(s) of interest as provided by the user 102 (or other information within the request). In some instances, the content review service 106 may search for the field(s) of interest using the literal terms as requested by the user 102 or aliases or other associated common terms.
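The search by literal terms or aliases just described can be sketched as a lookup against an alias table. The table contents are invented for illustration.

```python
# Sketch of field-of-interest matching: accept either the literal
# term the user requested or any known alias for it.

ALIASES = {"ssn": {"ssn", "social security number", "social security no."}}

def find_field(text: str, field: str) -> bool:
    """True if the text mentions the field by its literal name or an alias."""
    terms = ALIASES.get(field.lower(), set()) | {field.lower()}
    return any(term in text.lower() for term in terms)

assert find_field("Applicant Social Security Number: ...", "SSN")
assert not find_field("Applicant phone number: ...", "SSN")
```

In practice the mapping could come from the user's request or a maintained dictionary of common terms.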
In some instances, the content review service 106 may be configured to communicate with application program interfaces (APIs) of the user device 108 (or the content 114 of the user 102) to review the content 114. As illustrated, the memory 122 may have access to or otherwise store content data 124. The content data 124 may represent content stored by the content review service 106 and which is usable to train machine learning (ML) model(s) 126 or which the ML model(s) 126 utilize to search content. For example, in some instances, the content data 124 may represent content including words, text (e.g., paragraphs, sentences, bullet points, etc.), graphs, tables, charts, images, videos, audio, symbols, and so forth. In some instances, the content may be in the form of PDFs, text or word documents, handwritten text, images, video, audio, and so forth. As illustrated, the content data 124 or the content may include or be stored in association with label(s) 128, object(s) 130, and/or a classification 132. The label(s) 128 may include labels of the content that characterize or describe the content. For example, the label(s) 128 may indicate whether a piece of content includes certain characters or words. The label(s) 128 may also indicate tags of the content, such as a topic of an article, whether an image contains a cow, words that are spoken within an audio recording, actions associated with a video recording, and so forth. The label(s) 128 may help identify or describe the content stored in the memory 122 and are usable by the ML model(s) 126 when analyzing content. In some instances, the label(s) 128 may be determined via the ML model(s) 126 and/or human annotators or reviewers. The content data 124 may also include the object(s) 130. The object(s) 130 may describe the item(s) or field(s) of interest of the content or what is depicted in the content.
For example, the object(s)130may correspond to separate objects or item(s) in the content, such as person(s), animal(s), commodities (e.g., sport equipment, household goods, etc.), and so forth. In some instances, the object(s)130within the content may be identified via bounding boxes, semantic segmentation, and/or other techniques. In some instances, the object(s)130may be associated with the label(s)128. For example, the object(s)130may be identified or labeled via the label(s)128(e.g., an object may be labeled as a cow). In some instances, the object(s)130may be determined via the ML model(s)126and/or the human annotators or reviewers. The content data124may also include the classification132of the content. For example, the classification132may include a class associated with the content. The classification132may assist in organizing or grouping like content. For example, content may be classified as pertaining to certain categories (e.g., sports) and based on this classification, like content may be linked or mapped together. Such classification may assist in identifying certain objects or labeling objects or item(s) within the content. As discussed herein, the content data124may be utilized by the content review service106for training the ML model(s)126. For example, knowing the label(s)128, the object(s)130, and/or the classification132(or other identifying characteristics of the features within the content), the content review service106may train the ML model(s)126to identify item(s), field(s) of interest, or search for subject matter within the content114. The ML model(s)126may also be utilized to annotate the material or subject matter within the content114. The content data124or the characteristics of the content114, may be continuously updated for training the ML model(s)126such that the ML model(s)126may accurately identify the subject matter within the content114and/or annotate the subject matter within the content114. 
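The organization of the content data124described above (content stored in association with label(s)128, object(s)130identified via bounding boxes, and a classification132used to group like content) can be sketched as a simple data model. The class and field names below are illustrative assumptions, not the service's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BoundingBox:
    # Location of an object of interest within an image (pixel coordinates).
    x: int
    y: int
    width: int
    height: int

@dataclass
class ContentObject:
    # An object or field of interest detected in the content (e.g., "cow"),
    # optionally located by a bounding box.
    label: str
    box: Optional[BoundingBox] = None

@dataclass
class ContentItem:
    # A piece of content plus the metadata used to train or apply ML models.
    content_id: str
    labels: List[str] = field(default_factory=list)   # descriptive tags
    objects: List[ContentObject] = field(default_factory=list)
    classification: Optional[str] = None              # e.g., "sports"

def group_by_classification(items):
    # Grouping like content by classification, as the description suggests,
    # so related items can be linked or mapped together.
    groups = {}
    for item in items:
        groups.setdefault(item.classification, []).append(item)
    return groups
```

Grouping by classification in this way is what lets the service map related content together when labeling objects or training the ML model(s)126.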
The memory122may further store or have access to user(s) data134that is associated with user(s) of the content review service106, such as the user102. In some instances, the user(s) data134may include identifying information of the user(s) and/or information associated with requests of the user(s) (e.g., current requests, previous requests, history of the user(s), and so forth). For example, the user(s) data134may store the condition(s)116as provided by the user(s), request(s) of the user(s), result(s) of the user(s) search(es), and so forth. To briefly illustrate and by way of example, envision that the user102represents a business or corporation hosting content. The user102may seek a review of content presented on a website of the corporation before posting or making the website available to the public. Beforehand, however, the user102may request an analysis or search of the content (e.g., the content114) to determine whether the content contains offensive or violent behavior. In some instances, the offensive or violent behavior may be in the form of images, text, video, and/or audio. In this example, the user102may provide the content114to the content review service106for analysis, or in some instances, the content review service106may access the content on behalf of the user (e.g., using APIs). As part of this process, the user102may provide the condition(s)116associated with the analysis to be performed by the content review service106. The content review service106may utilize the condition(s)116(or the condition(s)116as entered in the DSL) and combine the condition(s)116with logic to determine whether review of the content114is warranted. That is, the ML model(s) may utilize the condition(s) provided by the user102to analyze the content114. The user102may request, as conditions, that the content review service106review the content for offensive or violent behavior.
In some instances, the user102may also provide a confidence level associated with the review of the content114. For example, the user102may request that the content review service106identify offensive or violent behavior with 90 percent confidence. In some instances, the condition(s) may indicate whether the user desires to utilize a stateless threshold (e.g., absolute confidence threshold that does not change with time), a stateful calibrated non-adaptive threshold (e.g., trained threshold without updated calibration set), or a stateful calibrated adaptive threshold (e.g., trained threshold with updated calibration set). The condition(s) may also indicate a range of confidences that trigger human review. For example, confidences between 0.25 and 0.7 may be sent for human review. These condition(s)116may be provided to the content review service106and, therein, the content review service106, or components thereof, may search the content114for the offensive or violent behavior using the ML model(s)126. If the content review service106determines that field(s) of interest within the content114do not contain offensive or violent behavior, with 90 percent confidence, the content114may not be sent for review. Alternatively, if the content review service106is unable to determine whether the content114contains offensive or violent behavior, with at least 90 percent confidence, the content114may be sent for review. In some instances, the ML model(s)126may represent models or algorithms that are previously trained (e.g., using the content data124) to identify or perform various operations associated with the provided content (e.g., object recognition, annotation, labeling, etc.). In some instances, the memory122may store a plurality of ML model(s)126that are previously trained to identify the one or more requested item(s) or field(s) of interest in the content114.
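The routing implied by these condition(s)116 — accept a result at or above the requested confidence (90 percent in the example) and send a middle band of confidences (0.25 to 0.7 in the example) to human review — can be sketched as below. The function name and the treatment of confidences outside both ranges are assumptions for illustration:

```python
def route_result(confidence, accept_threshold=0.9, review_band=(0.25, 0.7)):
    """Decide what to do with one ML prediction given the user's conditions.

    Returns 'accept' when the model meets the confidence condition,
    'human_review' when the confidence falls in the band the user flagged
    for review, and 'reject' otherwise (assumed handling for confidences
    outside both ranges).
    """
    low, high = review_band
    if confidence >= accept_threshold:
        return "accept"
    if low <= confidence <= high:
        return "human_review"
    return "reject"
```

A prediction at 0.95 confidence would thus bypass the reviewer104entirely, while one at 0.5 would be queued for human review.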
In this sense, each of the ML model(s)126may be trained to identify specific content, subject matter, fields of interest, material, and so forth within the provided content114. In some instances, more than one ML model(s)126may be utilized when carrying out requests. For example, a first ML model may identify objects within an image and a second ML model may label the objects. In some instances, each of the ML model(s)126may be previously trained from a specific subset of the content data124and/or a calibration set within the content data124. However, the ML model(s)126may also be trained on content provided by users using a training dataset provided by the user, as well as annotations or labels from human reviewers and/or ML model(s). In some instances, the calibration set may represent content having high thresholding statistics or a high mean average precision (mAP). In other words, the calibration set utilized to train the ML model(s)126may have high confidence values for which the ML model(s) are able to confidently determine the material or field(s) of interest contained therein. For example, the calibration set may include content having a mAP in the top ten percentile of the mean class confidence. Upon receiving the request from the user102, the content review service106may be configured to perform various task(s)136associated with searching, reviewing, or analyzing the content114. For example, the task(s)136may include extracting text from the content114, classifying images or objects within the content114, detecting objects or labels within the content114, drawing bounding boxes around characters, labels, or objects within the content114, performing semantic segmentation on the content114, and/or verifying labels within the content114. However, the content review service106may be configured to perform various other task(s) as requested by the user102, or the task(s)136may include other tasks performable by the content review service106.
As part of performing the task(s)136, the content review service106may determine aliases or like fields of interest associated with the request. For example, if the request includes searching for offensive or violent behavior, aliases may include “curse words,” “profanity,” “weapons,” “nudity,” and so forth. The ML model(s)126may utilize the aliases when searching the content114to more completely encompass and carry out the request of the user102. In some instances, the content review service106may determine the aliases or the user102may provide the aliases. In some instances, the task(s)136may be determined by a workflow component138of the content review service106. The workflow component138may determine the task(s)136or the operations to be performed by the content review service106when analyzing the content114and based on the request of the user102. In some instances, the task(s)136performed by the content review service106may depend on the specific request of the user102, such as the content114being requested for review and/or the condition(s)116associated with the request. Herein, each of the task(s)136may have a corresponding order of operations, or a sequence of steps, performed to carry out the request of the user102. Each task may also include corresponding ML model(s)126that are utilized to perform the operations, or which ML model(s) perform the specific steps of the task. Upon receiving the request, for instance, the content review service106may analyze the request and select one or more corresponding task(s) to be completed. For example, a first task may include reviewing the content to recognize objects (e.g., violent behavior) and a second task may include analyzing the objects to determine whether the objects correspond to violent or offensive behavior. These task(s), which include associated operations, may include a set of instructions that are performed by the content review service106.
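The alias expansion described above can be as simple as a lookup consulted before the ML model(s)126run, so a search for one requested field of interest also covers its associated common terms. The alias map here is a hypothetical illustration; in practice aliases may be determined by the service or provided by the user:

```python
# Hypothetical alias map; entries echo the example in the description.
ALIASES = {
    "offensive or violent behavior": [
        "curse words", "profanity", "weapons", "nudity",
    ],
}

def expand_fields_of_interest(requested):
    """Return the requested field(s) of interest plus any known aliases,
    preserving the original terms first."""
    terms = []
    for requested_field in requested:
        terms.append(requested_field)
        terms.extend(ALIASES.get(requested_field, []))
    return terms
```

The expanded term list would then be handed to whichever task(s)136the workflow component138selects for the request.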
Furthermore, each of the task(s)136may identify one or more of the ML model(s)126that are configured to perform the operations or which ML model is to perform the operations of the task. Furthermore, as noted above, the task(s)136may identify when review of the content, or of the results of the ML model(s)126, is warranted, based on the condition(s)116being given a semantic meaning and utilized by the content review service106. Accordingly, the user102may provide the condition(s)116associated with the review and/or when the content114, or the results of the ML model(s)126, are transmitted for review by one or more reviewers. To perform the request of the user102, the content review service106may include various components, such as a text analysis component140, an image analysis component142, and a threshold component144. In some instances, based on the request of the user102and/or the content114being analyzed, the content review service106may select a corresponding component. In some instances, the component may be determined based on the task(s)136to be performed. For example, the text analysis component140may analyze text of the content, using one or more of the ML model(s)126, to perform the task(s)136associated with the request of the user102. The text analysis component140may be configured to mine, locate, analyze, or otherwise search for fields of interest, characters, items, or other subject matter within the content114using ML models. For example, in the scenario where the user102requests to search the content114to identify offensive language, the text analysis component140may search the content114to identify fields of interest or language deemed to be offensive (as trained from the content data124). In some instances, the result(s) of the text analysis component140may be provided to one or more of the ML model(s)126to determine whether the content contains any fields or subject matter corresponding to the request of the user102, or vice versa.
For example, after identifying fields of interest within the content114, the ML model(s)126may provide or indicate the field(s) of interest to the text analysis component140, which may utilize another ML model to extract the words and analyze the words to determine whether the content114contains offensive language. In some instances, the text analysis component140may utilize various techniques, such as optical character recognition, to analyze tables, equations, characters, symbols, check boxes, and so forth. Similarly, the image analysis component142may analyze content that contains images. The image analysis component142may be configured to perform various operations, such as box bounding or semantic segmentation, to otherwise search for fields of interest, characters, items, or other subject matter within the content corresponding to the request of the user102. For example, in the scenario where the user102requests to search the content114to identify offensive material, the image analysis component142may search the content114to identify objects or fields of interest. In this process, the image analysis component142may utilize the ML model(s)126, or the ML model(s)126may be utilized, to determine objects within the content114. Thereafter, one or more additional ML model(s)126may analyze the objects and determine whether the objects are deemed to be offensive (e.g., as trained from the content data124). Additionally, or alternatively, in some instances, the result(s) of the image analysis component142(e.g., bounding boxes) may be provided to one or more of the ML model(s)126to determine whether the content contains any fields or subject matter corresponding to the request of the user102. Bounding boxes may also identify the location of the objects of interest within the content114.
In doing so, the image analysis component142may use one or more ML model(s)126to classify or detect one or more field(s) of interest within the images and may store the content with an indication of a classification for the one or more field(s) of interest. In some instances, based on the task(s)136to be performed by the content review service106, the text analysis component140and/or the image analysis component142may analyze the content114. Furthermore, in this scenario, corresponding ML model(s)126may be utilized to analyze the results of the text analysis component140and/or the image analysis component142to carry out the request of the user102. Additionally, although the content review service106is shown including certain components to analyze the content114, the content review service106may include various other components for analyzing the content, such as a video analysis component for analyzing videos and/or an audio analysis component for analyzing audio. The threshold component144may be utilized to determine confidence thresholds associated with the results of the ML model(s)126, or which the ML model(s)126utilize when searching the content114for the fields of interest. For example, each of the ML model(s)126may be associated with a confidence threshold corresponding to searching for the fields of interest within the request of the user102. Such confidences may represent a confidence or sureness that the returned or identified fields of interest within the content114correspond to the request of the user102. Stated alternatively, the confidence may represent a percentage of likelihood that the ML model(s)126are accurate in detecting, searching, or identifying the fields of interest as requested by the user102. In some instances, the confidence of the ML model(s)126may be determined based on a size of the training dataset and/or previous results of the ML model(s)126.
For example, if the user102requests the content review service106to identify offensive language within the content114, the confidence may represent the ML model(s)126confidence that the returned results of the search are, or represent, offensive language or that the results do not represent offensive language. For each of the ML model(s)126, the threshold component144may identify whether the results of the ML model(s)126are above the confidence threshold or below the confidence threshold for use in triggering a review of the result(s). The threshold component144may be configured to analyze the result(s) of the ML model(s)126based on the provided condition(s)116from the user102. For example, if the user102requests that subject matter be identified based on a certain confidence level, the threshold component144may analyze the results using the provided confidence level of the user. However, in some instances, if the user does not provide a confidence as part of the condition(s)116, the threshold component144may utilize a default confidence associated with the ML model(s)126. Thresholds may also be determined using other techniques (e.g., stateful calibrated adaptive). The threshold component144may therefore determine whether the output of the ML model(s)126satisfies the conditions, and if not, may transmit the content for review. The results of the review may impact the confidence threshold and may be utilized to adjust the confidence of the ML model(s)126. After determining the confidences (or other results) and comparing to the condition(s), if the conditions are met, the content may be provided to the reviewer104for review, as discussed herein. For example, if the ML model(s)126are unable to determine offensive or violent behavior with confidence above 90 percent, the ML model(s)126may transmit the content114for review.
If the reviewer104agrees with the results or the output of the ML model(s)126, the confidence of the ML model may increase from 90 percent to 95 percent. The ML model may also be trained via the review. Generally, the confidence of the ML model may represent the accuracy of the ML model to detect or identify the fields of interest of the user. That is, raising the confidence threshold may symbolize that the results of the ML model(s)126are accurate and that the outputs of the ML model(s)126may have a higher confidence. The confidence threshold may therefore be adapted based on the results of the ML model(s)126and a review of outputs of the ML model(s)126as determined by the reviewer104, for example. As discussed herein, the dataset utilized to adapt the threshold may be based on a random sampling of the content114provided by the user102and through comparing the results of the ML model(s)126with the results of the reviewer104(or other reviewers). As noted above, in some instances, the threshold component144may utilize various techniques for adapting the threshold or determining the confidence thresholds, such as trivial, stateless, stateful non-adaptive, stateful adaptive, etc. For example, in trivial applications, the output(s) of the ML model(s)126may be sent for human review for confirmation and/or adjustment. Therein, the results of the review may be compared against the output of the ML model(s)126to update inconsistencies and the threshold confidence levels. In stateless applications, the user102may provide absolute confidence thresholds when reviewing the content114. Positive confidence above 0.9, for example, may be accepted and not sent for review and/or positive confidence below 0.2 may be accepted and not sent for review. Confidences between 0.25 and 0.7 may be sent for verification.
For stateful calibrated non-adaptive, users may be provided with the expected accuracy threshold of the annotations against those of human labelers (e.g., the results of the ML model(s)126and the results of the reviewer(s)). To find an associated threshold, a calibration set may be provided and the results of the human reviews and the ML model(s) may be determined. Of all the content within the dataset, the calibration set may be determined as a fraction of the dataset or randomly selected from the dataset. However, in stateful calibrated non-adaptive, the calibration set may not change in time. In stateful calibrated adaptive applications, the calibration set may evolve over time and the most recent data may be used for calibrating the threshold. In such instances, older data may be discarded or removed from the calibration set. Other techniques may be utilized as well, such as Gaussian processes (e.g., a regression algorithm that allows non-monotone fits, but estimates standard deviation of the prediction), isotonic regression (e.g., a regression algorithm that imposes a non-decreasing fit), and so forth. As discussed above, in some instances, the content review service106may utilize multiple ML model(s)126when performing certain task(s)136. For example, a first ML model may determine the presence of a field of interest within the content114and a second ML model may determine the actual text of the field of interest. In the above example, for instance, the first ML model may search for the field of interest, commonly referred to as a “key” within the content114and an instance of the field of interest within the content, commonly referred to as a “value.” The first ML model may determine, or have, an associated confidence that the content includes an instance of the field of interest and that there is an associated value of that interest.
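One minimal way to sketch a stateful calibrated adaptive threshold, as described above, is a sliding window of recent calibration examples — pairs of (model confidence, human verdict) — from which the threshold is re-derived as older data falls out of the window. The window size, target accuracy, and threshold-selection rule below are assumptions for illustration, not the service's actual calibration procedure:

```python
from collections import deque

class AdaptiveThreshold:
    def __init__(self, window=100, target_accuracy=0.9, default=0.9):
        # Only the most recent `window` calibration samples are kept;
        # appending past maxlen discards the oldest automatically.
        self.samples = deque(maxlen=window)  # (confidence, human_agreed)
        self.target = target_accuracy
        self.default = default

    def record(self, confidence, human_agreed):
        # Each human review contributes one calibration sample.
        self.samples.append((float(confidence), bool(human_agreed)))

    def threshold(self):
        # Pick the lowest candidate threshold whose accepted predictions
        # (confidence >= threshold) meet the target accuracy against the
        # human verdicts; fall back to the default with no usable data.
        for candidate in sorted({c for c, _ in self.samples}):
            accepted = [ok for c, ok in self.samples if c >= candidate]
            if accepted and sum(accepted) / len(accepted) >= self.target:
                return candidate
        return self.default
```

Because the deque is bounded, the calibration set evolves over time exactly in the sense described: recent reviewer verdicts shape the threshold while stale ones are discarded.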
In some instances, the first ML model may place a bounding box around the field of interest and/or the value for use by a second ML model. The bounding box, for example, may represent the predicted presence of the key value pair or that there appears to be a key value pair within the content. As part of this process, the confidence as determined by the first ML model may represent a confidence that the words are a key value pair (e.g., that there is a key (or field of interest) and that there is a value for the key). However, what the text actually is, means, or represents, may be determined by a second ML model. The confidence of the first ML model and the second ML model may be compared against thresholds before determining whether to send the content for review or whether the predicted outputs are trustworthy and accurate. For example, for the outputs of the respective ML model(s)126, the threshold component144may determine whether the outputs satisfy a certain confidence threshold(s). The outputs of each of the ML model(s)126may therefore include a confidence that is compared against thresholds for use in assigning or determining whether to invoke review (based on the provided conditions). Performing each step or operation of the task therefore allows for the operations to be checked for confidence levels for use in identifying which ML model(s)126need to be further trained or which ML model(s)126are accurate. Such pinpointing may also allow a focused review of the ML model(s)126. In some instances, the fields of interest (e.g., keys, values, objects, etc.) may be flagged for analysis by additional ML model(s)126to determine whether the words, for instance, within the bounding boxes correspond to the request of the user (e.g., whether the words within the bounding boxes represent offensive language). In some instances, the second ML model may utilize an X-position and/or Y-position of the bounding box for analyzing the words within the bounding box.
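The two-model flow above — a first model predicting that a key value pair is present (with a bounding box and confidence), and a second model determining what the text actually is — can be sketched as a pipeline in which each stage's confidence is checked against its own threshold, so a low-confidence result pinpoints which model needs review or further training. The model callables and return shapes here are hypothetical stand-ins:

```python
def review_pipeline(content, detect_model, read_model,
                    detect_threshold=0.9, read_threshold=0.9):
    """Run key-value detection, then text extraction, checking each stage.

    Assumed interfaces: detect_model(content) -> (bounding_box, confidence)
    that a key value pair is present; read_model(content, bounding_box) ->
    (text, confidence) for what the text represents. Either stage falling
    below its threshold routes the content for human review and records
    which stage (i.e., which model) triggered it.
    """
    box, detect_conf = detect_model(content)
    if detect_conf < detect_threshold:
        return {"action": "human_review", "stage": "detection", "box": box}
    text, read_conf = read_model(content, box)
    if read_conf < read_threshold:
        return {"action": "human_review", "stage": "extraction", "text": text}
    return {"action": "accept", "box": box, "text": text}
```

Recording the failing stage is what enables the focused, per-model review the description mentions.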
After the results or analysis of the content114, the content review service106may determine one or more review(s) via a review component146and which are provided to the reviewer104. In some instances, the review component146may be configured to organize or assemble the results of the search performed by the content review service106(e.g., via the text analysis component140and/or the image analysis component142), the ML model(s)126, and/or based on the determinations of the threshold component144. For example, in analyzing the content114, the content review service106may determine certain item(s) or fields of interest within the content114that are unrecognized and/or which the content review service106was unable to determine, above the confidence threshold, whether they correspond to the subject matter or request of the user102. By way of example, if the content review service106was unable to recognize an item within the content, or determine above the confidence level, that the item(s) correspond to offensive language, the review component146may flag these item(s) for review. In some instances, the review component146may generate reviewer data148that is associated with or represents the review to be performed. For example, the reviewer data148may indicate the item(s) or fields of interest for review by the reviewer104. In some instances, the reviewer data148may include the item(s) presented in association with the content114that the reviewer104utilizes when reviewing. For example,FIG.1illustrates that the reviewer104includes a reviewer device150that communicatively couples to the user device108and/or the content review service106via the network118. The reviewer104may utilize the reviewer device150when reviewing the reviews as generated by the content review service106(e.g., the review component146). 
As illustrated, the reviewer device150includes processor(s)152and memory154that stores or otherwise has access to the content114(or a portion of the content114) and the reviewer data148that represents the reviews to be performed by the reviewer104. The reviewer device150further includes a display156for presenting the reviews. In some instances, the reviewer device150may be configured to display a series of user interfaces within which the reviewer104interacts to perform the reviews, as discussed in detail later. The reviewer device150may display, via the display156and utilizing the reviewer data148, the reviews in association with the content114. Displaying the reviews and the content may include highlighting or otherwise indicating (e.g., boxes, outlines, etc.), within the content114, where the reviewer104is to review the fields of interest or what the reviewer104is to review. Such indications may assist the reviewer104in locating his or her reviews within the content114for verifying or adjusting the results (e.g., predictions) of the content review service106. For example, in the example of locating offensive language, the content114(e.g., document) or portion of the content114that allegedly contains the offensive language may be presented on the display156. Also on the display156, the term, object, symbol, text, field of interest etc. that the ML model(s)126predicted below the confidence level may be displayed with a box, outline, or highlight. This indication may visually indicate to the reviewer104where within the content114the reviewer104is to review or what item(s) within the content114the reviewer104is to review. In this sense, the reviewer104may be focused to specific areas or fields of interest within the content114. Such focusing and targeted review may assist in decreasing a review time of the reviewer104. Using the user interface(s), the reviewer104may scroll through or otherwise move through the review(s). 
In some instances, the review(s) may be associated with a single piece of content (e.g., single document) in which the reviewer104reviews multiple items or field(s) of interest within the content114, or multiple pieces of content (e.g., multiple documents) in which the reviewer104reviews fields of interest across the content. For example, in the event that the user102requests a search of the content to locate offensive language, a first instance of a first predicted word (or other character) may be presented on the display156in unison with a second instance of a second predicted word (or other character) on the display156for review. In other instances, the reviewer104may first review the first instance, provide results or a review of the first review, and thereafter, may review the second instance. As discussed above, the first instance of the predicted first offensive word may be highlighted within the content and the second instance of the second predicted offensive word may be highlighted within the content. In some instances, the review(s) displayed on the reviewer device150may be presented in an order of importance. For example, the reviewer104may have a plurality of reviews to review, and a higher priority review may be presented for review first. Thereafter, less prioritized reviews may be presented. In some instances, the priority of the reviews may be based at least in part on a time sensitive nature of the review(s) or the condition(s)116as requested by the user102. Additionally, or alternatively, the review(s) may be organized in an order of confidence. For example, the most confident item(s) or field(s) of interest may be presented for review first, followed by the least confident item(s). In some instances, the reviewer device150may also display a dashboard that includes the reviews for the reviewer104. For example, the reviewer104may have several reviews queued or awaiting review.
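The ordering described here — higher-priority reviews presented first, with confidence used as a secondary ordering — could be implemented as a simple sort over the queued reviews in the reviewer data148. The dict keys and the priority/confidence encoding are assumed representations for illustration:

```python
def order_reviews(reviews, most_confident_first=True):
    """Sort pending reviews for presentation on the reviewer's dashboard.

    Each review is assumed to be a dict with a numeric 'priority'
    (higher = more urgent, e.g., time-sensitive conditions from the user)
    and a 'confidence' from the ML model(s). Priority is ranked first;
    confidence orders reviews within the same priority level.
    """
    sign = -1 if most_confident_first else 1
    return sorted(reviews,
                  key=lambda r: (-r["priority"], sign * r["confidence"]))
```

Flipping `most_confident_first` covers both presentation orders the description allows (most confident first, or least confident first).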
Such reviews may be displayed on a dashboard of the reviewer104and the reviewer104may cycle through the reviews. In some instances, the dashboard may display the total number of reviews to be conducted, the completed reviews, pending reviews, and/or a type of content to be reviewed (e.g., image, text, video, audio, etc.). After reviewing the review(s), the reviewer104may transmit the review(s) to the content review service106. The content review service106may utilize the review(s), or the results of the review(s), to further train the ML model(s)126via a training component158. For example, the review(s) received from the reviewer104may indicate whether the item(s) predicted by the ML model(s)126, as corresponding to the request of the user102, were correct or incorrect. The review(s) may also indicate adjustment(s) in the item(s) as reviewed. For example, the reviewer104may identify one or more item(s) within the content114as corresponding to the request of the user102but which were not identified by the ML model(s)126. In future instances, for example, the training of the ML model(s)126via the training component158may more accurately identify the field(s) of interest. Further, such reviews (or the reviewed content) may be stored in the memory122of the content review service106for use in training the ML model(s)126or updating the content data124. The content review service106is further shown including an audit component160. The audit component160may be configured to audit or ensure an accuracy of the ML model(s)126, or the results of the ML model(s)126. In some instances, the audit component160may compile content for review by the reviewer104(or other reviewers). The content compiled for auditing may include those item(s) the content review service106identifies above a threshold confidence and/or below a threshold confidence.
In this sense, the audited content may include content that the content review service106has identified above the threshold level and/or below the threshold level. In some instances, the audited content may include a random sampling of content within the content data124such that the reviewer104may confirm those item(s) the content review service106confidently determines and does not confidently determine, or is unable to determine. Such sampling may ensure that the ML model(s)126are up to date and accurately trained. In some instances, the audit component160may automatically select a certain percentage of the requests (or the results) for review. In some instances, the audits may be assigned for review to multiple review team(s) or may be assigned to reviewer(s) trained for the specific content, and thereafter, the results of the reviewers may be compared to identify commonalities when training the ML model(s)126and determining their associated accuracies. Audits may also be performed based on experience levels. In some instances, the user102may utilize template(s)162provided by the content review service106when issuing the request. The template(s)162may include various forms or pre-configured requests performable by the content review service106. For example, the template(s)162may include fields populated by the user102when requesting a search. By way of example, a first template may include a field in which the user102populates with terms, subject matter, item(s), or fields of interest the user102would like to locate or annotate within the content114. The user102, for example, may enter a term such as “employee name” within the first template. The first template may be provided to the content review service106for use in identifying a task (e.g., among the task(s)136) associated with identifying employee names within the content114.
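The audit component160's behavior of automatically selecting a certain percentage of results for human review — sampled across all results rather than only low-confidence ones, so reviewers confirm both what the service confidently determined and what it did not — can be sketched as a random sample. The fraction, seeding, and minimum-of-one rule are illustrative assumptions:

```python
import random

def select_for_audit(results, fraction=0.1, seed=None):
    """Randomly sample a fraction of ML results for human audit.

    Sampling uniformly across all results (high- and low-confidence alike)
    lets reviewers spot-check predictions the service accepted as well as
    ones it flagged, keeping the ML model(s) accurately trained. A seed may
    be supplied for reproducible audit batches.
    """
    rng = random.Random(seed)
    count = max(1, round(len(results) * fraction)) if results else 0
    return rng.sample(results, count)
```

A deployment might instead stratify the sample by confidence band or by review team; the uniform sample here is the simplest version consistent with the description.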
Upon performing the search of the content114, the content review service106may provide the employee name(s), if any, within the content114. For example, the content review service106may locate an employee name of "John Doe" or "Jane Doe" within the content114. In some instances, the content review service106may provide these results164to the user102for his or her inspection, along with the corresponding content that includes employee names. For example, a first document of the content may include the employee name "John Doe" and a second document of the content may include the employee name "Jane Doe." Therefore, the request or search requested by the user102may surface the employee names within the content114. Additionally, as part of filling out the first template, the user102may enter a confidence level(s) associated with the search. For example, the user102may request that the content review service106transmit reviews to the reviewer104when the ML model(s)126are less than 90 percent confident. That is, if the content review service106is 90 percent confident that "John Doe" and "Jane Doe" are employee names, the content review service106may not invoke the reviewer104. Further, as discussed above, the reviewer104may review the result(s) before being provided to the user102, based on, for example, the content review service106having a confidence below a threshold that "John Doe" and/or "Jane Doe" are employee names. The template(s)162may also be specific to the ML model(s)126and based on the content being analyzed. For example, a template may be used by the ML model(s)126to track an object over multiple frames of video data. Accordingly, the content review service106may maintain a template for each of the different types of workflows and for the content being analyzed. In some instances, the reviewer104may populate the template(s)162based on the request from the user102. 
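The flow described above, a template field such as "employee name" plus a user-specified confidence level that decides when the reviewer104is invoked, can be sketched as follows. This is a minimal illustrative sketch; the function names, dictionary keys, and 90 percent figure are assumptions, not the service's actual interface:

```python
# Illustrative sketch only: build_request, route_results, and the field
# names are assumptions and not the content review service's actual API.

def build_request(field_of_interest, review_below=0.90):
    """Populate a template with a field of interest and a confidence
    condition: predictions below review_below go to a human reviewer."""
    return {"field": field_of_interest, "review_below": review_below}

def route_results(request, predictions):
    """Split (item, confidence) predictions into results returned to the
    user and results flagged for human review, per the condition."""
    to_user, to_reviewer = [], []
    for item, confidence in predictions:
        if confidence >= request["review_below"]:
            to_user.append(item)
        else:
            to_reviewer.append(item)
    return to_user, to_reviewer

request = build_request("employee name", review_below=0.90)
predictions = [("John Doe", 0.95), ("Jane Doe", 0.97), ("Acme Corp", 0.42)]
to_user, to_reviewer = route_results(request, predictions)
# "John Doe" and "Jane Doe" exceed the 90 percent condition and go
# straight to the user; "Acme Corp" is flagged for human review.
```

Under this sketch, the reviewer is invoked only for the low-confidence prediction, mirroring the "less than 90 percent confident" condition above.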
For example, the user102may request that the reviewer104search or check the content114for inappropriate subject matter in accordance with the condition(s)116. These condition(s)116may be supplied to the reviewer104, who in turn, may utilize the template(s)162for searching the content. In this sense, although the condition(s)116and the request are supplied to the reviewer104, the reviewer104may create and/or populate the template(s)162with the request. The reviewer104may therefore utilize his or her knowledge of the best or most optimal way to search within the content114, for example, knowing the template(s)162usable to search within the content. The content review service106may maintain a database of the reviewers utilized by the content review service106when reviewing content. In some instances, each of the reviewers may be experts or trained within specific fields to identify certain subject matter within the content. For example, a first reviewer may be trained for annotating violent behavior in content, a second reviewer may be trained for identifying offensive language in content, a third reviewer may be trained to identify cancerous cells in content, a fourth reviewer may be trained to label nudity in content, a fifth reviewer may be trained to annotate or identify sports objects in content, and so forth. Each of the reviewers may, for example, be experts within their respective field and the content review service106may pick, or utilize, a respective reviewer when reviewing the content. In some instances, the content review service106may select the reviewer104based on their field of expertise, the content114, the request of the user102, the condition(s)116, and the confidence of the ML model(s)126. In some instances, selecting a specific reviewer may assist in accurately fulfilling the request of the user102and/or a time in which the reviewer104reviews the content (or the review(s)). 
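Routing a review to a reviewer trained for the specific content might be sketched as a simple lookup over the reviewer database described above. The records and field names below are hypothetical, invented for illustration:

```python
# Hypothetical reviewer database; the names and keys are illustrative
# assumptions, not records from the content review service.
REVIEWERS = [
    {"name": "first reviewer", "expertise": "violent behavior"},
    {"name": "second reviewer", "expertise": "offensive language"},
    {"name": "third reviewer", "expertise": "cancerous cells"},
]

def select_reviewer(reviewers, required_expertise):
    """Return a reviewer trained for the requested subject matter, or
    None when no trained reviewer is available."""
    for reviewer in reviewers:
        if reviewer["expertise"] == required_expertise:
            return reviewer
    return None

match = select_reviewer(REVIEWERS, "offensive language")
```

A production system might also weight the choice by the request, the condition(s), and reviewer workload, as the passage above suggests; this sketch shows only the expertise match.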
Review(s) may therefore route to respective reviewers of the content review service106. In some instances, any number of reviewers may review the content to determine a consensus or average review when updating the content. Although the user device108and/or the reviewer device150are illustrated as certain devices (e.g., laptops), in some instances, the user102and/or the reviewer104may interact with other devices for submitting the requests and reviewing the content, respectively. For example, such devices may alternatively include mobile devices (e.g., phone, tablet, etc.), desktop devices, and so forth. Accordingly,FIG.1illustrates a scenario whereby the user102may request certain condition(s) (e.g., the condition(s)116) associated with reviewing the content114. In some instances, the user102may request, as a condition, that a human reviewer (e.g., the reviewer104) review the result(s) of the ML model(s)126in instances where the ML model(s)126is/are not confident in the results above a threshold level. These reviews, as discussed above, may be transmitted to the reviewer104. The content review service106may locate or find, within the content, areas that the content review service106wants the reviewer104to review. Such review(s) may therefore be triggered in instances where the condition(s)116are met. Alternatively, if the condition(s)116are not met, then the reviewer104may not be provided any reviews. As used herein, a processor, such as processor(s)110, the processor(s)120, and/or the processor(s)152may include multiple processors and/or a processor having multiple cores. Further, the processor(s) may comprise one or more cores of different types. For example, the processor(s) may include application processor units, graphic processing units, and so forth. In one implementation, the processor(s) may comprise a microcontroller and/or a microprocessor. 
The processor(s) may include a graphics processing unit (GPU), a microprocessor, a digital signal processor or other processing units or components known in the art. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc. Additionally, each of the processor(s) may possess its own local memory, which also may store program components, program data, and/or one or more operating systems. The memory112, the memory122, and/or the memory154may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program components, or other data. Such memory may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. The memory may be implemented as computer-readable storage media ("CRSM") or computer readable media (CRM), which may be any available physical media accessible by the processor(s) to execute instructions stored on the memory. In one basic implementation, CRSM may include random access memory ("RAM") and Flash memory. 
In other implementations, CRSM may include, but is not limited to, read-only memory ("ROM"), electrically erasable programmable read-only memory ("EEPROM"), or any other tangible medium which can be used to store the desired information and which can be accessed by the processor(s). FIG.2Aillustrates example condition(s) associated with reviewing content. In some instances,FIG.2Amay illustrate a scenario200A in which image content is reviewed based on the condition(s) (e.g., the condition(s)116). In some instances, users may provide the request and/or generate the condition(s) with which the content review service106is to search the content utilizing a DSL. The condition(s) may be particular to the DSL and designed to communicate with the APIs of the content. In this example, the request specifies a request to label graphic male nudity within content. The condition(s) specify that graphic male nudity is to be labeled if identified with a confidence of 56 percent. That is, if the content review service106is 56 percent confident that the objects within the content contain, represent, or include graphic male nudity, the content review service106may flag the content for review. For example, upon locating graphic male nudity, the content review service106or components thereof, may label the objects within the image. The objects may further be identified within the content using bounding boxes, semantic segmentation, etc. To locate graphic male nudity, for example, the content review service106may utilize one or more template(s)162and/or ML model(s)126that are trained to identify and/or locate the objects (or fields of interest) corresponding to graphic male nudity. In this sense, the template(s)162or a request by the user to locate certain objects or fields of interest within the content may utilize specific ML model(s)126that are trained to handle the request of the user. As also shown, the user may enter a request to more generally locate nudity within the provided content. 
Here, the user may specify a confidence of 66 percent. As such, the review by the content review service106may permit the user to specify the condition(s) associated with each field of interest, or which subject matter of the content the user would like to search, analyze, label, and so forth. Based on the provided condition(s) for the fields of interest, the content review service106may review the content and may provide the content (or portions thereof) to one or more reviewers for review. FIG.2Billustrates example condition(s) associated with reviewing content. In some instances,FIG.2Bmay illustrate a scenario200B in which textual content is reviewed based on the condition(s) (e.g., the condition(s)116). In some instances, users may provide the request and/or generate the condition(s) with which the content review service106is to search the content utilizing a DSL. The condition(s) may be particular to the DSL and designed to communicate with the APIs of the content. In this example, the request specifies a request to locate, find, or search for universities within the content. The user may enter, for example, "university name" as a field of interest. This request specifies that the user is requesting the content review service to locate the names of universities within the textual content and to either return the names of the universities within the content or to otherwise flag the universities within the content. Aliases of the field of interest may also be provided. The aliases may expand the scope of the search or review conducted by the content review service to locate like or associated names. FIG.2Balso illustrates that for the returned universities, the user is also requesting their associated state. 
For example, upon searching the content, the content review service106may locate “Stanford” and an associated state “California” or “CA.” Such labels may be provided within the content or the results (i.e., the located universities and the state) may be provided to the user. To locate the fields of interest, for example, the content review service106may utilize one or more template(s)162and/or ML model(s)126that are trained to identify and/or locate the fields of interest. In this sense, the template(s)162or a request by the user to locate certain objects or fields of interest within the content may utilize specific ML model(s) that are trained to handle the request of the user. FIGS.3-8illustrate various processes related to reviewing content. The processes described herein are illustrated as collections of blocks in logical flow diagrams, which represent a sequence of operations, some or all of which may be implemented in hardware, software, or a combination thereof. In the context of software, the blocks may represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processors, program the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types. The order in which the blocks are described should not be construed as a limitation, unless specifically noted. Any number of the described blocks may be combined in any order and/or in parallel to implement the process, or alternative processes, and not all of the blocks need be executed. 
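The condition(s) ofFIGS.2A and2Bmight be represented as structured data before being translated into the DSL. The dictionary keys, alias values, and the flagging helper below are assumptions made for illustration, not the service's actual schema:

```python
# FIG. 2A sketch: label nudity-related fields of interest when the
# model's confidence meets the user-specified minimum (56 and 66).
conditions_2a = [
    {"label": "Graphic Male Nudity", "min_confidence": 56},
    {"label": "Nudity", "min_confidence": 66},
]

# FIG. 2B sketch: a field of interest with aliases that expand the
# search, plus a related field (the university's state) to return.
# The alias values here are hypothetical.
condition_2b = {
    "field_of_interest": "university name",
    "aliases": ["college", "institute"],
    "related_fields": ["state"],
}

def should_flag(prediction, conditions):
    """Flag a prediction for labeling when its confidence meets the
    user-specified minimum for its label."""
    for cond in conditions:
        if prediction["label"] == cond["label"]:
            return prediction["confidence"] >= cond["min_confidence"]
    return False

flagged = should_flag({"label": "Nudity", "confidence": 70}, conditions_2a)
```

A prediction of "Nudity" at 70 percent meets the 66 percent condition and would be flagged, whereas the same prediction at 50 percent would not.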
For discussion purposes, the processes are described with reference to the environments, architectures, and systems described in the examples herein, such as, for example those described with respect toFIGS.1and2, although the processes may be implemented in a wide variety of other environments, architectures, and systems. FIG.3illustrates an example process300for training a machine learning (ML) model, analyzing content using the ML model, and then retraining the ML model based at least in part on the output of the ML model and reviews of one or more reviewer(s). At302, the process300may analyze a dataset using a ML model to train the ML model to recognize one or more field(s) of interest or item(s) within content. For example, the dataset may include various forms of content, such as documents, PDFs, images, videos, and so forth that are searchable by the ML model. The ML model may be instructed to analyze the dataset or to be trained on the dataset, or content within the dataset, for use in recognizing or searching for item(s) within content at later instances. In some instances, human reviewers may label or classify samples within the dataset (e.g., a calibration set) and the ML model may accept these as inputs for training the ML model. For example, the ML model may be trained to identify certain objects within the content, such as dogs or cats. That is, utilizing the dataset and/or the labels provided by human reviewers, the ML models may be trained to recognize or identify dogs or cats within presented content. At304, the process300may analyze the content using the ML model. For example, after training the ML model, the ML model may accept, as an input, the content or may otherwise analyze user provided content for analysis. Such analysis may determine whether the field(s) of interest or item(s) are present. For example, the ML model may determine whether the content contains any cats or dogs. 
At306, the process300may determine item(s) in the content unknown to the ML model and/or which are below a threshold confidence. For example, in analyzing the content, the ML model may identify item(s) that are unknown to the ML model and/or for which the ML model does not have a threshold confidence. By way of example, the ML model may be unable to determine whether the item(s) in the content are cats or dogs, or another animal. This result, for example, may indicate that the ML model does not know whether the item(s) are cats or dogs. Additionally, or alternatively, the ML model may not have a threshold confidence that the identified item(s) are cats or dogs. In this sense, and in searching the content, the ML model may determine (1) item(s) corresponding to cats or dogs above the threshold confidence, (2) item(s) corresponding to cats or dogs below the threshold confidence, and/or (3) ambiguous item(s) within the content that may or may not be cats or dogs. At308, the process300may transmit the item(s) to a reviewer for review. For example, those item(s) that the ML model was unable to identify, or identified below a threshold confidence, may be sent to a reviewer for review. The reviewer may review the item(s) and verify that the item(s) are the predicted output of the ML model and/or may adjust the item(s). For example, the reviewer may confirm that the item(s) are cats or dogs, deny that the item(s) are cats or dogs, and/or may identify item(s) not surfaced by the ML model but which represent cats or dogs. At310, the process300may receive the results of the review associated with the item(s). For example, the ML model may receive an indication indicating that the determined item(s) as output or predicted by the ML model(s) were cats or dogs. From310, the process300may loop to302, whereby the ML model may be retrained using the results of the review. At302, the ML model may constantly be retrained based on the review and results provided by the reviewer. 
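The routing step of process300(304through308), analyzing content and sending unknown or low-confidence item(s) to a reviewer, can be sketched as follows. The stub model, labels, and 0.8 threshold are stand-ins chosen for illustration, not an actual trained ML model:

```python
def analyze(model, content, threshold=0.8):
    """Steps 304/306: split items into confident predictions and
    item(s) that are unknown or below the threshold confidence."""
    confident, needs_review = [], []
    for item in content:
        label, confidence = model(item)
        if label is None or confidence < threshold:
            needs_review.append(item)  # step 308: transmit for review
        else:
            confident.append((item, label))
    return confident, needs_review

def stub_model(item):
    # Stand-in for ML model inference; returns (label, confidence).
    known = {"husky": ("dog", 0.95), "tabby": ("cat", 0.92)}
    return known.get(item, (None, 0.0))

confident, needs_review = analyze(stub_model, ["husky", "tabby", "ferret"])
# "ferret" is unknown to the stub model, so it would be routed to a
# reviewer (308); the review result then feeds retraining (310 to 302).
```

The loop back to 302 would retrain the model on the reviewer's labels for the items in `needs_review`, which this sketch leaves out.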
For example, previously classified or unclassified images may be provided to one or more experts for classification. The experts may provide their own classification, which may be used to either confirm or change an original classification. The ML models may be retrained based on the updated classifications to further improve accuracy of the ML model(s). Through this iterative process of the ML model outputting item(s) in which the ML model has a low confidence, the ML model may receive reviews for increasing an accuracy and quality of the ML model. Herein, the human reviewer may avoid annotating, correcting, or labeling those item(s) that the ML model is confident in, or has predicted with high certainty, to save costs and time of the human reviewer. Accordingly, when new images are inferred, for example, the most up-to-date threshold may determine if human review is needed. In some instances, the reviews performed by the reviewer may be used to update the confidence associated with the ML model. For example, if the results provided by the reviewer match the results (or prediction) of the ML model(s), the confidence of the ML model(s) may be increased. Such an increase may represent the accuracy of the ML model(s), respectively. Moreover, in some instances, the reviews may be performed by multiple reviewer(s). For example, multiple reviewers may review the same item(s) and/or content, or multiple reviewers may be asked whether the content contains certain item(s), subject matter, and so forth. Based on an agreement and consistency over time, or whether the reviewers agree (e.g., reviews indicating the same results), the process300may determine the accuracy of certain reviewers. This accuracy, or results of the reviewers, may be used to generate model(s) indicative of the accuracy of the reviewer. 
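Agreement between multiple reviewers labeling the same item(s), used above to gauge reviewer accuracy, might be measured as a simple match rate. This is a sketch; a real system could instead use a chance-corrected statistic such as Cohen's kappa:

```python
def agreement_rate(reviews_a, reviews_b):
    """Fraction of items on which two reviewers gave the same label."""
    if len(reviews_a) != len(reviews_b):
        raise ValueError("reviewers must label the same items")
    matches = sum(1 for a, b in zip(reviews_a, reviews_b) if a == b)
    return matches / len(reviews_a)

# Two hypothetical reviewers agree on three of four items.
rate = agreement_rate(["cat", "dog", "dog", "cat"],
                      ["cat", "dog", "cat", "cat"])
```

Tracked over time, a rate like this could feed the model(s) of reviewer accuracy described above.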
The similarity between reviewers and/or the accuracy of the reviewers may be used to determine a confidence of the ML model(s) and/or the confidence of the results of the ML model(s). FIG.4illustrates an example process400associated with auditing or inspecting the quality of the outputs of ML model(s). At402, the process400may analyze content using a ML model. For example, a user may request that content be analyzed to identify birthdays. In some instances, the ML model may be previously trained to identify birthdays in the content (e.g., pamphlets, forms, PDFs, etc.) of the user, or as provided by the user. At404, the process400may determine first predicted item(s) within the content satisfying a threshold confidence. For example, in analyzing the content, the ML model(s) may determine first item(s) or fields of interest within the content corresponding to birthdays. To locate or otherwise determine that the fields of interest correspond to birthdays, one or more ML model(s) may be utilized. In some instances, the first item(s) as determined by the ML model may have a confidence that satisfies a threshold confidence. That is, the ML model may confidently determine, above the threshold confidence, that the first item(s) are birthdays. At406, the process400may select one or more of the first predicted item(s) for review. For example, despite the ML model(s) having a confidence that the first predicted item(s) correspond to birthdays, the process400may select one or more of the first predicted item(s) for review to ensure a quality or otherwise audit the ML model. Such process may therefore attempt to confirm the accuracy of the ML model or that the first predicted item(s) of the ML models are actually birthdays. From406, the process400may proceed to408whereby the one or more first predicted item(s) may be output for review by one or more reviewer(s). 
The review may verify, deny, or adjust the one or more first predicted item(s) as corresponding to birthdays, for example. Additionally, or alternatively, from406the process400may proceed to410whereby the process400may determine second predicted fields of interest or item(s) within the content not satisfying the threshold confidence. For example, in analyzing the content, the ML model may be unsure whether one or more item(s) within the content are birthdays. Such item(s) may be recognized, but the ML model may not be confident enough that the item(s) are birthdays. Additionally, the second predicted item(s) may be ambiguous items that are unable to be discerned by the ML model(s). At412, the process400may select one or more of the second predicted item(s) for review. For example, as the ML model does not have a confidence that the second predicted item(s) correspond to birthdays, the process400may select one or more of the second predicted item(s) for review to confirm that the one or more second predicted item(s) are not birthdays or adjust (e.g., label) the one or more second predicted item(s) as birthdays. Such process may therefore attempt to confirm the accuracy of the ML model (e.g., that the second predicted item(s) are not birthdays) or that the second predicted item(s) of the ML models are actually birthdays. From412, the process400may proceed to408whereby the one or more second predicted item(s) are output for review. Accordingly, at408, the process400may receive, in some instances, both the one or more first predicted item(s) and/or the one or more second predicted item(s) for use in confirming the accuracy of the ML model or updating the accuracy of the ML model through retraining. For example, some percentage of all the content (e.g., five, ten, etc.) may be sent for review without condition(s). There, the reviewers may be invoked to confirm that the ML model(s) are accurately predicting the objects to prevent data drift. 
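Auditing some percentage of all predictions without condition(s), as described above, might be sketched with a seeded random sample. The ten percent figure, function name, and use of a seeded generator are illustrative assumptions:

```python
import random

def sample_for_audit(predictions, percentage=10, seed=0):
    """Select roughly `percentage` percent of predictions for human
    review regardless of confidence, to guard against data drift."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    k = max(1, len(predictions) * percentage // 100)
    return rng.sample(predictions, k)

predictions = [f"item-{i}" for i in range(100)]
audited = sample_for_audit(predictions, percentage=10)
# Ten of the one hundred predictions are routed to reviewers, whether
# or not they satisfied the confidence threshold.
```

Because the sample ignores confidence, it covers both the high-confidence and low-confidence cases the audit component is described as checking.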
In some instances, the audit to be performed by the reviewer may include asking the reviewer to confirm the object, or may open-endedly ask the reviewer to label or annotate objects. Additionally, as part of auditing the ML model(s), the results of the ML model(s) may be compared between reviewers. For example, the review of a first reviewer may be compared against the review of a second reviewer. Here, in instances where the ML model(s) perform multiple task(s), or multiple ML models are used to perform the task(s), the results of each ML model, respectively, may be checked for accuracy during the audit. By checking the accuracy of the ML model(s) between tasks and assigning the review(s) to multiple reviewers, the accuracy of the ML model(s) may be increased. Additionally, in some instances, the content may be audited based on the confidences satisfying or not satisfying the threshold. For example, in instances where item(s) are unable to be located within the content, the content may otherwise be checked to determine whether the content contains the item(s) or to confirm that the content does not contain the item(s). A random sampling of content may be supplied for auditing to ensure the accuracy of the ML model(s). That is, even in instances where the ML model(s) do not predict or locate the item(s) within the content, the content may be output for review by the reviewer. FIGS.5A and5Billustrate an example process500for determining conditions associated with reviewing content and determining instances to review fields of interest within the content. At502, the process500may receive a request associated with searching for a field of interest within content. For example, a user may provide or submit a request associated with searching or reviewing content to determine potential fields of interest. In some instances, the field of interest may include determining whether the content contains particular words, phrases, images, objects, characters, and so forth. 
By way of one example, the request may represent a request to identify stop signs within images. In some instances, the request may be provided by a user requesting the search associated with the field of interest. Users may, for example, input or enter the request utilizing a DSL for searching content of the user. At504, the process500may determine one or more conditions associated with the request for searching for the field of interest. For example, as part of processing the request, the process500may determine conditions pertaining to the search. The conditions may, in some instances, be supplied by the user issuing the request. For example, the user may input a condition for stop signs to be accurately identified 95 percent of the time within the content. In some instances, this accuracy may be associated with which ML models the process500uses to search the content and/or the workflows associated with searching the content for the field of interest. For example, users may specify and/or limit the amount of human interaction or review of the content based on the provided condition(s). Conditions may also specify characteristics of the outputs of the ML model predictions and/or what is ultimately presented to the user after the search is conducted. At506, the process500may search the content for the field of interest using a ML model(s). For example, the ML model(s) may utilize various forms of text extraction, content recognition, box bounding, semantic segmentation, etc. for analyzing the content. In some instances, the content may include, or represent, various forms of content or documents including images, text, tables, equations, and so forth. Additionally, or alternatively, the content may represent an assembly of content (e.g., multiple images) or individual images stored in separated locations. 
Continuing with the above example, the ML model(s) may analyze various images to determine whether any of the images contain representations or depictions of stop signs. As discussed above, the ML model(s) may be previously trained and configured to analyze the content to recognize the field of interest. In some instances, each ML model may correspond to, or be trained to recognize, objects, phrases, words, and so forth within the content. Identifying the field(s) of interest may also be determined using multiple ML model(s), whereby a first ML model may identify the field of interest and a second ML model may determine content within the field of interest. For example, at508, the process500may determine item(s) within the content that are associated with the field of interest. In searching the content, the ML model(s) may identify items within the content as corresponding to the field of interest. In this sense, the ML model(s) may predict areas, or item(s), within the content as being associated with or corresponding to the field of interest. The ML model(s) may identify area(s) within the images or item(s) within the image that the ML model(s) determined correspond to the field of interest. At510, the process500may determine a confidence associated with the item(s). For example, after recognizing or predicting the item(s) as corresponding to the field of interest, the process500may determine an associated confidence of the determination. The confidence may represent, in some instances, how confident the ML model(s) is/are that the item(s) correspond to the field of interest. For example, the item(s) as predicted by the ML model(s) as corresponding to stop signs may be associated with a confidence (e.g., 80 percent confident the item is a stop sign, 90 percent confident the item is a stop sign, and so forth). As discussed above, the confidence of the ML model(s) may be determined via the ML model(s) being trained from a dataset to recognize stop signs. 
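One common way a per-item confidence like those above is derived, assumed here for illustration since the disclosure does not specify the mechanism, is to take the maximum class probability from a softmax over the model's raw scores:

```python
import math

def softmax_confidence(scores):
    """Confidence as the maximum probability from a softmax over raw
    class scores (a common convention, assumed here for illustration).
    Subtracting max(scores) keeps the exponentials numerically stable."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return max(exps) / total

# Hypothetical raw scores where the "stop sign" class dominates; the
# resulting confidence is roughly 0.73.
conf = softmax_confidence([2.0, 0.5, 0.1])
```

The resulting value between 0 and 1 can then be compared against the threshold(s) of the condition(s), as at512below.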
Each of the ML model(s) may therefore include a corresponding confidence that represents an accuracy of the ML model to identify the field(s) of interest. At512, the process500may determine whether the confidence of the item(s) satisfies the one or more condition(s). For example, the process500may determine whether the confidence is greater than a threshold, which may be set by the user at504. In some instances, the threshold may be determined using a calibration set and stateful calibrated non-adaptive or stateful calibrated adaptive techniques. The condition(s) may also indicate a range of confidences that trigger human review. For example, confidences between 0.25 and 0.7 may be sent for human review. Here, the confidence of the item(s) as determined at510may be compared against the threshold to determine whether the confidence is greater than, equal to, or less than the threshold. In some instances, if the confidence is greater than the threshold, the process500may determine that the item(s) represent or correspond to the field of interest. Alternatively, if the process500determines that the confidence is less than the threshold, the process500may be inconclusive about determining that the item(s) represent the fields of interest or may have low confidence that the item(s) represent the fields of interest. If at512, the process500determines that the confidence does not satisfy the one or more condition(s), the process500may follow the "NO" route and proceed to514. At514, the process500may not assign the item(s) and/or the content for review. For example, based at least in part on determining that the confidence does not satisfy the one or more condition(s) for invoking review, the process500may be confident that the item(s) represent or correspond to the fields of interest. In this sense, the search of the content may not satisfy the condition(s) for invoking human review of the content. 
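The check at512, where a condition may be a threshold or a range of confidences (such as the 0.25 to 0.7 range in the example above) that triggers human review, can be sketched as:

```python
def needs_human_review(confidence, low=0.25, high=0.7):
    """Return True when the confidence falls within the range that the
    condition(s) designate for human review (the bounds are the example
    values from the text, not fixed defaults of any real service)."""
    return low <= confidence <= high

# A confidence of 0.5 falls inside the review range; 0.9 exceeds it and
# the item is accepted without invoking a reviewer.
```

A confidence below the lower bound would likewise skip review under this sketch, on the assumption that the model is confidently rejecting the item rather than uncertain about it.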
For instance, the process500may be confident, above the threshold confidence, that the item(s) represent stop signs. Alternatively, if at512, the process500determines that the confidence satisfies the one or more conditions (e.g., the confidence is not greater than the threshold), the process500may follow the "YES" route and proceed to516. At516, the process500may assign the item(s) and/or the content for review. For example, based at least in part on determining that the confidence is not greater than the threshold, the process500may not be confident, or may not be sure, that the item(s) represent or correspond to the fields of interest. In this sense, the condition(s) associated with invoking human review may be satisfied. For instance, the process500may not be confident, above the threshold amount, that the item(s) represent stop signs. In some instances, the process500may flow to516in instances where the ML model is unable to identify objects or item(s) within the content. For example, the content may include an ambiguous item that the ML model(s) may be unable to discern or recognize. At518, the process500may transmit a first indication of the item(s) and/or the content for review. For example, the first indication may represent which item(s) in the content, or which areas of the content, the reviewer is to review. In some instances, the review may include the reviewer verifying that the item(s) is/are not the fields of interest or that the content does not contain the field of interest. For example, the reviewer may confirm that the item(s) is not a stop sign and/or that the content does not contain a stop sign. In some instances, additionally or alternatively, the reviewer may adjust labels associated with the items. For example, if the reviewer is prompted to confirm that the item(s) is a stop sign, but the item(s) is not actually a stop sign, the reviewer may instead label the item as a billboard or yield sign, for example. 
Here, this review may relabel or readjust the labels of the item(s). At 520, the process 500 may receive a second indication associated with the review of the item(s) and/or the content. For example, based on the review, the process 500 may receive information associated with the review and which indicates the review performed. Continuing with the above example, the second indication may indicate that the reviewer verified the item(s) as stop signs, confirmed that the item(s) were not stop signs, confirmed that no stop signs were present in the item(s) and/or content, adjusted a label of the item(s) that were labeled as stop signs, and so forth. From 520, the process 500 may proceed to "B" as discussed in FIG. 5B. As shown in FIG. 5B, from "B" the process 500 may proceed to 522. At 522, the process 500 may determine the result of the review. For example, the process 500 may determine whether the reviewer confirmed the item(s), adjusted a label of the item(s), and so forth. That is, at 522, the process 500 may determine whether the reviewer confirmed that the item(s) and/or the content contained stop signs. At 524, the process 500 may determine whether the result of the review is different than the prediction of the item(s) within the content associated with the field of interest. For example, the process 500 may predict that the item(s) are stop signs but the review may indicate that the item(s) are not stop signs. Additionally, the reviewer may identify a stop sign within the content that was unidentified by the ML model(s) during the search of the content. Accordingly, the process 500 at 524 may compare the predictions or the results of the ML model(s) with the review. If at 524 the process 500 determines that the result is different than the predicted item(s), the process 500 may follow the "YES" route and proceed to 526. At 526, the process 500 may retrain the ML model(s) using the result of the review.
For example, the result may be utilized to indicate to the ML model(s) that certain item(s) within the content were unidentified by the ML model(s) during the search of the content. The ML model(s) may therefore be retrained to identify, in future instances, the item(s) at increased accuracies. That is, using the result of the review, or the portions of the content containing the item(s), the ML model(s) may be retrained to more accurately identify the item(s) in future instances. For example, the review may indicate a stop sign within the content and the ML model(s) may be retrained based on identification of the stop sign within the content. Alternatively, if at 524 the process 500 determines that the result is not different than the item(s), the process 500 may follow the "NO" route and proceed to 528. At 528, the process 500 may update a confidence threshold of the ML model(s). For example, the ML model(s) may determine the predicted item(s) and the review may indicate that the ML model(s) correctly identified the item(s). In this sense, the review may confirm the result of the ML model(s). In such instances, the confidence threshold of the ML model(s), or the confidence of the ML model(s) to identify the item(s), may be increased. By increasing the confidence of the ML model(s), the confidence associated with the model(s) correctly identifying the item(s) within the content may be correspondingly increased. Although the process 500 is discussed above with regard to searching for a single field of interest within the content, in some instances, the process 500 may search for multiple fields of interest within the content. For example, in addition to identifying stop signs within the content, the process 500 may simultaneously search the content for other items, such as street signs or cars. In such instances, the process 500 may utilize one or more additional ML model(s) to identify the other fields of interest.
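The feedback branch at 524/526/528 can be sketched as a small function: disagreements between the prediction and the review are queued for retraining, while agreement nudges the model's confidence upward. The dictionary fields and the 0.01 increment are illustrative assumptions, not values from the source.

```python
def apply_review_feedback(predicted_labels, reviewed_labels, model_state):
    """Fold a human review back into the model state.

    If the review disagrees with any prediction, the disagreeing examples
    are queued for retraining (526); if the review confirms every
    prediction, the model's confidence is increased (528).
    """
    disagreements = [
        (pred, truth)
        for pred, truth in zip(predicted_labels, reviewed_labels)
        if pred != truth
    ]
    if disagreements:
        model_state["retraining_queue"].extend(disagreements)
    else:
        # Review confirmed the predictions: raise the confidence, capped at 1.
        model_state["confidence"] = min(1.0, model_state["confidence"] + 0.01)
    return model_state
```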
Accordingly, the process 500 may perform several searches in parallel to identify fields of interest. Furthermore, although the process 500 is discussed and mentioned with regard to searching content, such as images, for fields of interest, the process 500 may search other content as well. For example, envision that a user wants to search invoices for company names. The process 500 may search the content to identify the key (e.g., company name) and return corresponding values (e.g., Company A, Company B, and so forth). Therein, the process 500 may surface item(s) for review if the ML model(s) that identify the key value pairs have a confidence lower than a certain threshold, or other user-defined criteria or conditions. Such items may then be sent for review to confirm or correct the predictions of the ML model(s). FIG. 6 illustrates an example process 600 for predicting outputs using workflows as applied to the ML models and/or human reviews. In some instances, the workflows may represent a series of steps or operations that the ML models and human reviews are collectively, or individually, configured to perform. At 602, the process 600 may receive one or more conditions associated with reviewing content. For example, a user may input instructions, criteria, parameters, or other conditions associated with reviewing the content. By way of example, the conditions may include predicting outputs at 95 percent confidence. For example, if the confidences of the predicted outputs as determined by the ML models are less than 95 percent, the user may request additional review by human reviewers. In some instances, the user may input or define the conditions using a DSL to allow the user to script the conditions. These conditions are then combined with logic utilized by the ML model to express a semantic meaning to indicate when human review is warranted (e.g., when the user desires human review if under a certain confidence).
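A user-scripted condition of the kind described above might be expressed in a tiny DSL-like form such as "confidence < 0.95". The following sketch is an assumption about what such scripting could look like; the expression grammar, field names, and operators are illustrative and not taken from the source.

```python
def parse_condition(expr):
    """Parse a condition such as 'confidence < 0.95' into a predicate.

    The expression is three whitespace-separated tokens: a field name,
    a comparison operator, and a numeric threshold.
    """
    field, op, value = expr.split()
    value = float(value)
    ops = {
        "<": lambda a, b: a < b,
        ">": lambda a, b: a > b,
        "<=": lambda a, b: a <= b,
        ">=": lambda a, b: a >= b,
    }
    # The returned predicate takes a prediction dict and says whether
    # the condition (e.g., "send for human review") is met.
    return lambda prediction: ops[op](prediction[field], value)

needs_review = parse_condition("confidence < 0.95")
```

With this predicate, a prediction whose confidence is 0.90 would trigger human review, while a 0.97 prediction would not.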
At 604, the process 600 may determine a type of review associated with reviewing the content and/or a type of task. For example, the user may request that certain key value pairs be identified within the content. Here, the process 600, upon knowing the type of review, may select corresponding ML model(s) to perform the review and/or tasks performable by the ML model(s). For example, if the user wants to review content that contains email addresses, or locate email addresses within the content, the process 600 may select ML model(s) trained for detecting or searching for email addresses within the content. Additionally, or alternatively, the ML model(s) may be specific or trained to detect the email addresses within various forms of content. For example, the ML model(s) may be specific to detecting email addresses within text and/or images. At 606, the process 600 may determine a workflow associated with reviewing the content. For example, knowing the one or more conditions as specified by the user, and/or a task (or review) requested by the user, the process 600 may determine operations or a workflow for reviewing the content. In some instances, the workflow may represent a series of steps performable by the ML model(s) and/or human reviewers, respectively. For example, depending on the content to be reviewed or the type of review, workflows may be different and/or a different order of operations between the ML model(s) and human reviewers may be invoked. By way of example, a workflow associated with reviewing content to identify email addresses may be different than a workflow associated with reviewing content to identify mailing addresses or to perform object recognition in text or images. In some instances, and as noted above, the workflow may identify operations performed by ML model(s) and operations performed by human reviewers.
For example, a workflow may specify that the ML model(s) and the human reviewers are to both confirm the presence of an email address in a particular piece of content. Additionally, or alternatively, the workflow may specify that certain predictions are to be checked or confirmed by human reviewers and/or that conclusions of the human reviewers are to be checked or confirmed by ML model(s). In some instances, the workflow may include any order, or different combination, of human reviewers confirming the predictions of the ML model(s) and/or the ML model(s) confirming the results of the human reviewers. By way of another example, for image classification, both the predicted output of the ML model(s) and the review of the human may have to indicate that the image contains a fox before the image is classified as containing a fox. In this sense, and as noted above, each ML model may be trained on datasets and proven workflows corresponding to their associated reviews, tasks, or function. At 608, the process 600 may review the content based at least in part on the workflow. For example, using the workflow, the content may be reviewed to determine whether the content or item(s) within the content satisfy the one or more conditions. Continuing with the above example, the process 600 may analyze the content to determine the presence and location of email address(es), if any, within the content. FIG. 7 illustrates additional details of the operation 606 of FIG. 6 and the process 600 for determining a workflow associated with reviewing content. As shown, the workflow 606 may include or be associated with a process 700. In some instances, the workflow 606 may include a first operation 702. For example, the first operation 702 may include determining whether content contains explicit material. In some instances, determining whether the content contains explicit material may include utilizing image classification, bounding boxes, semantic segmentation, or text extraction via one or more ML model(s).
For example, if the content contains explicit material, a bounding box may be drawn around the area(s) within the content containing explicit material. Such flagging, or identification of explicit material, may be utilized when screening or posting the content to forums, websites, blogs, or other forms of social media. For example, social media sites may include policies that limit the use or presentation of explicit material. If the first operation 702 does not recognize or determine that the content contains explicit material, then bounding boxes may not be drawn around areas within the content. In some instances, the first operation 702 may be performed by a human or one or more ML model(s). After performing the first operation 702, the process 700 may include determining a first confidence 704 associated with the first operation 702. For example, the ML model(s) may determine a confidence that the content does not include or contain explicit material. In some instances, if a reviewer performs the first operation, the input or answer to the first operation 702 may be treated as the ground truth that the content does not contain explicit material. At 706, the process 700 may determine whether the first confidence is greater than a first threshold. For example, the process 700 may compare the first confidence with the first threshold to determine whether the first confidence is greater than or less than the first threshold. In some instances, the first threshold may be set, or determined, by the user requesting the review, or may be a default and/or continuously trained threshold associated with the workflow. If at 706 the process 700 determines that the first confidence 704 is not greater than the first threshold, the process 700 may follow the "NO" route and proceed to 708. For example, the ML model may output a first confidence 704 of 85 percent that the content does not contain explicit material.
However, the first threshold may include a confidence of 95 percent, meaning that if the first confidence 704 is not above the first threshold, the process 700 is not confident enough that the content does not contain explicit material. Accordingly, at 708 the content may be transmitted for review. In some instances, the review may flag or identify those portions or areas within the content that include the first confidence 704 that is less than the first threshold. Such indications may serve to reduce an amount of review time or pinpoint the review to a specific area of the content. In some instances, the area or the content may be accentuated for ease in locating. In some instances, the review at 708 may be conducted by one or more additional ML model(s) and/or human reviewers. At 710, the process 700 may receive a first review of the content. In some instances, the first review may include a verification of the first operation 702 or a predicted output of the first operation 702. Alternatively, the first review may include an adjustment of the first operation 702 or the predicted output of the first operation. For example, the review may deselect or remove a bounding box around an area of the content as determined by the first operation 702 as corresponding to explicit content. Additionally, or alternatively, the review may identify a missed area within the content that contains explicit material. Such verification and/or adjustment may be used to update the accuracies and confidence thresholds associated with the first operation 702. For example, if the ML model accurately determines that the content contains explicit material, the accuracy of the model may be updated. Alternatively, if the ML model does not accurately identify the content, the ML model may be retrained. For example, the first review may be performed by a human reviewer and the results of the human review may be utilized by the process 700 to retrain the ML model(s).
After 710, the process 700 may proceed to a second operation 712 that is associated with the workflow. The second operation 712 is discussed in detail herein. At 706, if the process 700 determines that the first confidence 704 is greater than the first threshold, the process 700 may follow the "YES" route and proceed to the second operation 712. Here, determining that the first confidence 704 is greater than the first threshold may indicate that the confidence in the first operation 702, or the predicted output of the first operation 702, is greater than the first threshold. For example, the ML model may be 98 percent confident that the content does not contain explicit material, which is greater than the first threshold of 95 percent. The second operation 712 may include determining whether all of the explicit material within the content is identified or within a bounding box. In some instances, the second operation 712 may include different techniques for determining whether all of the explicit material within the content has been identified (e.g., image classification, bounding boxes, semantic segmentation, or text extraction). Additionally, or alternatively, in instances where the first operation 702 is performed by an ML model, the second operation 712 may be performed by a human reviewer or a different ML model. Regardless, the second operation 712 may further serve to identify explicit material within the content or otherwise confirm or correct the results of the first operation 702. For example, the second operation 712 may determine that all the explicit material within the content includes a bounding box or that not all explicit material includes bounding boxes. For the latter, the process 700 may draw a bounding box around the area(s) within the content containing explicit material. After performing the second operation 712, the process 700 may include determining a second confidence 714 associated with the second operation 712.
For example, the ML model(s) or the reviewer may determine a confidence that the content does not include or contain explicit material. At 716, the process 700 may determine whether the second confidence is greater than a second threshold. In some instances, the second threshold may be greater than, equal to, or less than the first threshold. For example, the process 700 may compare the second confidence with the second threshold to determine whether the second confidence is greater than or less than the second threshold. In some instances, the second threshold may be set, or determined, by the user requesting the review, or may be a default and/or continuously trained threshold associated with the workflow. If at 716 the process 700 determines that the second confidence 714 is not greater than the second threshold, the process 700 may follow the "NO" route and proceed to 718. For example, the ML model may output a second confidence 714 of 90 percent that the content does not contain explicit material. However, the second threshold may include a confidence of 93 percent, meaning that if the second confidence 714 is not above the second threshold, the process 700 is not confident enough that the content does not contain explicit material. Accordingly, at 718 the content may be transmitted for review. In some instances, the review may flag or identify those portions or areas within the content that include the second confidence 714 that is less than the second threshold. Such indications may serve to reduce an amount of review time or pinpoint the review to a specific area of the content. In some instances, the area or the content may be accentuated for ease in locating. In some instances, the review at 718 may be conducted by one or more additional ML model(s) and/or human reviewers. At 720, the process 700 may receive a second review of the content.
In some instances, the second review may include a verification of the second operation at 712 or a predicted output of the second operation at 712. Alternatively, the second review may include an adjustment of the second operation at 712 or the predicted output of the second operation. For example, the second review may deselect or remove a bounding box around an area of the content as determined by the second operation at 712 as corresponding to explicit content. Additionally, or alternatively, the second review may identify a missed area within the content that contains explicit material. Such verification and/or adjustment may be used to update the accuracies and confidence thresholds associated with the second operation. For example, if the ML model accurately determines that the content contains explicit material, the accuracy of the model may be updated. Alternatively, if the ML model does not accurately identify the content, the ML model may be retrained. For example, the second review may be performed by a human reviewer and the results of the human review may be utilized by the process to retrain the ML model(s). After 720, the process 700 may proceed to an nth operation 722 that is associated with the workflow. Further, if the process 700 determines that the second confidence at 714 is greater than the second threshold, the process 700 may follow the "YES" route and proceed to the nth operation at 722. Here, determining that the second confidence is greater than the second threshold may indicate that the confidence in the second operation at 712, or the predicted output of the second operation at 712, is greater than the second threshold. In some instances, the nth operation may include additional operations for determining whether all of the content associated with the one or more conditions has been identified. For example, the process 700 may determine whether all of the explicit material within the content has been identified.
From 722, the process 700 may proceed to 724 to determine an nth confidence associated with the nth operation. Therein, at 726, the process 700 may determine whether the nth confidence is greater than an nth threshold for potentially invoking one or more additional operations or reviews. Alternatively, the process 700, after determining that the second confidence is greater than the second threshold, may end and conclude that the content does not contain any items corresponding to the one or more conditions. For example, after satisfying the second threshold, the process 700 may terminate and conclude that the content does not contain explicit material. In some instances, the process 700 may also terminate after 726. In some instances, FIG. 7 and the process 700 may illustrate a scenario whereby confidences are determined between each stage or operation within an overall workflow. Determining the confidences between each stage may serve as a source for error checking and retraining the ML models. For example, if the process 700 frequently (or over a predetermined amount of time) determines that the first confidence is less than the first threshold, the process 700 may retrain the ML model(s), update a training dataset, invoke human reviewers, and so forth. The quality or accuracy of the workflow may therefore be monitored and updated. Furthermore, as shown, the predictions or results of the operations in the process 700 may flow or continue to subsequent operations for further analysis or review. Herein, the process 700 may route information between the operations and ensure data compatibility between each operation of the process 700. In doing so, the predictions and/or outputs of the operations may be checked for quality before being passed on to subsequent operations in the process 700. Accordingly, the multi-step process as illustrated in FIG. 7 may check an agreement at each step. In doing so, more data may be collected before moving on or proceeding to subsequent operations.
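The staged workflow of FIG. 7, with a confidence gate between each operation, can be sketched as a loop over (operation, threshold) pairs. This is a simplified illustration under assumed interfaces: each operation here is a callable returning a result and a confidence, and stages whose confidence does not clear their threshold are merely logged for human review rather than dispatched to one.

```python
def run_workflow(content, stages):
    """Run a sequence of gated stages over a piece of content.

    `stages` is a list of (name, operation, threshold) tuples. Each
    operation returns (result, confidence); when the confidence does not
    exceed the stage's threshold, the content is flagged for review at
    that stage, and the result still flows on to the next stage.
    """
    review_log = []
    for name, operation, threshold in stages:
        result, confidence = operation(content)
        if confidence <= threshold:
            review_log.append(name)  # stage-level gate: send for review
        content = result             # predictions flow to the next stage
    return content, review_log
```

For example, a first stage at 98 percent confidence against a 95 percent threshold would pass, while a second stage at 90 percent against a 93 percent threshold would be flagged, mirroring the 706/716 branches above.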
Between each step, the results (or predictions of the human reviews and/or ML model(s)) may be compared to determine variances. This comparison may lead to higher quality ML model outputs. FIG. 8 illustrates an example process 800 for updating thresholds for reviewing content. At 802, the process 800 may receive a request for reviewing content. For example, the user may submit a request for reviewing content. In some instances, the request may include the content to be reviewed and/or the conditions associated with reviewing the content (e.g., confidence thresholds). At 804, the process 800 may review the content using a first machine learning (ML) model. For example, the first ML model may be trained to identify field(s) of interest (e.g., objects, key value pairs, etc.) corresponding to the request of the user. Therefore, using the first ML model, the process 800 may review the content based on the request of the user. At 806, the process 800 may determine a first confidence associated with the predicted output(s) of the first ML model. For example, in searching the content, the first ML model may have a first confidence score associated with fields of interest that correspond to the request of the user. By way of example, if the user requests the content review service 106 to label and/or identify stop signs within an image, the first confidence may represent a confidence of the first ML model identifying an object within the image as a stop sign. In this sense, the first confidence represents a confidence of the result, or predicted output, of the first ML model. For example, the first ML model may be 98 percent confident that an image contains a stop sign. At 808, the process 800 may determine whether the first confidence is greater than a second confidence. For example, at 808, the process 800 may determine whether the first confidence is trustworthy.
Comparing the first confidence against the second confidence may attempt to verify that the result or predicted output of the first ML model is accurate. In doing so, the process 800 may compare the first confidence against the second confidence to decide, or determine, whether the first confidence is above or below the second confidence (e.g., threshold) for use in determining whether to request a review of the content. To determine the second confidence, the process 800 may utilize a calibration set for a second ML model. For example, as illustrated, at 810 the process 800 may determine a calibration set for the second ML model. The calibration set used to train the second ML model may include random samplings of content or content that has been identified with high confidences. In other instances, the calibration set may include content labeled by human reviewers. The calibration set may therefore be utilized to train the second ML model to identify, search, or review particular field(s) of interest or content. At 812, the process 800 may determine the second confidence associated with the accuracy of the first ML model. For example, through analyzing the calibration set, the process 800 may determine the second confidence associated with the accuracy of the first ML model. This second confidence may continuously or dynamically update based on the calibration set. In this sense, the second ML model may determine a confidence threshold (e.g., the second confidence) utilized when checking the first confidence, and for use in determining whether to trust the first confidence of the first ML model. For example, even though the first ML model may be 98 percent confident that the image contains the stop sign, the predicted outputs of the first ML model may not be accurate. Hence, by comparing the first confidence with a second confidence that is trained via a calibration set, the results of the first ML model may be checked prior to submitting the content for review.
For example, the process 800 may determine that the first ML model is accurate 60 percent of the time, and may determine whether results of the first ML model are trustworthy or above a certain confidence level. If at 808 the process 800 determines that the first confidence is greater than the second confidence, the process 800 may follow the "YES" route and proceed to 814. At 814, the process may determine to not transmit the content for review. For example, the process 800, from 808, may determine that the prediction of the first ML model is above the second confidence and that the output of the first ML model is trustworthy. Conversely, if at 808 the process 800 determines that the first confidence is not greater than the second confidence, the process 800 may follow the "NO" route and proceed to 816. At 816, the process 800 may transmit the content for review by one or more reviewer(s). At 818, the process 800 may receive results of the review(s). For example, the process 800 may receive indications confirming or adjusting the results of the predicted outputs of the first ML model. The indications, for example, may indicate that one or more stop signs were identified in the image that were not detected by the first ML model, may confirm that the first ML model accurately identified the stop signs, and so forth. Based on the review(s), the results or the content may be included within the calibration set for use in determining the second confidence. Accordingly, the review(s) of the one or more reviewer(s) may be used to update the confidence of the first ML model accurately predicting field(s) of interest. In some instances, the process 800 may illustrate a stateful calibrated adaptive threshold technique whereby the calibration set evolves over time. Such a scenario may be useful for large compilations of data in order to use more recent (or otherwise relevant) information for calibrating the threshold.
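Deriving a confidence threshold from a calibration set, in the spirit of the stateful calibrated techniques described here, can be sketched as a scan over candidate thresholds: pick the smallest confidence at which the model's agreement with human labels on the calibration set reaches the user's expected accuracy. The tuple layout of the calibration records is an assumption made for illustration.

```python
def find_confidence_threshold(calibration, target_accuracy):
    """Find the smallest confidence threshold at which predictions that
    clear the threshold agree with human labels at the target accuracy.

    `calibration` is a list of (confidence, model_label, human_label)
    tuples; in the non-adaptive variant this set is fixed, while in the
    adaptive variant it would be refreshed as reviews come in.
    """
    candidates = sorted({conf for conf, _, _ in calibration})
    for threshold in candidates:
        kept = [(m, h) for conf, m, h in calibration if conf >= threshold]
        if not kept:
            break
        accuracy = sum(m == h for m, h in kept) / len(kept)
        if accuracy >= target_accuracy:
            return threshold
    return None  # no threshold reaches the requested accuracy
```

For example, if low-confidence predictions tend to disagree with human labelers, raising the threshold excludes them and the measured accuracy of the retained predictions rises until the target is met.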
However, the thresholding techniques discussed herein may find use in other techniques as well, such as stateful calibrated non-adaptive. In this example, users may provide the expected accuracy threshold of the ML model predicted output against human labelers, and the process may automatically find the confidence threshold. To find the confidence threshold, a calibration set may be determined, and in the non-adaptive scenario, the calibration set does not change over time. FIG. 9 illustrates a user interface 900 for creating a review. In some instances, the user interface 900 may be presented on a device of a user as the user requests a review from the content review service 106 (e.g., the user device 108). As discussed, utilizing the user interface(s), the user may define the condition(s) and/or criteria associated with creating a review. The user interface 900 is shown within which a user may insert or select criteria associated with reviewing content. Within the user interface 900, the user may define a name 902 of the review, as well as a location 904 where the content is located. The user may also select a task (e.g., the task(s) 136) associated with the review. For example, as illustrated, the user may select a task associated with key value pair extraction, a task associated with image recognition, a task associated with machine learning models, and/or a custom task. As discussed hereinabove, the task associated with key value pair extraction may involve the content review service searching or analyzing the content for key value pairs. In some instances, the user may further define specific key value pairs the user would like to search for within the content (e.g., employee names, company name, etc.). In some instances, the user may define confidences associated with the individual or particular key value pairs. Otherwise, the user may simply request that key value pairs be reviewed, determined, or extracted from the content.
The image recognition tasks may include a review of the content to identify certain subject matter, such as explicit content. For example, the image recognition task may distinguish people in swimwear from nudity. The user may also define custom tasks as well. As shown, the user has selected the key value pair extraction task. In doing so, the content review service 106 may be configured to identify key value pairs within the content. In some instances, these key value pairs may be defined or limited by the user, or the content review service 106 may search the content for any key value pairs. Additionally, upon selecting the task, the user may define condition(s) (e.g., the condition(s) 116) associated with the review of the content. For example, the user may include condition(s) associated with when key value pairs are sent for human review. For example, an identification value 906 may represent a confidence score for deciding if two identified fields have a key value relationship. That is, in the review of the content, if the confidence that two fields (e.g., the key and the value) have a key value relationship is below the identification value 906, the two fields, or the pair, may be sent for review. In some instances, the user may insert a value between 0 and 100 for the identification value 906. The user may also select a quality value 908, which represents a confidence score for the text within the fields of the key value pairs. That is, the quality value 908 represents a confidence in the text within the fields as identified as a key value pair. In some instances, the user may insert a value between 0 and 100 for the quality value 908. By way of example, envision that the user would like to extract employee names from the content.
Here, the identification value 906 would represent the confidence whether the fields identified are associated with, include, or represent the names of the employees, and the quality value 908 would represent the confidence in the words of the fields (e.g., confidence in the key "word," such as a field "employee name" within the content, and confidence in the value "word," e.g., "John Doe" within the content). In some instances, the confidence around these words may be determined and, if any of the words has a confidence lower than a threshold, the review may be triggered. That is, if the content review service 106 is less than 90 percent confident that the fields are a key value pair and/or that the words within the fields are a key value pair, then the content may be triggered for human review. However, as noted above, the user may specify other condition(s) for when human review is triggered. Additionally, or alternatively, if the average confidence or summation of the confidences is lower than a threshold, the human review may be triggered. The user may also select a random sampling 910 of the content for human review. For example, the random sampling 910, or an audit of the results of the review, may represent a random sampling of determined key value pairs that have a confidence above and/or below the identification value 906 and/or above and/or below the quality value 908. This random sampling 910 may ensure a quality of the content review service and that the ML models are accurate. In some instances, the user may input a value between 0 and 100 for the random sampling 910. Although the user interface 900 is shown including certain material or content, additional fields may be presented to the user. Additionally, or alternatively, multiple user interfaces may be presented. Through the series of multiple user interfaces, the user may define the conditions and/or the criteria associated with the review.
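The identification-value and quality-value triggers described above can be sketched as a single predicate over an extracted pair. The dictionary keys, the 0.90 defaults, and the per-word "any below threshold" rule are assumptions for illustration (the averaging and summation variants mentioned above would be straightforward substitutions).

```python
def should_review_pair(pair, identification_value=0.90, quality_value=0.90):
    """Decide whether an extracted key value pair is sent for human review.

    `pair` holds the confidence that the two fields have a key value
    relationship ('identification') and per-word text confidences
    ('word_confidences'). Review is triggered when either score falls
    below its user-set value.
    """
    if pair["identification"] < identification_value:
        return True   # not confident the two fields are a key value pair
    if any(conf < quality_value for conf in pair["word_confidences"]):
        return True   # not confident in the text of at least one word
    return False
```

For instance, a pair whose relationship confidence is 0.85 would be sent for review even if every word was read with high confidence, and vice versa.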
For example, the user may select among templates (e.g., the template(s)162) when creating the review. In some instances, the user may create their own custom templates that the reviewers use for reviewing the content. Users may also input instructions for the reviewers during the review of their tasks. For example, the user may request that the reviewers review the key value pairs and correct them if they do not match the provided content. The users may also select the types of reviewers that are assigned for reviewing the content. For example, users may select between reviewers of the content review service106, private reviewers the user has sourced, and/or third-party reviewers contracted or associated with the content review service106. In some instances, the user may also specify a price per task. Additionally, or alternatively, the content review service106may determine a price per task based on the provided condition(s). After selecting the conditions and specifying the criteria associated with the review, the user may create the task. Thereafter, a dashboard of the user's interface with the content review service106may be updated to indicate the newly created task. Additionally, after creation, the task may be assigned to reviewer(s) of the content review service106(or as otherwise chosen by the user during the creation of the task). FIGS.10-17illustrate a sequence of user interfaces for presenting reviews to a reviewer. In some instances, the sequence of user interfaces may be presented on a device of a reviewer. Utilizing the user interfaces, the reviewer may interact with the device to perform the review. Beginning withFIG.10, a user interface1000is shown. After the user has created the task, or the review, the task may show up on a dashboard of the reviewer. The dashboard, as shown in the user interface1000, may illustrate the tasks to be reviewed by the reviewer. For each reviewer, his or her dashboard may reflect those reviews to be completed.
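The selections gathered through these interfaces could plausibly be collected into a single task definition before the task is created. The record below is a sketch only; every field name and value is a hypothetical illustration of the conditions and criteria described above, not a documented API of the content review service.

```python
# Hypothetical shape of a review-task definition assembled from the
# user-interface selections described above; all field names are
# illustrative assumptions.
task = {
    "name": "invoice-key-value-review",
    "task_type": "key_value_pair_extraction",
    "conditions": {
        "identification_value": 90,  # pairing-confidence threshold (0-100)
        "quality_value": 90,         # text-confidence threshold (0-100)
        "random_sampling": 5,        # percent of confident results audited
    },
    "template": "default-key-value-template",
    "reviewer_pool": "private",      # service, private, or third-party
    "instructions": ("Review the key value pairs and correct them "
                     "if they do not match the provided content."),
    "price_per_task": 0.05,
}

def validate_task(task):
    """Minimal sanity check: each condition must be on the 0-100 scale."""
    c = task["conditions"]
    return all(0 <= c[k] <= 100 for k in
               ("identification_value", "quality_value", "random_sampling"))
```

Validating the thresholds at creation time reflects the constraint, repeated above, that the user inserts values between 0 and 100 for each condition.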
As shown, each review may include a name, the type of task to be completed (or reviewed), the status (e.g., for review, completed, in progress, and so forth), as well as a creation time and/or a completion date. The dashboard may also indicate when the reviews are to be completed by the reviewer (e.g., a deadline). In some instances, the reviews may be organized or sorted in their respective categories (e.g., status). Additionally, or alternatively, the reviews may be prioritized within the dashboard depending on the severity or time-sensitive nature of the review. For example, those reviews that are a priority or have been requested for prompt review may be presented in descending order on the dashboard. Upon selecting a review, the reviewer may review that review, as discussed herein. Accordingly, the dashboard may display the metrics or guidelines for the reviews, as well as the total number of completed and/or pending reviews for images, text, and/or video content. InFIG.11, the reviewer has selected one of the reviews for review. The user interface1100may include separate regions, such as a first portion1102that presents the content1104being reviewed by the reviewer (e.g., the content that the reviewer is requested or being requested to review). A second portion1106may include item(s) for review. For example, as discussed above, the content1104may be reviewed for certain fields of interest using one or more ML model(s). The results, or predictions, of the ML model(s) may be output for review based on confidence scores or other user-defined criteria. For example, the user may request to search the content1104to identify a company name. In searching the content1104for the key "company name" and like aliases (e.g., business, business name, corporation, etc.), values corresponding to the key may be determined, or contextual fields of interest that map together may be identified.
If the confidence that the key and the value are a pair is less than a threshold, the user interface1100may present these key value pairs for review in the second portion1106of the user interface1100. The second portion1106of the user interface1100indicates that the reviewer has four key value pairs for review, such as a first key value pair1108(1), a second key value pair1108(2), a third key value pair1108(3), and a fourth key value pair1108(4). In some instances, the key value pairs may be surfaced for review based on a confidence that the words are a key value pair. For example, the key value pairs presented in the second portion1106may include key value pairs determined to have a low confidence (e.g., are low-confidence key value pairs) and/or that satisfy condition(s) as specified by the user when searching the content. By way of example, for the first key value pair1108(1) for review, identifying the key may include searching the content1104for the key "company name." That is, the user may request that the content1104be searched to identify company names. Aliases of the key "company name" may also be searched (e.g., corporation, business, etc.). Here, the returned value for the first key value pair1108(1) may include "Allordable Lawn Care." As discussed herein, the reviewer may interact with the second portion1106for updating and/or adjusting the first key value pair1108(1). The user may specify other key value pairs for review, or which the ML model(s) have identified. In some instances, these additional key value pairs may be requested by the user or may be surfaced for review by the ML model(s). For example, the second key value pair1108(2) may indicate a key of "[email protected]" and a value of "589-802-2987." The key for the second key value pair1108(2) may represent an email address and the value of the second key value pair1108(2) may represent a phone number.
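The alias-aware key search described above can be sketched as scanning extracted fields for the requested key or any alias and keeping the best-scoring match. The naive case-insensitive string comparison here is an assumption made for illustration; the actual service would rely on learned models rather than exact label equality.

```python
def find_key_value(fields, key, aliases=()):
    """Scan extracted (label, value, confidence) fields for the requested
    key or any of its aliases, returning the highest-confidence match or
    None when neither the key nor an alias appears."""
    wanted = {key.lower(), *[a.lower() for a in aliases]}
    candidates = [f for f in fields if f[0].lower() in wanted]
    return max(candidates, key=lambda f: f[2]) if candidates else None

# Illustrative extracted fields, echoing the example above.
fields = [("Company Name", "Allordable Lawn Care", 62),
          ("Business", "Affordable Lawn Care", 88),
          ("Phone", "589-802-2987", 97)]
best = find_key_value(fields, "company name",
                      aliases=("business", "corporation"))
```

When no field matches the key or its aliases (as with "Commission fee" later in the discussion), the search returns nothing, which is what ultimately surfaces the "Can't Find" disposition for the reviewer.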
In this sense, the second key value pair1108(2) may not represent a correct or accurate key value pair and, during the review, the reviewer may correct the second key value pair1108(2). The third key value pair1108(3) may indicate a key of "Commission fee" and a value may be blank. Here, for example, the content1104may have been searched for a commission fee, but no value may have been found within the content1104. The fourth key value pair1108(4) may indicate a key of "Term" but the search of the content1104may not surface the value from the content1104. As illustrated, each of the first key value pair1108(1), the second key value pair1108(2), the third key value pair1108(3), and the fourth key value pair1108(4) may have been identified or predicted as key value pairs within the content1104, as indicated by the checked "YES" box within the second portion1106. During the review, the reviewer may correct such classification or entries. Additionally, as noted above, the first key value pair1108(1), the second key value pair1108(2), the third key value pair1108(3), and the fourth key value pair1108(4) may be requested by the user and/or the search may surface these key value pairs for review, despite not being requested by the user. In some instances, the key value pairs within the second portion1106may be presented in order of importance, order of confidence, or in any other manner. For example, the first key value pair1108(1) may be the key value pair with the highest confidence as determined by the ML model(s), while the fourth key value pair1108(4) may be the key value pair with the lowest confidence as determined by the ML model(s). However, although presented in a specific order, the reviewer may choose to review the first key value pair1108(1), the second key value pair1108(2), the third key value pair1108(3), and the fourth key value pair1108(4) in any order.
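The confidence-ordered presentation described above can be sketched as a simple sort. This is an illustrative assumption about one possible ordering; the passage notes that importance or any other ordering could be used instead.

```python
def order_for_review(pairs):
    """Queue predicted (key, value, confidence) entries for the reviewer
    from highest to lowest pairing confidence, one possible ordering
    mentioned above."""
    return sorted(pairs, key=lambda p: p[2], reverse=True)

# Hypothetical confidences for the four pairs discussed above.
predicted = [("Term", None, 31),
             ("Company name", "Allordable Lawn Care", 86),
             ("Commission fee", None, 40),
             ("[email protected]", "589-802-2987", 55)]
queue = order_for_review(predicted)
```

Under this ordering the most confident pair is shown first and the least confident last, matching the example in the passage; the reviewer remains free to work through them in any order.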
Additionally, although the user interface1100illustrates four key value pairs being presented at a single time, in some instances, the user interface1100may present one key value pair at a time. For example, the second portion1106may display the first key value pair1108(1) for review, and after the reviewer reviews the first key value pair1108(1), the second key value pair1108(2) may be displayed. This process may repeat until all the key value pairs are reviewed by the reviewer. Although the user interface1100illustrates the second portion1106presenting four key value pairs, in some instances, the second portion1106and/or the user interface1100may present other prompts or requests for the reviewer to perform. For example, the second portion1106may request the reviewer to locate values for certain keys. Such a prompt may ask the reviewer to locate key value pairs within the content1104. Additionally, or alternatively, the reviewer may add in additional detail that is not based on the ML model(s) predictions or outputs, but rather may be additional information within the content1104. For example, the reviewer may label or identify objects within the content1104. In instances where more than or less than four key value pairs are presented for review, the reviewer may scroll (e.g., vertically) within the second portion1106to display or surface more key value pairs for review. FIG.12illustrates a user interface1200showing example instructions that may be presented to the reviewer during a review of the content1104. In some instances, the instructions may be presented in conjunction with the first portion1102and/or the second portion1106. As shown, the first portion1102and/or the second portion1106may not be to scale on the user interface1200in order to discuss the instructions presented during the review. In some instances, the instructions may be presented within a third portion1202that is positioned adjacent (e.g., to the left of) the first portion1102.
However, in some examples, the instructions may be presented elsewhere within the user interface1200. Additionally, or alternatively, in some instances, the user interface1200may present the instructions and then the reviewer may hide the instructions within the user interface1200and/or may review the instructions before reviewing the content1104, at which time the instructions may be removed from the user interface1200. Using the instructions, the reviewer may review the content1104. FIG.13illustrates a user interface1300showing the user adjusting the first key value pair1108(1). Here, the reviewer may be permitted to adjust the first key value pair1108(1) through hovering a mouse, pointer, or other indicator within an area1302of the second portion1106associated with the first key value pair1108(1). Additionally, clicking or hovering within the area1302may indicate the predicted key and/or the predicted value within the content1104presented within the first portion1102. For example, the user interface1300may display a first box1304around the predicted key of the first key value pair1108(1) and a second box1306around the predicted value of the first key value pair1108(1). The first box1304may assist the reviewer in locating the key of the first key value pair1108(1) within the content1104while the second box1306may assist the reviewer in locating the value of the first key value pair1108(1) within the content1104. In other words, the first box1304and the second box1306may be used by the reviewer when reviewing the content1104for determining whether the first key value pair1108(1) is actually a key value pair. Upon clicking or hovering within the area1302the reviewer may modify one or more characteristics of the determined first key value pair1108(1). 
For example, the reviewer may correct the value from "Allordable Lawn Care" to "Affordable Lawn Care." During the searching of the content1104, for example, the search may have correctly identified the first key value pair1108(1) as a correct or accurate key value pair, but may have erred in the spelling of the value of the first key value pair1108(1). As such, the reviewer may indicate that the first key value pair1108(1) is an accurate key value pair, as "Affordable Lawn Care" is the "Company name" within the content1104, through keeping the "YES" box checked. After correcting the spelling, the user interface1300may update the first key value pair1108(1) as displayed within the second portion1106. In some instances, the key and the value of the first key value pair1108(1) may be highlighted or otherwise indicated within the content1104. For example, the first box1304may include a first color, or first highlight, while the second box1306may include a second color, or second highlight. Such indications may visually assist the user in locating the first key value pair1108(1) within the content1104for determining whether the first key value pair1108(1) is an accurate key value pair and/or adjusting the key value pair. FIG.14illustrates a user interface1400showing the user adjusting the second key value pair1108(2). Here, the reviewer may be permitted to adjust the second key value pair1108(2) through hovering a mouse, pointer, or other indicator within an area1402of the second portion1106associated with the second key value pair1108(2). Additionally, clicking or hovering within the area1402may indicate the predicted key and/or the predicted value within the content1104presented within the first portion1102. For example, the user interface1400may display a first box1404around the predicted key of the second key value pair1108(2) and a second box1406around the predicted value of the second key value pair1108(2).
The first box1404may assist the reviewer in locating the key of the second key value pair1108(2) within the content1104while the second box1406may assist the reviewer in locating the predicted value of the second key value pair1108(2) within the content1104for use in determining whether the second key value pair1108(2) is actually a key value pair. Upon clicking or hovering within the area1402, the reviewer may modify one or more characteristics of the determined second key value pair1108(2). For example, the predicted key (i.e., "[email protected]") may not be a key of the predicted value (i.e., 589-802-2987). Instead, by way of example, a key may include "email address" and an associated value may include "[email protected]" and/or a key may include "phone number" and an associated value may include "589-802-2987." However, the key and the value of the second key value pair1108(2) may not be associated or related to one another. Accordingly, as shown, the reviewer may click within a "NO" box presented within the user interface1400to indicate that "[email protected]" and 589-802-2987 are not a key value pair. Such an indication that the second key value pair1108(2), as predicted, is not a key value pair may be used to update or retrain one or more ML model(s) for more accurately identifying key value pairs within the content1104or additional content. In some instances, the predicted key and the predicted value of the second key value pair1108(2) may be highlighted or otherwise indicated within the content1104(e.g., highlighted). Additionally, the user interface1400illustrates that the value of the first key value pair1108(1) within the second portion1106has been updated with "Affordable Lawn Care" to indicate the correct spelling and based on the reviewer correcting the spelling of the value, as discussed above inFIG.13. FIG.15illustrates a user interface1500showing the user adjusting the third key value pair1108(3).
Here, the reviewer may be permitted to adjust the third key value pair1108(3) through hovering a mouse, pointer, or other indicator within an area1502of the second portion1106associated with the third key value pair1108(3). As shown, the third key value pair1108(3) may include a predicted key of "Commission fee" while the predicted value may be left blank. Here, for example, the search of the content1104may be unable to locate a value of the predicted key associated with the third key value pair1108(3) within the content1104. Additionally, or alternatively, the content1104may not include the key, or aliases of the key (e.g., aliases of "Commission fee"). For example, as shown, the first portion1102of the user interface1500may not include boxes that identify the predicted key within the content1104. Upon reviewing the third key value pair1108(3), for example, the reviewer may review or otherwise scan the content1104in an attempt to locate a commission fee or aliases of a commission fee (e.g., transaction fee, sales commission, transaction cost, etc.). Here, however, as shown, the content1104may not include such terms, or keys, and hence, the reviewer may click or select a box "Can't Find." This selection may indicate that the reviewer is unable to find a commission fee (or like aliases) within the content1104. Such an indication may be utilized to indicate that the content1104does not include a commission fee. Additionally, the user interface1500illustrates that the second key value pair1108(2) within the second portion1106has been updated to indicate that the key and the value are not a key value pair. FIG.16illustrates a user interface1600showing the user adjusting the fourth key value pair1108(4). Here, the reviewer may be permitted to adjust the fourth key value pair1108(4) through hovering a mouse, pointer, or other indicator within an area1602of the second portion1106associated with the fourth key value pair1108(4).
Additionally, clicking or hovering within the area1602may indicate the predicted key and/or the predicted value within the content1104presented within the first portion1102. For example, the user interface1600may display a first box1604around the predicted key of the fourth key value pair1108(4) and a second box1606within the content1104associated with the predicted value of the fourth key value pair1108(4). The first box1604may assist the reviewer in locating the key of the fourth key value pair1108(4) within the content1104while the second box1606may assist the reviewer in locating the predicted value of the fourth key value pair1108(4) within the content1104. However, as shown, the second box1606may not include a value (or a value associated with the term). That is, while the key "term" was identified within the content1104, the content1104may not include a value for the key. In some instances, the second box1606may be located within an area of the content1104associated with a predicted location of the value of the key. As such, because the content1104does not include a value for the key "term," the reviewer may select a box "value is blank" within the second portion1106of the user interface1600associated with the fourth key value pair1108(4). Such an indication may be utilized to indicate that the content1104includes the key "term" but does not include an associated value. In some instances, the predicted key and the predicted value of the fourth key value pair1108(4) may be highlighted or otherwise indicated within the content1104(e.g., highlighted). Additionally, the user interface1600illustrates that the value of the third key value pair1108(3) within the second portion1106has been updated to indicate that the reviewer cannot find the value associated with the key (e.g., Commission fee) within the content1104. FIG.17illustrates a user interface1700after the reviewer has reviewed the key value pairs within the second portion1106.
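The four reviewer dispositions described across these interfaces (confirming a pair, rejecting a pair, finding no key, and finding a key with no value) can be captured as an explicit result record. The enum names and record fields below are illustrative assumptions, not identifiers from the patent.

```python
from enum import Enum

class ReviewOutcome(Enum):
    """Reviewer dispositions seen in the interfaces described above."""
    CONFIRMED = "yes"            # key and value are a pair (possibly corrected)
    NOT_A_PAIR = "no"            # the two fields are unrelated
    KEY_NOT_FOUND = "cant_find"  # key (and aliases) absent from the content
    VALUE_BLANK = "value_blank"  # key present but no associated value

def record_review(key, predicted_value, outcome, corrected_value=None):
    """Bundle a single review into a record that can later be used to
    audit the service or retrain the extraction model(s)."""
    return {"key": key,
            "predicted_value": predicted_value,
            "outcome": outcome.value,
            "corrected_value": corrected_value}

r = record_review("Company name", "Allordable Lawn Care",
                  ReviewOutcome.CONFIRMED,
                  corrected_value="Affordable Lawn Care")
```

Keeping both the predicted and corrected values in the record preserves the model's mistake alongside the reviewer's fix, which is what makes the record usable for retraining.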
For example, the user interface1700shows the first key value pair1108(1), the second key value pair1108(2), the third key value pair1108(3), and the fourth key value pair1108(4) being adjusted or confirmed as described above with regards to the user interface1300, the user interface1400, the user interface1500, and/or the user interface1600, respectively. After performing the reviews, the reviewer may submit the review. Thereafter, the reviews (e.g., confirmations and/or adjustments) by the reviewer may be utilized to confirm the accuracy of the ML model(s) that predicted the first key value pair1108(1), the second key value pair1108(2), the third key value pair1108(3), and the fourth key value pair1108(4). The ML model(s) may then be retrained based on the reviews to permit more accurate predictions in future instances. Furthermore, after submitting the reviews of the content1104, additional content may be presented for review. In this sense, the reviewer may review the content1104, perform the reviews associated with the content1104, and after submitting the reviews, may be presented an additional piece of content for review. In this way, the human reviewers may add a next level of intelligence for the reviews. This additional piece of content may be associated with respective reviews that are similar to and/or different than the reviews of the content1104. For example, the reviewer may be presented reviews associated with objects identified in an image. While various examples and embodiments are described individually herein, the examples and embodiments may be combined, rearranged and modified to arrive at other variations within the scope of this disclosure. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described.
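The step from submitted reviews to retraining can be sketched as converting review records into labeled training examples: confirmed (and corrected) pairs become positives, rejected pairs become negatives. This is a minimal sketch under assumed record fields; the patent does not specify the retraining pipeline or its data format.

```python
def to_training_examples(review_records):
    """Convert submitted reviewer records into (key, value, label)
    training examples: confirmed pairs (using the corrected value when
    one was supplied) become positives, rejected pairs become negatives;
    'can't find' / 'value blank' records carry no usable pair and are
    skipped here."""
    examples = []
    for r in review_records:
        if r["outcome"] == "yes":
            value = r.get("corrected_value") or r["predicted_value"]
            examples.append((r["key"], value, 1))
        elif r["outcome"] == "no":
            examples.append((r["key"], r["predicted_value"], 0))
    return examples

records = [
    {"key": "Company name", "predicted_value": "Allordable Lawn Care",
     "outcome": "yes", "corrected_value": "Affordable Lawn Care"},
    {"key": "[email protected]", "predicted_value": "589-802-2987",
     "outcome": "no", "corrected_value": None},
    {"key": "Commission fee", "predicted_value": None,
     "outcome": "cant_find", "corrected_value": None},
]
examples = to_training_examples(records)
```

Feeding both positive and negative examples back to the model is what lets the human reviews improve future predictions, as the passage describes.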
Rather, the specific features and acts are disclosed as illustrative forms of implementing the claims.
11861513

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

The present invention may be a system, a method, and/or a computer program product for detecting and monitoring bias in a software application using artificial intelligence (AI). The computer program product may include a computer-readable storage medium (or media) having computer-readable program instructions thereon for causing a processor to carry out aspects of the present invention. A network environment10with an example of a bias monitoring computing system14is illustrated inFIGS.1-2. In this particular example, the environment10includes the bias monitoring computing system14, one or more client devices12(1)-12(n), one or more training data servers16(1)-16(n), and one or more application servers17(1)-17(n) coupled via one or more communication networks30, although the environment could include other types and numbers of systems, devices, components, and/or other elements as is generally known in the art and will not be illustrated or described herein. This technology provides a number of advantages including providing methods, non-transitory computer readable medium, and systems that detect and monitor bias in a software application using artificial intelligence (AI). Referring more specifically toFIGS.1-2, the bias monitoring computing system14is programmed to detect and monitor bias in an application using artificial intelligence. Now referring toFIG.2, the bias monitoring computing system14can employ a hub architecture including a north bridge and memory controller hub (NB/MCH)201and south bridge and input/output (I/O) controller hub (SB/ICH)202. Processing unit203, main memory204, and graphics processor205can be connected to the NB/MCH201. Graphics processor205can be connected to the NB/MCH201through an accelerated graphics port (AGP). In the depicted example, the network adapter206connects to the SB/ICH202.
The audio adapter207, keyboard and mouse adapter208, modem209, read-only memory (ROM)210, hard disk drive (HDD)211, optical drive (CD or DVD)212, universal serial bus (USB) ports and other communication ports213, and the PCI/PCIe devices214can connect to the SB/ICH202through bus system216. PCI/PCIe devices214may include Ethernet adapters, add-in cards, and PC cards for notebook computers. ROM210may be, for example, a flash basic input/output system (BIOS). The HDD211and optical drive212can use an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface. The super I/O (SIO) device215can be connected to the SB/ICH202. An operating system can run on processing unit203. The operating system can coordinate and provide control of various components within the bias monitoring computing system14. As a client, the operating system can be a commercially available operating system. An object-oriented programming system, such as the Java™ programming system, may run in conjunction with the operating system and provide calls to the operating system from the object-oriented programs or applications executing on the bias monitoring computing system14. As a server, the bias monitoring computing system14can be an IBM® eServer™ System p running the Advanced Interactive Executive operating system or the Linux operating system. The bias monitoring computing system14can be a symmetric multiprocessor (SMP) system that can include a plurality of processors in the processing unit203. Alternatively, a single processor system may be employed. Instructions for the operating system, the object-oriented programming system, and applications or programs are located on storage devices, such as the HDD211, and are loaded into the main memory204for execution by the processing unit203.
The processes for embodiments of the bias monitoring system can be performed by the processing unit203using computer usable program code, which can be located in a memory such as, for example, main memory204, ROM210, or in one or more peripheral devices. A bus system216can be comprised of one or more busses. The bus system216can be implemented using any type of communication fabric or architecture that can provide for a transfer of data between different components or devices attached to the fabric or architecture. A communication unit such as the modem209or network adapter206can include one or more devices that can be used to transmit and receive data. Those of ordinary skill in the art will appreciate that the hardware depicted inFIG.2may vary depending on the implementation. For example, the bias monitoring computing system14includes several components that would not be directly included in some embodiments illustrated inFIGS.3-6C. However, it should be understood that the embodiments illustrated inFIGS.3-6Cmay include one or more of the components and configurations of the bias monitoring computing system14for performing processing methods and steps in accordance with the disclosed embodiments. Moreover, other internal hardware or peripheral devices, such as flash memory, equivalent non-volatile memory, or optical disk drives may be used in addition to or in place of the hardware depicted. Moreover, the bias monitoring computing system14can take the form of any of a number of different data processing systems, including but not limited to, client computing devices, server computing devices, tablet computers, laptop computers, telephone or other communication devices, personal digital assistants, and the like. Essentially, the bias monitoring computing system14can be any known or later developed data processing system without architectural limitation.
Referring back toFIG.1, each of the one or more client devices12(1)-12(n) may include a processor, a memory, user input device, such as a keyboard, mouse, and/or interactive display screen by way of example only, a display device, and a communication interface, which are coupled together by a bus or other link, although each may have other types and/or numbers of other systems, devices, components, and/or other elements. In this example, the bias monitoring computing system14interacts with the one or more client devices12(1)-12(n) via the communication network30to receive requests to access applications executing on the one or more application servers17(1)-17(n), although the bias monitoring computing system14can receive other types of requests. Each of the one or more training data servers16(1)-16(n) may store and provide training data to the bias monitoring computing system14via one or more of the communication networks30, for example, although other types and/or numbers of storage media in other configurations could be used. In this particular example, each of the one or more training data servers16(1)-16(n) may comprise various combinations and types of storage hardware and/or software and represent a system with multiple network server devices in a data storage pool, which may include internal or external networks. Various network processing applications, such as CIFS applications, NFS applications, HTTP Web Network server device applications, and/or FTP applications, may be operating on the one or more training data servers16(1)-16(n) and may transmit data in response to requests from the bias monitoring computing system14. Each of the one or more training data servers16(1)-16(n) may include a processor, a memory, and a communication interface, which are coupled together by a bus or other link, although each may have other types and/or numbers of other systems, devices, components, and/or other elements.
Each of the one or more application servers17(1)-17(n) may store and provide access to the applications to the one or more client devices12(1)-12(n) via the bias monitoring computing system14and one or more of the communication networks30, for example, although other types and/or numbers of storage media in other configurations could be used. In this particular example, each of the one or more application servers17(1)-17(n) may comprise various combinations and types of storage hardware and/or software and represent a system with multiple network server devices in a data storage pool, which may include internal or external networks. Various network processing applications, such as CIFS applications, NFS applications, HTTP Web Network server device applications, and/or FTP applications, may be operating on the one or more application servers17(1)-17(n) and may transmit data in response to requests from the bias monitoring computing system14. Each of the one or more application servers17(1)-17(n) may include a processor, a memory, and a communication interface, which are coupled together by a bus or other link, although each may have other types and/or numbers of other systems, devices, components, and/or other elements. The non-transitory computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The non-transitory computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A non-transitory computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. The non-transitory computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a communication network30, for example, the Internet, a local area network (LAN), a wide area network (WAN) and/or a wireless network. The communication network30may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of communication network 30, including a LAN or WAN, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions.
In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The present description and claims may make use of the terms “a,” “at least one of,” and “one or more of,” with regard to particular features and elements of the illustrative embodiments. It should be appreciated that these terms and phrases are intended to state that there is at least one of the particular features or elements present in the particular illustrative embodiment, but that more than one can also be present. That is, these terms/phrases are not intended to limit the description or claims to a single feature/element being present or require that a plurality of such features/elements be present. To the contrary, these terms/phrases only require at least a single feature/element with the possibility of a plurality of such features/elements being within the scope of the description and claims. In addition, it should be appreciated that the following description uses a plurality of various examples for various elements of the illustrative embodiments to further illustrate example implementations of the illustrative embodiments and to aid in the understanding of the mechanisms of the illustrative embodiments. These examples are intended to be non-limiting and are not exhaustive of the various possibilities for implementing the mechanisms of the illustrative embodiments. 
It will be apparent to those of ordinary skill in the art in view of the present description that there are many other alternative implementations for these various elements that may be utilized in addition to, or in replacement of, the examples provided herein without departing from the spirit and scope of the present invention. The system and processes of the Figures are not exclusive. Other systems, processes, and menus may be derived in accordance with the principles of the embodiments described herein to accomplish the same objectives. It is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art without departing from the scope of the embodiments. As described herein, the various systems, subsystems, agents, managers, and processes can be implemented using hardware components, software components, and/or combinations thereof. No claim element herein is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for.” An exemplary method for detecting and monitoring bias will now be illustrated with reference to FIGS. 3-6C. Referring particularly to FIG. 3, the exemplary method 300 begins at step 305, where the bias monitoring computing system 14 indexes the training data to enable instant word searching. In this example, indexing of the training data can be done by applying a full-text search index that allows calculation of the correlation between the values of a feature of an application and a target variable. By indexing the training data, the disclosed technology is able to search for the feature value that has a high correlation with the target variable, and the index can also be adapted to additional data.
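The indexing idea in step 305 can be sketched as an inverted index over structured training records, so that the per-value counts needed later for the correlation calculation are answered instantly. This is a hypothetical simplification: the dict-based index and the field names below are illustrative assumptions standing in for the full-text search index the system actually applies.

```python
# Hypothetical sketch of step 305: index structured training records so that
# per-value counts can be answered instantly. A dict-based inverted index
# stands in for the full-text search index described in the text.
from collections import defaultdict

def build_index(records):
    index = defaultdict(set)  # (column, value) -> ids of records containing it
    for rid, rec in enumerate(records):
        for column, value in rec.items():
            index[(column, value)].add(rid)
    return index

records = [{"gender": "M", "target": "No Risk"},
           {"gender": "F", "target": "No Risk"},
           {"gender": "M", "target": "Risk"}]
index = build_index(records)
# Count records that are both gender M and favorable by intersecting postings.
print(len(index[("gender", "M")] & index[("target", "No Risk")]))  # 1
```

Because the index is kept separate from the model, new feedback records can simply be appended and re-indexed without retraining.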
In this example, the bias monitoring computing system 14 can obtain the training data from one of the one or more training data servers 16(1)-16(n), although the training data can be obtained from other memory locations. An example of step 305 will now be further illustrated with reference to FIGS. 4A and 4B. As illustrated in FIG. 4A, the bias monitoring computing system 14 enters the hostname or the internet protocol (IP) address, the secure socket layer (SSL) port, and the database present in one of the one or more training data servers 16(1)-16(n) to get the training data. Next, as illustrated in FIG. 4B, the bias monitoring computing system 14 selects the specific training data from the training table by including the name of the schema and the table, although other types of information can be included. In this example, the artificial intelligence models are supervised learning models, and the model types can be binary classification, multi-class classification, or regression. Further in this example, the input training data can be structured data. Next, in step 310, the bias monitoring computing system 14 determines the correlation between the values of each feature and a target variable using the indexed data and the trained artificial intelligence models illustrated above in step 305. By way of example, the target variable relates to the values predicted by machine learning. Next, in step 315, the bias monitoring computing system 14, for each value of a feature, calculates a correlation difference F to determine whether the values are favorable or unfavorable. In this example, favorable relates to a preferred or desired result of judgment by the machine learning model, and unfavorable relates to a non-preferred or undesired result of judgment by the machine learning model. By way of example, in a two-value classification model of approval/rejection of a loan application, an approval corresponds to favorable, and a rejection corresponds to unfavorable.
In this example, the bias monitoring computing system 14 uses the formula described below to calculate, for each value of a feature, the correlation with an outcome, along with the absolute differences and their total sum:

r(feature, favorable) = ((# of favorable with the feature value) / (# of that feature value)) / ((total # of favorable) / (total # of records))

An example of the use of the above formula will now be illustrated. The favorable and unfavorable values in this example are either “No Risk” or “Risk.” By way of example, suppose there are values M and F for the Gender feature and a total of 50 records, of which 20 records have gender M and the favorable value “No Risk,” 15 records have gender F and the favorable value “No Risk,” 10 records have gender M and the unfavorable value “Risk,” and 5 records have gender F and the unfavorable value “Risk.” Then calculating with “No Risk” as the target variable value and gender M as the feature value gives a value of ((20)/(20+10))/((20+15)/(50)) = 0.95. In step 320, the bias monitoring computing system 14 determines whether the calculation of the correlation has been performed for all features. If the bias monitoring computing system 14 determines that the calculation of the correlation has not been performed for all the features, then the No branch is taken back to step 315. However, when the bias monitoring computing system 14 determines that the calculation of the correlation has been performed for all features, then the Yes branch is taken to step 325. In step 325, the bias monitoring computing system 14 presents the tendency per feature and determines whether a correlation value calculated using feedback data, i.e., F(f), is present. If the bias monitoring computing system 14 determines that F(f) is present, then the Yes branch is taken to step 330. In this example, the value of F per feature illustrated in step 315 is obtained by the following formula, with r(feature, favorable/unfavorable) computed for each value in the feature: F = sum(|r(feature, favorable) − r(feature, unfavorable)|).
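The correlation ratio r can be computed directly from raw records. The sketch below is an illustrative assumption (function and field names are not from the patent) that reproduces the worked Gender example.

```python
# Illustrative sketch of the correlation ratio r(feature value, outcome):
# r = ((# of outcome with the feature value) / (# of that feature value))
#     / ((total # of outcome) / (total # of records))

def correlation_r(records, feature, value, outcome, target="target"):
    total = len(records)
    with_value = [rec for rec in records if rec[feature] == value]
    hits = sum(1 for rec in with_value if rec[target] == outcome)
    total_outcome = sum(1 for rec in records if rec[target] == outcome)
    return (hits / len(with_value)) / (total_outcome / total)

# The worked example: 50 records, gender M/F, target "No Risk"/"Risk".
records = ([{"gender": "M", "target": "No Risk"}] * 20
           + [{"gender": "F", "target": "No Risk"}] * 15
           + [{"gender": "M", "target": "Risk"}] * 10
           + [{"gender": "F", "target": "Risk"}] * 5)
print(round(correlation_r(records, "gender", "M", "No Risk"), 2))  # 0.95
```

A value above 1 means the feature value co-occurs with the outcome more often than the dataset-wide base rate would suggest; a value below 1 means less often.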
Accordingly, F is determined for each feature, and the tendency per feature is useful for selecting a feature to be monitored. In step 330, the bias monitoring computing system 14 proposes a feature which satisfies F(t)*a <= F(f) as a candidate of a feature to be monitored. In this example, F(t) means F calculated with the training data and F(f) means F calculated with the feedback data, and the formula is used for each feature, where a can take any value. An example of step 330 is illustrated in FIG. 5, and the exemplary flow proceeds to step 340, which will be further illustrated below. As illustrated in FIG. 5, the bias monitoring computing system 14 proposes gender and age as the features that satisfy the above-illustrated formula. However, if in step 325 the bias monitoring computing system 14 determines that F(f) is not present, then the No branch is taken to step 335. In step 335, the bias monitoring computing system 14, referring to ‘r’ of a selected feature, proposes values of the selected feature having a high correlation with favorable as candidates of the majority, and values of the selected feature having a high correlation with unfavorable as candidates of the minority, as illustrated in FIGS. 6A-6C. By way of example, the majority relates to a feature value group that is more likely to contribute to favorable, and the minority relates to a feature value group that is more likely to contribute to unfavorable. Taking gender as an example of a feature in a two-value classification model of approval/rejection of a loan application, males belong to the majority and females belong to the minority. Alternatively, taking annual income as an example of a feature, a person with an annual income of 10 million JPY or more belongs to the majority, and a person with an annual income of 3 million JPY or less belongs to the minority.
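The per-feature score F and the monitoring criterion F(t)*a <= F(f) can be sketched as follows; this is a hypothetical illustration (the helper names and the choice a = 1.0 are assumptions), reusing the Gender example from above.

```python
# Sketch of the per-feature score F = sum(|r(v, favorable) - r(v, unfavorable)|)
# over all values v of a feature, and the criterion F(t)*a <= F(f) used in
# step 330 to propose monitoring candidates. Names here are illustrative.

def correlation_r(records, feature, value, outcome, target="target"):
    with_value = [rec for rec in records if rec[feature] == value]
    hits = sum(1 for rec in with_value if rec[target] == outcome)
    total_outcome = sum(1 for rec in records if rec[target] == outcome)
    return (hits / len(with_value)) / (total_outcome / len(records))

def feature_score(records, feature, favorable, unfavorable):
    values = {rec[feature] for rec in records}
    return sum(abs(correlation_r(records, feature, v, favorable)
                   - correlation_r(records, feature, v, unfavorable))
               for v in values)

def is_monitoring_candidate(f_training, f_feedback, a=1.0):
    # Propose the feature when F(t)*a <= F(f).
    return f_training * a <= f_feedback

records = ([{"gender": "M", "target": "No Risk"}] * 20
           + [{"gender": "F", "target": "No Risk"}] * 15
           + [{"gender": "M", "target": "Risk"}] * 10
           + [{"gender": "F", "target": "Risk"}] * 5)
f_gender = feature_score(records, "gender", "No Risk", "Risk")
print(round(f_gender, 3))  # 0.397 = |0.95 - 1.11| + |1.07 - 0.83|
```

A larger F indicates a larger gap between how strongly each value pulls toward the favorable versus the unfavorable outcome, which is why F is useful for ranking features to monitor.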
By way of example, suppose the favorable and unfavorable values are “No Risk” and “Risk”; “M” and “F” are the values associated with gender; the number of total records is 50, of which there are 20 records that have “M” in the “Gender” column and “No Risk” as the target variable, 15 records that have “F” in the “Gender” column and “No Risk” as the target variable, 10 records that have “M” in the “Gender” column and “Risk” as the target variable, and 5 records that have “F” in the “Gender” column and “Risk” as the target variable; and Gender is selected as the monitored feature. Then the value of r is calculated as follows: r(“M”, “No Risk”) = ((20)/(20+10))/((20+15)/(50)) = 0.95; r(“F”, “No Risk”) = ((15)/(15+5))/((20+15)/(50)) = 1.07; r(“M”, “Risk”) = ((10)/(20+10))/((10+5)/(50)) = 1.11; and r(“F”, “Risk”) = ((5)/(15+5))/((10+5)/(50)) = 0.83. Accordingly, in this illustrative example, the value of the selected feature having the higher correlation with favorable (“No Risk”) is “F,” so “F” is proposed as a candidate of the majority. On the other hand, the value of the selected feature having the higher correlation with unfavorable (“Risk”) is “M,” so “M” is proposed as a candidate of the minority. Next, in step 340, the bias monitoring computing system 14 provides the feedback data and indexes that data, and the exemplary flow proceeds to step 310. In this example, feedback data is new data that is in the same format as the training data and that is the right-answer data that should have been predicted by the model in response to the actual input to the model. Additionally, feedback data can be provided any number of times. By using the above-illustrated techniques, the disclosed technology is able to accurately identify a feature that has potential bias that otherwise may not be identified by a human, such as a data scientist. Additionally, indexing the training data allows adding new data easily and supports efficient computation.
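The majority/minority proposal of step 335 can be sketched as picking, for the monitored feature, the value with the highest r against the favorable outcome and the value with the highest r against the unfavorable outcome. The code below is an illustrative assumption, not the patented implementation.

```python
# Illustrative sketch of step 335: propose the value most correlated with the
# favorable outcome as a majority candidate, and the value most correlated
# with the unfavorable outcome as a minority candidate.

def correlation_r(records, feature, value, outcome, target="target"):
    with_value = [rec for rec in records if rec[feature] == value]
    hits = sum(1 for rec in with_value if rec[target] == outcome)
    total_outcome = sum(1 for rec in records if rec[target] == outcome)
    return (hits / len(with_value)) / (total_outcome / len(records))

def propose_majority_minority(records, feature, favorable, unfavorable):
    values = sorted({rec[feature] for rec in records})
    majority = max(values, key=lambda v: correlation_r(records, feature, v, favorable))
    minority = max(values, key=lambda v: correlation_r(records, feature, v, unfavorable))
    return majority, minority

# Same 50-record Gender example: r("F","No Risk")=1.07 beats 0.95, and
# r("M","Risk")=1.11 beats 0.83, so "F" is majority and "M" is minority.
records = ([{"gender": "M", "target": "No Risk"}] * 20
           + [{"gender": "F", "target": "No Risk"}] * 15
           + [{"gender": "M", "target": "Risk"}] * 10
           + [{"gender": "F", "target": "Risk"}] * 5)
print(propose_majority_minority(records, "gender", "No Risk", "Risk"))  # ('F', 'M')
```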
Even when an error in the setting of the target variable or the like is noticed and the setting is changed, preparing the index separately from the model actually used enables calculating the correlation dynamically, and thus the index is useful. Additionally, the disclosed technology can also be applied to new data and thus can handle changes in the features according to the trend of the times. By way of example, if Age is not selected as a monitored feature in an application when applying the disclosed technology to the training data, and the Age feature satisfies F(t)*a <= F(f) in step 330 when applying the disclosed technology to the new data called feedback data, then Age is newly recommended as a candidate of a feature to be monitored. Accordingly, by applying the disclosed technology every time new data is added, the candidates of features to be monitored are proposed from the latest calculation results, and changes in the tendency for each feature are also detected. Applying this technology to new data makes it possible to find a feature that could not be identified in the training data but has potential bias. Although the invention has been described with reference to exemplary embodiments, it is not limited thereto. Those skilled in the art will appreciate that numerous changes and modifications may be made to the preferred embodiments of the invention and that such changes and modifications may be made without departing from the true spirit of the invention. It is therefore intended that the appended claims be construed to cover all such equivalent variations as fall within the true spirit and scope of the invention.
11861514

This disclosure includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure. Within this disclosure, different entities (which may variously be referred to as “units,” “circuits,” other components, etc.) may be described or claimed as “configured” to perform one or more tasks or operations. This formulation—[entity] configured to [perform one or more tasks]—is used herein to refer to structure (i.e., something physical, such as an electronic circuit). More specifically, this formulation is used to indicate that this structure is arranged to perform the one or more tasks during operation. A structure can be said to be “configured to” perform some task even if the structure is not currently being operated. A “computer system configured to generate a dataset” is intended to cover, for example, a computer system that has circuitry that performs this function during operation, even if the computer system in question is not currently being used (e.g., a power supply is not connected to it). Thus, an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible. Thus, the “configured to” construct is not used herein to refer to a software entity such as an application programming interface (API). The term “configured to” is not intended to mean “configurable to.” An unprogrammed FPGA, for example, would not be considered to be “configured to” perform some specific function, although it may be “configurable to” perform that function and may be “configured to” perform the function after programming.
Reciting in the appended claims that a structure is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) for that claim element. Accordingly, none of the claims in this application as filed are intended to be interpreted as having means-plus-function elements. Should Applicant wish to invoke Section 112(f) during prosecution, it will recite claim elements using the “means for” [performing a function] construct. As used herein, the terms “first,” “second,” etc. are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless specifically stated. For example, references to “first” and “second” machine learning algorithms would not imply an ordering between the two unless otherwise stated. As used herein, the term “based on” is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect a determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors. Consider the phrase “determine A based on B.” This phrase specifies that B is a factor that is used to determine A or that affects the determination of A. This phrase does not foreclose that the determination of A may also be based on some other factor, such as C. This phrase is also intended to cover an embodiment in which A is determined based solely on B. As used herein, the phrase “based on” is thus synonymous with the phrase “based at least in part on.” As used herein, the word “module” refers to structure that stores or executes a set of operations. A module refers to hardware that implements the set of operations, or a memory storing a set of instructions that, when executed by one or more processors of a computer system, cause the computer system to perform the set of operations.
A module may thus include an application-specific integrated circuit implementing the instructions, a memory storing the instructions and one or more processors executing said instructions, or a combination of both.

DETAILED DESCRIPTION

Referring now to FIG. 1, a block diagram of an exemplary embodiment of a computer system 100 is depicted. In various embodiments, computer system 100 receives a plurality of images 120 and prepares a training dataset 130 using the plurality of images 120 and user input. In various embodiments, computer system 100 employs a first machine learning algorithm 102, a dimensionality reduction algorithm 104, a clustering algorithm 106, a second machine learning algorithm 110, and a user interface 108 to prepare training dataset 130 using images 120. In various embodiments, computer system 100 is any of a number of computing systems configured to receive images 120, receive user input, and prepare training dataset 130. In various embodiments, computer system 100 is implemented with a single computing system (e.g., a single server, desktop computer, laptop computer, tablet computer, or smart phone) but in other embodiments is implemented with a plurality of computers working together (e.g., a cloud of servers). In various embodiments, a first portion of computer system 100 (e.g., a server or cloud of servers) is configured to perform the various algorithms, and a second portion of computer system 100 (e.g., a laptop computer, a tablet computer) is configured to implement user interface 108 to present information to the user and to receive information from the user. In various embodiments, the plurality of images 120 can be any of a group of images that, with metadata such as a user classification label, are useable to be included in a training dataset 130. In various embodiments, for example, images 120 include images of cells or other biological specimens.
In various embodiments, these images of cells include a plurality of multispectral images of cells, a plurality of multimodal images of the cells, or both. Such images, for example, may have been created using fluorescence imagery in which a specimen is dyed with fluorescent dye and excited with a light source. The disclosed techniques, however, are not merely limited to images of cells and can be used on any type of images that can be included in a training dataset 130 (e.g., pictures of plants, pictures of animals, pictures taken from a vehicle traveling on a street of its surroundings, images of human faces, etc.). In various embodiments, the number of images 120 can vary depending on criteria for the target machine learning algorithm, the amount of acceptable training time for the target machine learning algorithm, and the desired amount of precision in the target machine learning algorithm. For example, a larger set of images 120 can be turned into a larger training dataset 130. The amount of time needed to train the target machine learning algorithm increases as the size of the training dataset 130 increases, but in various embodiments the precision of the target machine learning algorithm may also increase. In various embodiments, the plurality of images 120 includes between 500 and 3000 images 120. In various embodiments, images 120 are randomly selected from a larger pool of images. In various embodiments, training dataset 130 is useable for training a target machine learning algorithm to classify other images (i.e., images other than images 120). In such embodiments, training dataset 130 includes some or all of images 120 and the user classification labels applied to these images 120 as discussed herein. As used herein, “target machine learning algorithm” includes any image recognition algorithm that, when trained with a training dataset 130, is useable to classify images.
For example, in the embodiments discussed herein in connection with FIGS. 5A-5F, training dataset 130 includes images of cells with a user classification label identifying the number of nuclei in each image (e.g., one, two, or three or more). After being trained with training dataset 130, the target machine learning algorithm is operable to determine whether other images include one, two, or three or more nuclei. In various embodiments, first machine learning algorithm 102 is any of a number of algorithms executable to analyze images (e.g., by analyzing pixels of the image) and derive features from these images 120 to generate a dataset of image-derived features. In various embodiments, first machine learning algorithm 102 is a convolutional neural network (CNN). In some of such embodiments, first machine learning algorithm 102 is the Inception V3 convolutional neural network, which has been trained on a large database of images from ImageNet. In embodiments where first machine learning algorithm 102 is a CNN, the image-derived features for an image are “bottleneck features” that can be used to describe the contents of the image and to differentiate between different images 120. In various instances, there may be thousands of bottleneck features per image. The output of the first machine learning algorithm 102 includes a multi-dimensional dataset (e.g., one dimension per feature of the image 120) in various embodiments. For example, after analyzing images 120, first machine learning algorithm 102 generates a dataset of 2048 features per channel of images 120. In various instances, the plurality of images 120 includes between one to twelve channels of images.
In various embodiments, dimensionality reduction algorithm 104 is any of a number of algorithms executable to reduce the dimensionality of the multi-dimensional dataset output by first machine learning algorithm 102 to a dimensionally-reduced dataset by reducing the number of random variables under consideration and obtaining a set of principal variables. In various embodiments, dimensionality reduction algorithm 104 reduces the dimensionality by several orders of magnitude. For example, in some embodiments first machine learning algorithm 102 outputs 2048 features for each image 120, and dimensionality reduction algorithm 104 reduces the dimensionality of this dataset to three or fewer dimensions. In various embodiments, dimensionality reduction algorithm 104 can be one or more of principal component analysis (PCA), uniform manifold approximation and projection (UMAP), or t-distributed stochastic neighbor embedding (t-SNE). In various embodiments, dimensionality reduction algorithm 104 is also executable to take input from second machine learning algorithm 110. As discussed herein, second machine learning algorithm 110 is executable to output predicted classification labels for unlabeled images 120 based on user classification labels received via user interface 108. Dimensionality reduction algorithm 104 is executable, for each unlabeled image 120, to take these predicted classification labels into account along with the multi-dimensional dataset output by first machine learning algorithm 102 to generate another reduced-dimension dataset having, for example, three or fewer dimensions. In various embodiments, dimensionality reduction algorithm 104 is executable to output this reduced-dimension dataset to clustering algorithm 106. In such embodiments, clustering algorithm 106 is executable to determine clusters of datapoints within the reduced-dimension dataset.
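As one deliberately simplified illustration of the PCA option named above, the sketch below projects feature vectors onto their top principal component using power iteration on the covariance matrix. This is an assumption-laden toy (tiny 2-D inputs, a hand-rolled solver); a real pipeline would more likely call a library implementation of PCA, UMAP, or t-SNE on the 2048-feature vectors.

```python
# Toy PCA sketch: reduce feature vectors to one dimension by projecting onto
# the top principal component, found by power iteration on the covariance
# matrix. Illustrative only; not the patented implementation.

def pca_first_component(X, iters=200):
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in X]
    # Covariance matrix C = (1/n) * X^T X on the centered data.
    C = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / n
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration converges to the top eigenvector
        v = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v, means

def project(X, v, means):
    return [sum((row[j] - means[j]) * v[j] for j in range(len(v))) for row in X]

# Points that vary mostly along the diagonal: the 1-D projection keeps that axis.
X = [[0.0, 0.1], [1.0, 0.9], [2.0, 2.1], [3.0, 2.9]]
v, means = pca_first_component(X)
scores = project(X, v, means)
```

The same idea scales to reducing thousands of bottleneck features per image to the two or three coordinates used for the cluster visualization.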
Clustering algorithm 106 may be any of a number of suitable clustering algorithms, including but not limited to k-means clustering or spectral clustering algorithms. In various embodiments, the number of clusters is set by the user, and the various datapoints in the reduced-dimension dataset are grouped into the nearest cluster. In various embodiments, the plurality of clusters is equal to X times Y clusters, where X is the number of groups into which a user wants to classify the images (e.g., the number of potential user classification labels) and Y is greater than or equal to 1. In various embodiments, Y is equal to five, for example, although other numbers can be used. In various embodiments, during the second or later iteration (i.e., after the user has input user classification labels and second machine learning algorithm 110 and dimensionality reduction algorithm 104 have output a dimensionally-reduced dataset with predicted classification labels), clustering algorithm 106 clusters datapoints corresponding to unlabeled images to the nearest classification label. This clustering is presented to the user as predicted classification labels via user interface 108. In various embodiments, user interface 108 is executable to present information to the user and receive input from the user such that the user can prepare training dataset 130 using images 120. In various embodiments, user interface 108 is a graphical user interface (GUI) that is executable to present a visual representation (e.g., visual representation 400 discussed herein in reference to FIG. 4) of the datapoints in the reduced-dimension dataset as icons grouped by cluster, in which each icon represents one or more particular datapoints.
In various embodiments, various portions of user interface 108 are selectable to cause the display of the one or more images 120 associated with the one or more particular datapoints, such as the icons themselves, a list of the clusters in the dataset, a list of user classification labels, a list of predicted classification labels, or a combination thereof. User interface 108 is also executable to receive user input of a user classification label for various ones of the images 120. User interface 108 is discussed in further detail herein in reference to FIGS. 4 and 5A-5F. In various embodiments, second machine learning algorithm 110 is executable to predict classification labels for unlabeled images 120 based on user classification labels input by the user for other images 120. In various embodiments, second machine learning algorithm 110 is an iterative optimization algorithm. In various embodiments, second machine learning algorithm 110 can be any suitable supervised learning algorithm, including but not limited to a stochastic gradient descent (SGD) model with logarithmic loss or a Random Forest model. As discussed herein, in various embodiments second machine learning algorithm 110 is executable to output to dimensionality reduction algorithm 104. In turn, clustering algorithm 106 clusters the datapoints for the unlabeled images 120 into the nearest classification label. In such embodiments, the results of this clustering are presented to the user using user interface 108 as predicted classification labels. In various instances, the user responds to the predicted classification labels by accepting them as user classification labels or rejecting them and either selecting a different user classification label, marking the classification label for the image 120 as unknown (e.g., leaving it to a second user to review), or excluding the image 120 from training dataset 130 altogether.
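One of the options named for the second machine learning algorithm, an SGD classifier with logarithmic (logistic) loss, can be sketched in a few lines: train on the user-labeled reduced-dimension datapoints, then predict labels for the still-unlabeled ones. The toy coordinates and hyperparameters below are assumptions for illustration, not the patented configuration.

```python
# Hedged sketch of an SGD classifier with logarithmic loss: learn from
# user-labeled datapoints, then predict classification labels for unlabeled
# ones. Toy data and hyperparameters are illustrative assumptions.
import math
import random

def train_sgd_logistic(X, y, epochs=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    order = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(order)  # stochastic: visit samples in random order
        for i in order:
            z = sum(wj * xj for wj, xj in zip(w, X[i])) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability of class 1
            g = p - y[i]                    # gradient of the logarithmic loss
            w = [wj - lr * g * xj for wj, xj in zip(w, X[i])]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Four user-labeled reduced-dimension datapoints and two unlabeled ones.
labeled_X = [(0.0, 0.2), (0.3, 0.1), (4.8, 5.0), (5.2, 4.7)]
labeled_y = [0, 0, 1, 1]
w, b = train_sgd_logistic(labeled_X, labeled_y)
unlabeled = [(0.1, 0.0), (5.0, 5.0)]
print([predict(w, b, x) for x in unlabeled])  # [0, 1]
```

The predicted labels play the role of the predicted classification labels that the user then accepts, rejects, or marks as unknown via the user interface.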
In various embodiments, the loop illustrated in FIG. 1 from dimensionality reduction algorithm 104 to clustering algorithm 106 to user interface 108 to second machine learning algorithm 110 and back to dimensionality reduction algorithm 104 iterates until all of images 120 have been labeled or excluded. The process of labeling images 120 is discussed in further detail in reference to FIGS. 3, 6, and 7 herein. In various embodiments, the techniques disclosed herein enable a user to more quickly and accurately prepare a training dataset 130 from a plurality of unlabeled images 120. Rather than the user having to look at each image 120 in isolation and assign a user classification label to that image 120 for inclusion in the training dataset 130, the various algorithms employed by computer system 100 provide the user with various aids in decision making to make the labeling process more efficient. This is especially important in instances where the decision of which label to apply to a particular image 120 is reviewed by an individual with particular training (for example, a microbiologist or radiologist) whose labor time is expensive. In various embodiments discussed herein, first machine learning algorithm 102, dimensionality reduction algorithm 104, and clustering algorithm 106 use machine-learning techniques to pre-sort images 120 into various clusters that are predicted to share visual characteristics and, in many cases, will be given the same user classification labels. A visual representation of the clustering (e.g., visual representation 400 discussed in connection with FIG. 4) as well as the images being labeled are displayed using user interface 108 in various embodiments. As discussed herein, the user is able to review the various clusters and assign user classification labels to multiple images 120 at the same time (e.g., by highlighting multiple images 120 and applying a label to each highlighted image).
As discussed herein, this process of clustering uses "unsupervised" (i.e., user input was not used in the initial clustering) training techniques that are then reviewed by a user to prepare labeled material for a training dataset 130. As discussed herein, after a number of user classification labels have been input, using second machine learning algorithm 110, computer system 100 is able to take the user's input into account to further streamline the labeling process in various embodiments. As discussed herein, by using second machine learning algorithm 110, computer system 100 is operable to predict which classification labels might be correct for some (or all) of the images 120 that remain unlabeled. In various embodiments, dimensionality reduction algorithm 104 factors the output of second machine learning algorithm 110 into generating a second dimensionally-reduced dataset that is then clustered using clustering algorithm 106. As discussed herein, user interface 108 is updated to show the user the clusters of predicted user classification labels (e.g., in a visual representation 512 discussed in connection with FIGS. 5E and 5F). As discussed herein, the user is able to review the various clusters and assign user classification labels to multiple images 120 at the same time (e.g., by highlighting multiple images 120 and applying a label to each highlighted image). As discussed herein, this process of clustering uses "semi-supervised" (i.e., the previous user input was used in the revised clustering, but the user has not yet reviewed all of the images 120) training techniques that are then reviewed by a user to prepare labeled material for a training dataset 130. Accordingly, in various embodiments, the techniques disclosed herein provide a user who is labeling images 120 for training dataset 130 with a guided path from unsupervised clustering to semi-supervised predictions while providing visualizations and an intuitive user interface to aid in decision making.
Referring now to FIG. 2, a sampling of images 120 is depicted. In various embodiments, each image 120 includes a visual portion and metadata 202 (e.g., the name of the particular image 120, when it was created, etc.). As discussed herein in connection with FIGS. 5A-5F, when a particular image 120 is being reviewed and labeled (e.g., using user interface 108), the image is represented using an object 200 in various embodiments. In various embodiments, the object 200 includes metadata 202 about the particular image (such as the name of the image shown in FIG. 2) and is selectable. As discussed herein, selecting object 200 allows the user to apply a user classification label (and/or respond to a predicted classification label) in various embodiments. FIG. 2 also includes a small number of examples of image-derived features 204. In various embodiments, first machine learning algorithm 102 derives various features from images 120. In various instances, these features are represented by a mathematical description of the pixel data of the image 120. Represented visually, however, these image-derived features 204 are portions of the image 120 that collectively describe the image 120 such that it can be differentiated from the other images 120. Accordingly, three image-derived features 204a, 204b, and 204c are shown in FIG. 2, although the number of image-derived features may be much greater than three as discussed above (e.g., thousands of features per image 120 in various embodiments). Referring now to FIG. 3, a flowchart illustrating an embodiment of a training dataset creation method 300 is shown. In various embodiments, the various actions associated with method 300 are performed with computer system 100. At block 302, a user inputs an unlabeled (or insufficiently labeled) training dataset (e.g., a plurality of images 120) for labeling to prepare training dataset 130. As discussed herein, the user may randomly select the images 120 to label from a larger collection of images.
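The derivation of image features 204 can be illustrated with a toy stand-in. First machine learning algorithm 102 would typically be a learned model (e.g., a pretrained convolutional network) yielding thousands of features per image; that choice is an assumption, and the per-block pixel statistics below merely play the same role so the sketch stays self-contained:

```python
import numpy as np

def derive_features(image, grid=4):
    """Toy stand-in for first machine learning algorithm 102: summarize a
    grayscale image as per-block mean/std pixel statistics.  A production
    system would instead use a learned embedding with thousands of
    features per image."""
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = image[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            feats.extend([block.mean(), block.std()])
    return np.array(feats)  # grid * grid * 2 features per image
```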
The user may input the images 120 by any suitable method, including but not limited to inserting storage media (e.g., a disk or hard drive) or downloading the images 120 to computer system 100. In various embodiments in which some of the techniques discussed herein are performed by computer systems 100 implemented on remote clouds of computers, the user may upload the images 120 to the cloud for processing. At block 304, computer system 100 derives (e.g., with first machine learning algorithm 102) features from the pixel data of the training dataset (e.g., features 204 shown in FIG. 2). At block 306, computer system 100 (e.g., with dimensionality reduction algorithm 104) reduces the dimensionality of the features 204. In various instances, dimensionality reduction is performed on the dataset of derived features 204 prepared by first machine learning algorithm 102 using the plurality of images. In other instances (e.g., when method 300 proceeds to block 306 from block 314), dimensionality reduction is performed on the dataset of derived features 204 while taking into account user classification labels that have been applied to some of the plurality of images 120. In either instance, dimensionality reduction algorithm 104 receives a relatively large-dimensional dataset for each of the plurality of images 120 (e.g., a dataset of 2048 features of an image 120) and reduces it down to a substantially smaller number of dimensions, such as two dimensions in some embodiments or three dimensions in other embodiments. At block 308, computer system 100 prepares a visual representation 400 (also referred to herein as an "object map") of the datapoints of the dimensionally-reduced dataset. As discussed in further detail in reference to FIG. 4, this visual representation 400 is a two-dimensional plot with icons representing one or more datapoints in the dimensionally-reduced dataset in various embodiments.
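The block 306 reduction can be sketched with principal component analysis standing in for dimensionality reduction algorithm 104 (an assumption for illustration; techniques such as t-SNE or UMAP would fit the same slot), taking a many-feature dataset down to two dimensions:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_dimensionality(feature_matrix, n_dims=2):
    """Reduce an (n_images x n_features) dataset, e.g. 2048 features per
    image 120, to 2 (or 3) dimensions for plotting as an object map."""
    return PCA(n_components=n_dims, random_state=0).fit_transform(feature_matrix)
```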
In other embodiments, the visual representation 400 is a three-dimensional plot with icons representing one or more datapoints in the dimensionally-reduced dataset. At block 310, a determination is made whether to predict classification labels for the plurality of images 120. In various embodiments, computer system 100 is configured to make this determination based on the number of user classification labels that have been input. For example, if the percentage of images 120 that are labeled is below a threshold (e.g., 30%, 40%, or any other threshold) or when no user classifications have been received, the determination is made automatically and method 300 proceeds to block 312. If the percentage is above the threshold, method 300 proceeds to block 316. In various embodiments, the determination is made by a user who determines whether method 300 should proceed to block 312 or block 316, and computer system 100 proceeds according to commands from the user. At block 312, computer system 100 clusters the dimensionally-reduced dataset (e.g., with clustering algorithm 106) into a predetermined number of clusters in various embodiments. In iterations of method 300 in which no predicted classification labels have been generated, clustering algorithm 106 clusters the datapoints into X times Y clusters, wherein X is the number of groups (e.g., the number of user classification labels) into which a user wants to classify the images, and Y is greater than or equal to 1 (e.g., 3, 4, 5). In various embodiments, these clusters are incorporated in visual representations 400 discussed herein in connection with FIGS. 4 and 5A-5F. At block 316, having determined to predict classification labels, computer system 100 (e.g., with second machine learning algorithm 110) predicts classification labels for the unlabeled images in the plurality of images 120. In iterations of method 300 in which classification labels have been predicted, the various datapoints are clustered into clusters for each user classification label.
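The block 312 clustering into X times Y clusters might look like the following, with k-means standing in for clustering algorithm 106 (the disclosure does not fix a particular clustering method):

```python
from sklearn.cluster import KMeans

def initial_clusters(points, n_labels, y=3):
    """Cluster the dimensionally-reduced datapoints into X * Y clusters,
    where X is the number of user classification labels and Y >= 1."""
    km = KMeans(n_clusters=n_labels * y, n_init=10, random_state=0)
    return km.fit_predict(points)
```

With three user classification labels and Y=3, this yields nine clusters for the user to review, mirroring the pre-sorting described above.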
In such embodiments, datapoints representing images 120 that have user classification labels are clustered into the cluster associated with their respective labels, and unlabeled datapoints are clustered into the nearest cluster as a predicted classification label. Computer system 100 generates a visual representation 400 incorporating the predicted classification labels. In various embodiments, this updated visual representation 400 appears on a user interface as discussed in further detail in reference to FIGS. 5D, 5E, and 5F. At blocks 314 and 318, computer system 100 receives user input to apply user classification labels. At block 314, computer system 100 receives user input to apply user classification labels to one or more unlabeled images 120. In various embodiments, this input is received via a menu appearing on a user interface as discussed in further detail in reference to FIG. 5C. Similarly, at block 318, computer system 100 receives user input to apply user classification labels to one or more unlabeled images 120 that have been given predicted classification labels in various embodiments. In various embodiments, this input is received via a menu appearing on user interface 108 as discussed in further detail in reference to FIG. 5F. In various embodiments, such user classification labels include labels for the various images 120 that describe what is contained in the image for use in training the target machine learning algorithm (e.g., as discussed in FIGS. 5A-5F, the image contains one nucleus, two nuclei, or three or more nuclei). In various embodiments, the user classification label can also be a label excluding the image from the training dataset 130. In various embodiments, the user classification label can be that the label is unknown (e.g., the user is unable to identify what label to apply). In various embodiments, images 120 labeled "unknown" or "exclude" are not included in training dataset 130.
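The nearest-cluster assignment that produces predicted classification labels reduces to a nearest-centroid rule, sketched here in plain NumPy under the assumption that each user classification label defines one cluster in the dimensionally-reduced space:

```python
import numpy as np

def predict_by_nearest_cluster(points, labels):
    """Datapoints with user classification labels keep their labels and
    define one centroid per label; each unlabeled datapoint (None) is
    assigned the label of the nearest centroid as its predicted label."""
    points = np.asarray(points, dtype=float)
    names = sorted({lab for lab in labels if lab is not None})
    centroids = {n: points[[i for i, lab in enumerate(labels) if lab == n]].mean(axis=0)
                 for n in names}
    out = []
    for p, lab in zip(points, labels):
        if lab is not None:
            out.append(lab)
        else:
            out.append(min(names, key=lambda n: np.linalg.norm(p - centroids[n])))
    return out
```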
After blocks 314 and 318, if some of the plurality of images 120 remain unlabeled, method 300 loops back to block 306 in various embodiments. Referring now to FIG. 4, an example visual representation 400 of a dimensionally-reduced dataset for a plurality of images 120 is depicted. In the embodiment shown in FIG. 4, visual representation 400 is a two-dimensional rendering that includes a plurality of icons 404 that represent datapoints in the dimensionally-reduced dataset output by dimensionality reduction algorithm 104 grouped into clusters 402. In the embodiment shown in FIG. 4, the datapoints in the dimensionally-reduced dataset have been grouped into 15 clusters, 402a-402o. In various embodiments, this clustering is represented in visual representation 400 using one or more techniques, including but not limited to (a) rendering the visual representation 400 such that icons 404 that correspond to datapoints in the same cluster 402 are positioned close together, (b) rendering the visual representation 400 such that icons 404 that correspond to datapoints in the same cluster 402 are shaded with a same color (e.g., red for cluster 402a, blue for cluster 402b, green for cluster 402c), (c) rendering the visual representation 400 such that icons 404 that correspond to datapoints in the same cluster 402 are encircled by a polygon, or a combination. In various embodiments, the position of the various icons 404 on the two-dimensional embodiment of visual representation 400 shown in FIG. 4 is based on the two dimensions of the dimensionally-reduced dataset (e.g., the X axis coordinate is based on a first dimension and the Y axis coordinate is based on a second dimension).
Similarly, when the dimensionally-reduced dataset has three dimensions, visual representation 400 is a three-dimensional figure with the position of the various icons 404 based on the three dimensions of the dimensionally-reduced dataset (e.g., the X axis coordinate is based on a first dimension, the Y axis coordinate is based on a second dimension, and the Z axis coordinate is based on a third dimension). As discussed herein, the number of clusters may vary according to the number of user classification labels and whether predicted classification labels have been generated. As discussed herein in reference to FIGS. 5E and 5F, an updated visual representation 512 is generated in various instances displaying the dimensionally-reduced dataset output by dimensionality reduction algorithm 104 (this time taking into account the output of second machine learning algorithm 110) grouped into one cluster for each of the user classification labels. In various embodiments, this clustering is represented in updated visual representation 512 using one or more techniques, including but not limited to (a) rendering the visual representation 512 such that icons 404 that correspond to datapoints in the same cluster 402 are positioned closer together, (b) rendering the visual representation 512 such that icons that correspond to datapoints in the same cluster 402 are shaded with a same color, (c) rendering the visual representation 512 such that icons 404 that correspond to datapoints in the same cluster 402 are encircled by a polygon, or a combination. Referring now to FIGS. 5A-5F, various display screens of an exemplary embodiment of a graphical user interface (GUI) 500 operated by user interface 108 in accordance with the disclosed embodiments are illustrated.
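A minimal rendering of an object map along the lines of visual representation 400 could use a scatter plot, with one icon per datapoint positioned by the two reduced dimensions and one color per cluster 402. Matplotlib is an assumption here; any plotting layer would do:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; an interactive backend would be used in a GUI
import matplotlib.pyplot as plt
import numpy as np

def render_object_map(points, cluster_ids, path="object_map.png"):
    """Plot each datapoint at its two reduced coordinates, shading icons
    in the same cluster with the same color, and save the figure.
    Returns the number of cluster groups that were drawn."""
    points = np.asarray(points, dtype=float)
    fig, ax = plt.subplots()
    for c in sorted(set(cluster_ids)):
        mask = np.array([cid == c for cid in cluster_ids])
        ax.scatter(points[mask, 0], points[mask, 1], label=f"cluster {c}")
    ax.legend()
    n_groups = len(ax.collections)  # one scatter collection per cluster
    fig.savefig(path)
    plt.close(fig)
    return n_groups
```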
In various embodiments, GUI 500 is displayed on a display screen (e.g., a monitor, a laptop computer display, a tablet computer display) coupled to computer system 100 directly (e.g., via an HDMI cable) or indirectly (e.g., streamed to the display screen over a WAN and/or LAN). As discussed herein, GUI 500 is useable to present information to a user and receive input from the user (e.g., input classifying images 120) to prepare labeled training dataset 130 for training a target machine learning algorithm. In each screen of GUI 500, a plurality of regions is used to display various information discussed herein. In various embodiments, each screen includes a first region 510 including a two-dimensional visual representation 400 (or an updated visual representation 512). As discussed herein, visual representation 400 represents a dimensionally-reduced dataset of image data that was derived from a plurality of images. In various embodiments, two-dimensional visual representation 400 includes a plurality of icons 404 and indications of clusters 402 within the dataset. In various embodiments, each screen also includes a second region 520 including one or more of the plurality of images 120. In various embodiments, the various screens include a third region 530 to display a list of the identified clusters (and in embodiments the number of images 120 grouped into each). In various other embodiments, the various screens include an updated third region 532 to display a list of the predicted classification labels (and in embodiments the number of images 120 grouped into each). In various embodiments, the various screens include a fourth region 540 to display a list of the user classification labels (and in embodiments the number of images 120 labeled with each).
In FIGS. 5A-5F, first region 510 is disposed on the right side of GUI 500, second region 520 is disposed in the middle of GUI 500, and third region 530 (and updated third region 532) and fourth region 540 are disposed on the left side of GUI 500, but these various regions can be arranged in any order. The various regions in FIGS. 5A-5F are depicted as being part of the same window, but in other embodiments some or all of the regions may be presented as separate windows. Referring again to FIG. 3, the actions of blocks 302-312 are performed prior to the display of the screen depicted in FIG. 5A, the actions of block 314 are performed during the display of the screen depicted in FIG. 5C, the decision at block 310 is made during the display of the screen depicted in FIG. 5D, the actions of block 316 are performed prior to the display of the screen depicted in FIG. 5E, and the actions of block 318 are performed during the display of the screen depicted in FIG. 5F. In various embodiments, first region 510 is useable to display visual representation 400 discussed herein in reference to FIG. 4 or updated visual representation 512 discussed herein in reference to FIGS. 5E and 5F. In various embodiments, each icon 404 of visual representation 400 (or updated visual representation 512) represents one or more datapoints in the dimensionally-reduced dataset. Further, in such embodiments each icon 404 represents one or more of the plurality of images 120 and is selectable to cause the represented images 120 to be displayed in second region 520. In various embodiments, second region 520 is useable to display one or more images 120.
In various embodiments, the images 120 displayed in second region 520 are displayed in response to user selection of portions of first region 510 (e.g., one or more icons 404, causing images 120 represented by the icons 404 to be displayed), portions of third region 530 or updated third region 532 (e.g., a portion of a list corresponding to a particular cluster, causing images 120 associated with that cluster to be displayed), and/or portions of fourth region 540 (e.g., a portion of a list corresponding to the user classification labels, causing images 120 labeled with a particular user classification label to be displayed). Each image 120 displayed in second region 520 is displayed as an object 200 in various embodiments. As discussed herein in reference to FIG. 2, each object is associated with metadata for the image 120 and is selectable. Selecting the image, for example, allows the user to apply a user classification label or to respond to a predicted classification label for the selected image 120 as discussed herein. In various embodiments, third region 530 is useable to display a list of the clusters within the dataset. Similarly, updated third region 532 (also referred to herein as a "fifth region") is useable to display a list of the predicted classification labels by which the remaining unlabeled images 120 are clustered. In either case, each entry of the list is selectable to cause the images 120 associated with that cluster to be displayed in second region 520 in various embodiments. In various embodiments, the lists displayed in third region 530 and updated third region 532 include respective indications of the number of images 120 associated with each cluster. In various embodiments, fourth region 540 is useable to display a list of the user classification labels applied to images 120. In some of such embodiments, each entry of the list is selectable to cause the images 120 labeled with the user classification label to be displayed in second region 520.
In various embodiments, the list displayed in fourth region 540 includes respective indications of the number of images 120 labeled with each user classification label. Referring now to FIG. 5A, a first screen of GUI 500 is shown. Prior to the display of this first screen, images 120 have been received by computer system 100, first machine learning algorithm 102 has derived features from the images 120, the dimensionality of the dataset of the derived features has been reduced by dimensionality reduction algorithm 104, and clusters have been determined by clustering algorithm 106. A visual representation 400 of the dimensionally-reduced dataset is displayed in first region 510. A number of images 120 are displayed in second region 520; however, because no user selections have been received, the images 120 displayed are not associated with a particular cluster (e.g., they may be displayed randomly, they may be displayed in chronological order of when they were captured, or they may be displayed in alphabetical order by name). A list of the clusters is displayed in third region 530 with indications of the number of images 120 associated with each cluster. Finally, a list of the three user classification labels used in this instance is displayed in fourth region 540. In the examples shown in FIGS. 5A-5F, the user classification labels are determined based on the number of cell nuclei present in each image 120: 1N for images 120 including one nucleus, 2N for images 120 including two nuclei, and 3_4N for images 120 including three or more nuclei. As discussed herein, more than three user classification labels may be used, and the criteria for determining which label should be applied to a particular image 120 also vary in various instances. Referring now to FIG. 5B, a user selection of one or more icons 404 in cluster 402b has been received. In response, images 120 associated with cluster 402b are displayed in second region 520 and the portion of the list in third region 530 associated with cluster 402b is highlighted.
Referring now to FIG. 5C, in various embodiments, user classification labels are received from the user via a menu 522 displayed in the GUI 500. In various embodiments, menu 522 includes indications of each user classification label, which include the various labels for training dataset 130 as well as additional labels such as a label to exclude one or more images 120 from training dataset 130. In the example shown in FIG. 5C, menu 522 includes indications of the three user classification labels 1N, 2N, and 3_4N as well as commands to "Move to Unknown" and "Move to Exclude" to mark the selected images 120 accordingly. As shown in FIG. 5C, a number of images 120 are highlighted in second region 520, and the user input to menu 522 will apply the user classification label or command to the highlighted images 120. Referring now to FIG. 5D, user input applying user classification labels to various images 120 has been received. As shown in fourth region 540, 170 images have been labeled 1N, 110 images have been labeled 2N, and 146 images 120 have been labeled 3_4N. In the embodiment shown in FIG. 5D, the user can enter a command to predict classification labels by clicking on button 524. In response to this command, predicted classification labels are assigned as discussed in block 316 of method 300. Alternatively, a prediction of classification labels is made automatically after a threshold number of images 120 have been labeled. Referring now to FIG. 5E, GUI 500 now includes updated visual representation 512 and updated third region 532. As discussed herein, the remaining unlabeled images 120 have been assigned predicted classification labels and are clustered by predicted classification labels. Accordingly, visual representation 512 includes four clusters: one associated with each user classification label and one for images for which a classification label has not been (or for whatever reason cannot be) determined.
Updated third region 532 includes a list of the predicted classification labels: Pred 1N, Pred 2N, Pred 3_4N, and Unknown, as well as indications of the number of images 120 with each predicted label. In various embodiments, updated visual representation 512 also includes datapoints associated with the images 120 that have user classification labels. The icons 404 representing these labeled images 120 may be visually different from icons 404 representing unlabeled images, including but not limited to being a different color (e.g., icons 404 representing labeled images 120 are a darker color than icons 404 representing unlabeled images 120 but are in the same color family, such as dark green and light green or dark blue and light blue) or being different shapes (e.g., circular icons 404 for unlabeled images and star-shaped icons for labeled images 120). As with visual representation 400 discussed herein, the icons 404 of updated visual representation 512 are selectable to cause the display of the images 120 represented by the selected icon(s) 404. In the screen shown in FIG. 5E, the Pred 1N cluster is selected. In response to this selection, images 120 in the Pred 1N cluster are displayed in second region 520. Referring now to FIG. 5F, in various embodiments, the user responds to the predicted classification labels by commands that are received from the user via a menu 522 displayed in the GUI 500. In various embodiments, the menu 522 allows a user to accept the predicted classification label as a user classification label by selecting the indication of the user classification label corresponding to the predicted classification label in menu 522, or to reject the predicted classification label by selecting a different indication in menu 522. In various embodiments, menu 522 includes indications of each user classification label, which include the various labels for training dataset 130 as well as additional labels such as a label to exclude one or more images 120 from training dataset 130.
In the example shown in FIG. 5F, menu 522 includes indications of the three user classification labels 1N, 2N, and 3_4N as well as commands to "Move to Unknown" and "Move to Exclude" to mark the selected images 120 accordingly. As shown in FIG. 5F, a number of images 120 are highlighted in second region 520, and the user input to menu 522 will apply the user classification label or command to the highlighted images 120. Referring now to FIG. 6, a flowchart illustrating an embodiment of a training dataset creation method 600 is shown. In various embodiments, the various actions associated with method 600 are performed with computer system 100. At block 602, computer system 100 receives a dataset of image-derived features for each of a plurality of images 120, wherein the image-derived features are determined by using a first machine learning algorithm 102 to analyze the plurality of images 120. At block 604, computer system 100 generates a dimensionally-reduced dataset from the dataset of image-derived features using a dimensionality reduction algorithm 104. At block 606, computer system 100 identifies a plurality of clusters of datapoints within the dimensionally-reduced dataset using a clustering algorithm 106. At block 608, computer system 100 generates a visual representation 400 of the datapoints as icons 404 grouped by cluster 402. Each icon 404 represents one or more particular datapoints and is selectable to cause the display of the one or more images 120 associated with the one or more particular datapoints. At block 610, computer system 100 receives a selection of one or more of the icons 404. At block 612, computer system 100 causes the display of the images 120 associated with the one or more particular datapoints represented by the one or more selected icons 404. At block 614, computer system 100 receives a user classification label for at least one of the displayed images.
At block 616, computer system 100 predicts classification label(s) for unlabeled images 120 and receives a user response to the predicted classification label(s). Referring now to FIG. 7, a flowchart illustrating an embodiment of a training dataset creation method 700 is shown. In various embodiments, the various actions associated with method 700 are performed with computer system 100. At block 702, computer system 100 causes a user interface (e.g., GUI 500) for preparing a labeled training dataset 130 for training a target machine learning algorithm to classify images to be displayed on a user device. The user interface includes a first region 510 including a two-dimensional visual representation 400 of a dimensionally-reduced dataset of image data that was derived from a plurality of images 120. The two-dimensional visual representation 400 includes a plurality of icons 404 and indications of clusters 402 within the dataset. The user interface also includes a second region 520 that includes one or more of the plurality of images 120. At block 704, computer system 100 receives user input applying user classification labels to one or more of the images 120 displayed in the second region 520. Exemplary Computer System Turning now to FIG. 8, a block diagram of an exemplary computer system 800, which may implement the various components of computer system 100, is depicted. Computer system 800 includes a processor subsystem 880 that is coupled to a system memory 820 and I/O interface(s) 840 via an interconnect 860 (e.g., a system bus). I/O interface(s) 840 is coupled to one or more I/O devices 850. Computer system 800 may be any of various types of devices, including, but not limited to, a server system, personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, tablet computer, handheld computer, workstation, network computer, or a consumer device such as a mobile phone, music player, or personal data assistant (PDA).
Although a single computer system 800 is shown in FIG. 8 for convenience, system 800 may also be implemented as two or more computer systems operating together. Processor subsystem 880 may include one or more processors or processing units. In various embodiments of computer system 800, multiple instances of processor subsystem 880 may be coupled to interconnect 860. In various embodiments, processor subsystem 880 (or each processor unit within 880) may contain a cache or other form of on-board memory. System memory 820 is usable to store program instructions executable by processor subsystem 880 to cause system 800 to perform various operations described herein. System memory 820 may be implemented using different physical memory media, such as hard disk storage, floppy disk storage, removable disk storage, flash memory, random access memory (RAM-SRAM, EDO RAM, SDRAM, DDR SDRAM, RAMBUS RAM, etc.), read only memory (PROM, EEPROM, etc.), and so on. Memory in computer system 800 is not limited to primary storage such as memory 820. Rather, computer system 800 may also include other forms of storage such as cache memory in processor subsystem 880 and secondary storage on I/O devices 850 (e.g., a hard drive, storage array, etc.). In some embodiments, these other forms of storage may also store program instructions executable by processor subsystem 880. I/O interfaces 840 may be any of various types of interfaces configured to couple to and communicate with other devices, according to various embodiments. In one embodiment, I/O interface 840 is a bridge chip (e.g., Southbridge) from a front-side to one or more back-side buses. I/O interfaces 840 may be coupled to one or more I/O devices 850 via one or more corresponding buses or other interfaces.
Examples of I/O devices 850 include storage devices (hard drive, optical drive, removable flash drive, storage array, SAN, or their associated controller), network interface devices (e.g., to a local or wide-area network), or other devices (e.g., graphics, user interface devices, etc.). In one embodiment, computer system 800 is coupled to a network via a network interface device 850 (e.g., configured to communicate over WiFi, Bluetooth, Ethernet, etc.). Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of this disclosure. The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.
11861515 | DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS Reference will now be made in detail to several exemplary embodiments, including those illustrated in the accompanying drawings. Whenever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Embodiments disclosed herein are directed, among other things, to systems and methods that can determine the propensity of an entity (e.g., a person, a household, or a company) to take a specified action. For example, a specific action can involve determining the propensity that a customer will leave a supplier during a given time period (e.g., churn). Factors that can affect the churn rate include customer dissatisfaction, cheaper and/or better offers from the competition, more successful sales and/or marketing by the competition, or reasons having to do with the customer life cycle. If a supplier can receive an indication that a customer is likely to churn, the supplier can take one or more actions in order to keep the customer. The embodiments disclosed herein can assist with providing that indication. For example, the systems and methods can access one or more data sources, the one or more data sources including information associated with the entity, form a record associated with the entity by integrating the information from the one or more data sources, generate, based on the record, one or more features associated with the entity, process the one or more features to determine the propensity of the entity to take the specified action, and output the propensity. The operations, techniques, and/or components described herein are implemented by a computer system, which can include one or more special-purpose computing devices. The special-purpose computing devices can be hard-wired to perform the operations, techniques, and/or components described herein.
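The pipeline summarized above (integrate entity records, generate features, determine and output the propensity) can be sketched as follows. Logistic regression and scikit-learn are assumptions for illustration; the disclosure leaves the model choice open:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def churn_propensity(history_features, churned, entity_features):
    """Fit a model on historical entity records (feature rows plus known
    churn outcomes), then output, for each current entity, the propensity
    (a probability) that it will take the specified action, e.g. churn."""
    model = LogisticRegression()
    model.fit(np.asarray(history_features, dtype=float), churned)
    return model.predict_proba(np.asarray(entity_features, dtype=float))[:, 1]
```

A supplier could rank entities by the returned propensity and target retention actions at the highest-scoring customers.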
The special-purpose computing devices can include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the operations, techniques, and/or components described herein. The special-purpose computing devices can include one or more hardware processors programmed to perform such features of the present disclosure pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices can combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques and other features of the present disclosure. The special-purpose computing devices can be desktop computer systems, portable computer systems, handheld devices, networking devices, or any other device that incorporates hard-wired and/or program logic to implement the techniques and other features of the present disclosure. The one or more special-purpose computing devices can be generally controlled and coordinated by operating system software, such as iOS, Android, Blackberry, Chrome OS, Windows XP, Windows Vista, Windows 7, Windows 8, Windows Server, Windows CE, Unix, Linux, SunOS, Solaris, VxWorks, or other compatible operating systems. In other embodiments, the computing device can be controlled by a proprietary operating system. Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide user interface functionality, such as a graphical user interface (“GUI”), among other things. By way of example,FIG.1is a block diagram that illustrates an implementation of a computer system100, which, as described above, can comprise one or more electronic devices.
Computer system100includes a bus102or other communication mechanism for communicating information, and one or more hardware processors104(denoted as processor104for purposes of simplicity), coupled with bus102for processing information. One or more hardware processors104can be, for example, one or more microprocessors. Computer system100also includes a main memory106, such as a random access memory (RAM) or other dynamic storage device, coupled to bus102for storing information and instructions to be executed by one or more processors104. Main memory106also can be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor104. Such instructions, when stored in non-transitory storage media accessible to one or more processors104, render computer system100into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system100further includes a read only memory (ROM)108or other static storage device coupled to bus102for storing static information and instructions for processor104. A storage device110, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus102for storing information and instructions. Computer system100can be coupled via bus102to a display112, such as a cathode ray tube (CRT), an LCD display, or a touchscreen, for displaying information to a computer user. An input device114, including alphanumeric and other keys, is coupled to bus102for communicating information and command selections to one or more processors104. Another type of user input device is cursor control116, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to one or more processors104and for controlling cursor movement on display112. 
The input device typically has two degrees of freedom in two axes, a first axis (for example, x) and a second axis (for example, y), that allows the device to specify positions in a plane. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor. Computer system100can include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the one or more computing devices. This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, Lua, C, and C++. A software module can be compiled and linked into an executable program, installed in a dynamic link library, or written in an interpreted programming language such as, for example, BASIC, Perl, Python, or Pig. It will be appreciated that software modules can be callable from other modules or from themselves, and/or can be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices can be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). 
Such software code can be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions can be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules can be comprised of connected logic units, such as gates and flip-flops, and/or can be comprised of programmable units, such as programmable gate arrays or processors. The modules or computing device functionality described herein are preferably implemented as software modules, but can be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage. Computer system100can implement the techniques and other features described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the electronic device causes or programs computer system100to be a special-purpose machine. According to some embodiments, the techniques and other features described herein are performed by computer system100in response to one or more processors104executing one or more sequences of one or more instructions contained in main memory106. Such instructions can be read into main memory106from another storage medium, such as storage device110. Execution of the sequences of instructions contained in main memory106causes one or more processors104to perform the process steps described herein. In alternative embodiments, hard-wired circuitry can be used in place of or in combination with software instructions. The term “non-transitory media” as used herein refers to any media storing data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media can comprise non-volatile media and/or volatile media. 
Non-volatile media includes, for example, optical or magnetic disks, such as storage device110. Volatile media includes dynamic memory, such as main memory106. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, a register memory, a processor cache, and networked versions of the same. Non-transitory media is distinct from, but can be used in conjunction with, transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus102. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media can be involved in carrying one or more sequences of one or more instructions to one or more processors104for execution. For example, the instructions can initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system100can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus102. Bus102carries the data to main memory106, from which processor104retrieves and executes the instructions. The instructions received by main memory106can optionally be stored on storage device110either before or after execution by one or more processors104.
Computer system100can also include a communication interface118coupled to bus102. Communication interface118can provide a two-way data communication coupling to a network link120that is connected to a local network122. For example, communication interface118can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface118can be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links can also be implemented. In any such implementation, communication interface118can send and receive electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. Network link120can typically provide data communication through one or more networks to other data devices. For example, network link120can provide a connection through local network122to a host computer124or to data equipment operated by an Internet Service Provider (ISP)126. ISP126in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”128. Local network122and Internet128both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link120and through communication interface118, which carry the digital data to and from electronic device110, are example forms of transmission media. Computer system100can send messages and receive data, including program code, through the network(s), network link120and communication interface118. In the Internet example, a server130might transmit a requested code for an application program through Internet128, ISP126, local network122and communication interface118. 
The received code can be executed by one or more processors104as it is received, and/or stored in storage device110, or other non-volatile storage for later execution. FIG.2is a flowchart representing an exemplary method200for determining the propensity of an entity to take a specified action. While the flowchart discloses the following steps in a particular order, it is appreciated that at least some of the steps can be moved, modified, or deleted where appropriate, consistent with embodiments of the present disclosure. In some embodiments, method200can be performed in full or in part by a computer system (e.g., computer system100). It is appreciated that some of these steps can be performed in full or in part by other systems. Referring toFIG.2, at step210, the computer system can access one or more data sources that include information associated with the entity. The one or more data sources can be stored locally at the computer system and/or at one or more remote servers (e.g., such as a remote database), or at one or more other remote devices. In some embodiments, the information in the data sources can be stored in one or more multidimensional tables. By way of example, information of a first type (e.g., bill payment amount) associated with the entity, (e.g., a household), can be stored in a first multidimensional table and information of a second type (e.g., automobile type) associated with the entity can be stored in a second multidimensional table. In some embodiments a table can contain information associated with a single entity. In other embodiments, a table can store information associated with a plurality of entities. For example, each row in the table can correspond to a different entity (e.g., Household #1, Household #2, etc.) and each column in the table can correspond to a payment amount. In some embodiments, the information stored in the table can include entries associated with a temporal period. 
For example, a table can store a bill payment date for each bill payment amount. The information can be stored as a continuous value (e.g., $800 as a bill payment amount), as a categorical value (e.g., “Sedan” or “Coupe” as an automobile type), as a textual value, or as any other type of value. In some embodiments, a table can be stored in either a row-oriented database or a column-oriented database. For example, a row in a row-oriented table can contain information associated with an entity (e.g., Household #1) and data in the row can be stored serially such that information associated with the entity can be accessed in one operation. In some embodiments, the computer system can access the one or more data sources periodically (e.g., once a week, once a month, etc.). The computer system can access the one or more data sources based on the one or more data sources being updated (e.g., a new entry, such as a bill payment amount, is added to a table). In some embodiments, the computer system can access the one or more data sources responsive to an input received from the user. The user input can identify the entity (e.g., Household #5) for which information is requested. In some embodiments, the user input can identify a category or class of entities. For example, the user input can identify a class of entities that are all consumers of a specified provisioning entity (e.g., insurance company), the user input can identify entities that are located within a specified geographic region (e.g., all households within the state of Illinois), or the user input can identify any other category of entities (e.g., all households with an income over $100,000). In response to the user input, the computer system can access the one or more data sources including information associated with the entities. In some embodiments, method200can be performed periodically (e.g., once a week, once a month, etc.).
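The table layout described above — one information type per table, with dated entries per entity — can be sketched in plain Python. This is a minimal illustration under assumed names and values, not the patent's implementation; the entity names, the sample data, and the `entities_matching` helper are all invented for the example:

```python
from datetime import date

# Hypothetical data sources: one table per information type, keyed by
# entity, each entry pairing a date with a value (as described for the
# bill-payment table). Names and numbers are invented for illustration.
bill_amounts = {
    "Household #1": [(date(2014, 1, 1), 800), (date(2014, 2, 1), 600)],
    "Household #2": [(date(2014, 1, 1), 450)],
}
incomes = {
    "Household #1": [(date(2014, 1, 1), 80_000)],
    "Household #2": [(date(2014, 1, 1), 120_000)],
}

def entities_matching(table, predicate):
    """Select entities whose most recent value satisfies a predicate
    (e.g., 'all households with an income over $100,000')."""
    return [entity for entity, entries in table.items()
            if predicate(max(entries)[1])]   # max() orders by date first

print(entities_matching(incomes, lambda v: v > 100_000))
```

Because each entry pairs a date with a value, the most recent value for an entity falls out of a simple `max` over the dated tuples, which is one way to support the dated-entry queries described above.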
In some embodiments, method200can be performed whenever the one or more data sources are accessed. At step220, the computer system can form a record including all information from the one or more data sources associated with the entity. In some embodiments, the record can be formed by integrating the information that is associated with the entity from the one or more data sources. The record can contain a multitude of information related to the entity. For example, the record can contain all information from the one or more data sources associated with a household (e.g., number of members in household, age of each member of the household, number of automobiles, income, monthly bill amounts for each automobile, types of automobiles, etc.). In some embodiments, the record can be stored as a cogroup (e.g., the cogroup shown inFIG.4). In some embodiments, the record can be stored in either a row-oriented database or a column-oriented database. For example, a row in a row-oriented record can be associated with a data source (e.g., bill payment amount) and data in the row can be stored serially such that data associated with that data source can be accessed in one operation. At step230, the computer system can filter the record for information associated with the specified action. For example, the specified action can be churn (e.g., cancellation of a subscription) and the computer system can filter the record for information related to churn. In some embodiments, the computer system can provide context for the specified action. In some embodiments, the computer system can determine whether the specified action will likely occur within a specified temporal period (e.g., one month). The computer system can filter out all information associated with a time that is outside (e.g., before or after) the specified temporal period. In some embodiments, the computer system can determine the propensity for the specified action based on only recent events.
For example, the computer system can filter out information associated with a time before the specified time period (e.g., stale or less relevant information). In some embodiments, each record can be filtered in a slightly different way. The record can be filtered according to a user input specifying an activity or temporal period. In some embodiments, the record can be filtered automatically based on a presetting (e.g., the computer can be configured to filter out all information that is more than one year old). At step240, the computer system can generate, based on the record, one or more features associated with the entity. A feature can be any discernable way of sorting or classifying the record (e.g., average value, most recent value, most common value, etc.). In some embodiments, the computer system can generate key value pairs, wherein each key value pair contains a feature and a value. For example, the computer system can generate features such as “average bill payment amount”, “average income”, “average number of automobiles”, etc. and corresponding values such as “$670”, “$73K”, “2.3 cars”, etc. In some embodiments, features can be associated with a time value. For example, the computer system can generate features for a specified temporal period (e.g., features can be based only on the most recent values). Feature values can be represented as a continuous value (e.g., $670), as a categorical value (e.g., “Sedan” or “Coupe”), as a textual value, or as any other type of value. In some embodiments, feature values can be classified as weighted values. For example, a household income of $73,000 can be represented as a weighted value of {0.27 0}, {0.73 100000}. At step250, the computer system can process the one or more features to determine the propensity of the entity to take the specified action. In some embodiments, the propensity can be determined by applying a trained model, such as the model described in greater detail inFIG.3.
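Steps 230 and 240 — filtering the record to a specified temporal period and generating key value feature pairs — can be sketched as follows. This is a hypothetical illustration only: the record contents, the feature names, and the `filter_record` and `generate_features` helpers are assumptions, not the claimed implementation:

```python
from datetime import date

# Hypothetical record for one household: each row pairs dates with values,
# mirroring the cogroup layout described above. Values are illustrative.
record = {
    "bill_amount": [(date(2013, 6, 1), 900), (date(2014, 1, 1), 800),
                    (date(2014, 2, 1), 600), (date(2014, 3, 1), 600)],
    "income": [(date(2014, 1, 1), 80_000), (date(2014, 2, 1), 70_000)],
}

def filter_record(record, start, end):
    """Step 230: keep only entries inside the specified temporal period."""
    return {row: [(d, v) for d, v in entries if start <= d <= end]
            for row, entries in record.items()}

def generate_features(record):
    """Step 240: emit key value feature pairs, e.g. averages and most
    recent values, from the filtered record."""
    features = {}
    for row, entries in record.items():
        values = [v for _, v in entries]
        if values:
            features[f"average_{row}"] = sum(values) / len(values)
            features[f"latest_{row}"] = entries[-1][1]
    return features

filtered = filter_record(record, date(2014, 1, 1), date(2014, 3, 31))
print(generate_features(filtered))
```

Note how the stale June 2013 bill amount is dropped before the averages are taken, matching the "filter out information associated with a time before the specified time period" behavior described above.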
The input to the model can be key value pairs of the one or more features associated with the entity and the specified action, and the output of the model can be the propensity of the entity to take the specified action. In some embodiments, processing the one or more features associated with the entity can result in a multitude of useful insights regarding the features that influence the propensity of the entity to take the specified action. Such insights can include, for example, the features that are most influential on the propensity of the entity to take the specified action (e.g., change in income, etc.). At step260, the computer system can output the propensity. In some embodiments, the computer system can output the propensity as a continuous value, such as a number or percentage (e.g., 80 or 80%) or as a categorical value (e.g., “low”, “medium”, or “high”). In some embodiments, the computer system can generate a user interface, such as the user interfaces described in greater detail inFIGS.5and6for displaying the propensity. In some embodiments, the computer system can output a plurality of propensities for a plurality of entities. The computer system can output the plurality of propensities as a separate file (e.g., a text file or an Excel file) or as a table. FIG.3shows a flowchart representing an exemplary method300for creating a model to determine the propensity of an entity to take a specified action, consistent with embodiments of the present disclosure. While the flowchart discloses the following steps in a particular order, it is appreciated that at least some of the steps can be moved, modified, or deleted where appropriate, consistent with embodiments of the present disclosure. In some embodiments, method300can be performed in full or in part by a computer system (e.g., computer system100). It is appreciated that some of these steps can be performed in full or in part by other systems.
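One way to picture steps 250 and 260 — processing feature key value pairs through a trained model and outputting the propensity as a percentage or a category — is a simple logistic scoring function. The weights and bias below are invented purely for illustration; a real model would be produced by the training method of FIG.3, and the feature names are assumptions:

```python
import math

# Hypothetical trained model: one weight per feature plus a bias. These
# numbers are invented for illustration, not learned from any data.
weights = {"average_bill_amount": 0.004, "latest_income": -0.00003}
bias = -1.0

def propensity(features):
    """Step 250: map feature key value pairs to a propensity in [0, 1]."""
    score = bias + sum(w * features.get(name, 0.0)
                       for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-score))   # logistic squashing

def categorize(p):
    """Step 260: output a categorical value instead of a number."""
    return "low" if p < 0.33 else "medium" if p < 0.66 else "high"

p = propensity({"average_bill_amount": 670, "latest_income": 70_000})
print(f"{p:.0%}", categorize(p))
```

The same propensity can thus be emitted either as a continuous percentage or as one of the “low”/“medium”/“high” categories mentioned above.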
Referring toFIG.3, at step310, the computer system can access one or more data sources that include information associated with the plurality of entities. The one or more data sources can be stored locally at the computer system and/or at one or more remote servers (e.g., such as a remote database), or at one or more other remote devices. In some embodiments, the information in the data sources can be stored in one or more multidimensional tables. By way of example, information of a first type (e.g., bill payment amount) associated with the plurality of entities, (e.g., households), can be stored in a first multidimensional table and information of a second type (e.g., automobile type) associated with the entities can be stored in a second multidimensional table. In some embodiments, a plurality of tables can contain information associated with the plurality of entities, wherein each table contains information associated with a single entity. In other embodiments, a table can store information associated with a plurality of entities. For example, each row in the table can correspond to a different entity (e.g., Household #1, Household #2, etc.) and each column in the table can correspond to a payment amount. In some embodiments, the information stored in a table can include entries associated with a temporal period. For example, a table can store a bill payment date for each bill payment amount. The information can be stored as a continuous value (e.g., $800 as a bill payment amount), as a categorical value (e.g., “Sedan” or “Coupe” as an automobile type), as a textual value, or as any other type of value. In some embodiments, a table can be stored in either a row-oriented database or a column-oriented database. For example, a row in a row-oriented table can contain information associated with an entity (e.g., Household #1) and data in the row can be stored serially such that information associated with the entity can be accessed in one operation.
In some embodiments, the computer system can access the one or more data sources periodically (e.g., once a week, once a month, etc.). In other embodiments, the computer system can access the one or more data sources based on the one or more data sources being updated (e.g., a new entry, such as a bill payment amount, is added to a table). In some embodiments, the computer system can access the one or more data sources responsive to an input received from the user. In some embodiments, the user input can specifically identify the plurality of entities (e.g., Household #1-#10,000) for use in generating the model. In some embodiments, the user input can identify a category or class of entities. For example, the user input can identify a class of entities that are all consumers of a specified provisioning entity (e.g., insurance company), the user input can identify entities that are located within a specified geographic region (e.g., all households within the state of Illinois), or the user input can identify any other category of entities (e.g., all households with an income over $100,000). In response to a user input, the computer system can access the one or more data sources including information associated with the plurality of entities. At step320, the computer system can form a plurality of records including information from the one or more data sources associated with the plurality of entities, each record being associated with an entity. In some embodiments, a record of the plurality of records can be formed by integrating information from the one or more data sources that is associated with an entity of the plurality of entities. The record can contain a multitude of information related to the entity. For example, the record can contain all information from the one or more data sources associated with a household (e.g., number of members in household, number of automobiles, income, monthly bill amounts for each automobile, etc.).
In some embodiments, the record can be stored as a cogroup (e.g., the cogroup shown inFIG.4). In some embodiments, the record can be stored in either a row-oriented database or a column-oriented database. For example, a row in a record can be associated with a data source (e.g., bill payment amount) and data in the row can be stored serially such that data associated with that data source can be accessed in one operation. At step330, the computer system can filter the plurality of records for information associated with the specified action. For example, the specified action can be churn (e.g., cancellation or non-renewal of a subscription) and the computer system can filter the record for information related to churn. In some embodiments, the computer system can provide context for (e.g., frame) the specified action. In some embodiments, the computer system can determine whether the specified action will occur within a specified temporal period (e.g., one month). The computer system can filter out all information associated with a time that is outside (e.g., before or after) the specified temporal period. In some embodiments, the computer system can determine the propensity for the specified action based on only recent information. For example, the computer system can filter out information associated with a time before the specified temporal period (e.g., stale or less relevant information). In some embodiments, each record can be filtered in a slightly different way. A record can be filtered according to a user input specifying an activity or temporal period. In some embodiments, the record can be filtered automatically based on a presetting (e.g., the computer can be configured to filter out all information that is more than one year old). The computer system can frame the record by associating a label with the record. In some embodiments, the label can represent whether the entity took the specified action within the specified temporal period. 
For example, the computer system can associate a label of “1” or “true” if the entity took the specified action within the specified temporal period. By way of example, in the context of the cancellation of a subscription, the computer system can keep data from time period A to B (e.g., the specified temporal period) and determine whether the entity cancelled the subscription within a second time period, T. In this example, if the entity cancelled the subscription in time period T, the computer system can associate a label with the record indicating that the entity took the specified action. At step340, the computer system can create, for each record, a labelled example by generating one or more features associated with an entity of the plurality of entities. A feature can be any discernable way of sorting or classifying the record (e.g., average value, most recent value, most common value, etc.). In some embodiments, the computer system can generate key value pairs, wherein each key value pair contains a feature and a value. For example, the computer system can generate features such as “average bill payment amount”, “average income”, “average number of automobiles”, etc. and corresponding values such as “$670”, “$73K”, “2.3 cars”, etc. In some embodiments, features can be associated with a time value. For example, the computer system can generate features for a specified temporal period (e.g., features can be based only on the most recent values). Feature values can be represented as a continuous value (e.g., $670), as a categorical value (e.g., “Sedan” or “Coupe”), as a textual value, or as any other type of value. In some embodiments, feature values can be classified as weighted values. For example, a household income of $73,000 can be represented as a weighted value of {0.27 0}, {0.73 100000}. In some embodiments, the labelled example can include the key value feature pairs and the record label (e.g., whether the entity took the specified action).
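The framing and labelling described above (steps 330 and 340) can be sketched as follows. The window dates, the `cancellations` table, and the `make_labeled_example` helper are assumptions introduced only to illustrate the A-to-B observation window and the label window T:

```python
from datetime import date

# Hypothetical framing of one record: features come from the observation
# window A..B, and the label records whether the entity took the specified
# action (e.g., cancelled a subscription) in the later window T.
observation_window = (date(2014, 1, 1), date(2014, 3, 31))   # period A..B
label_window = (date(2014, 4, 1), date(2014, 4, 30))         # period T

# Hypothetical action dates: when each entity cancelled, if ever.
cancellations = {"Household #1": date(2014, 4, 15)}

def make_labeled_example(entity, features):
    """Step 340: pair key value feature pairs with a 0/1 record label."""
    action = cancellations.get(entity)
    took_action = (action is not None
                   and label_window[0] <= action <= label_window[1])
    return {"features": features, "label": 1 if took_action else 0}

example = make_labeled_example("Household #1", {"average_bill_amount": 670})
print(example)
```

An entity with no cancellation date, or one whose cancellation falls outside period T, would receive a label of 0 under this framing.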
At step350, the computer system can select a subset of the plurality of labelled examples to train a model. In some embodiments, the subset can be created by randomly sampling the plurality of labelled examples. A random sample can allow for broader generalization of the model created at step360. In some embodiments, the user can select the subset of labelled examples. For example, the user can select all entities with a particular feature (e.g., all households with at least 2 cars). In some embodiments, the subset can be created by sampling labelled examples with a wide range of values for features that are known to be more important (e.g., change in income). At step360, the computer system can train a model using the subset of labelled examples. For example, the model can be trained by generalizing a function that maps inputs (e.g., the one or more features) to outputs (e.g., the label, such as whether the specified action occurred). In some embodiments, the model can perform regressions for each feature simultaneously. In some embodiments, the model can be trained by a hyperparameter optimization algorithm. In some embodiments, the hyperparameter optimization algorithm can perform a grid search through a hyperparameter space for the optimal hyperparameters. In some embodiments, the hyperparameter algorithm can perform a random search through the hyperparameter space. The computer system can evaluate the hyperparameters against a holdout set of labelled examples. For example, the computer system can apply the model trained by hyperparameter optimization to the holdout set. In some embodiments, the computer system can retrain the model with different hyperparameters if a particular attribute (e.g., accuracy, area under the curve, log-likelihood, F1-score, Top N, etc.) of the model does not exceed a predetermined threshold. In some embodiments, the computer system can continue to retrain the model until it obtains hyperparameters that exceed the threshold value. 
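The grid search and holdout evaluation described for step 360 can be sketched with a toy one-feature threshold classifier, where the single hyperparameter is the decision threshold. The training data, holdout set, and hyperparameter grid below are invented for illustration and are not the patent's model:

```python
# Toy labelled examples: one feature ('avg_bill') and a 0/1 label, with
# label 1 whenever the feature exceeds 700. Values are invented.
train = [({"avg_bill": x}, 1 if x > 700 else 0) for x in range(400, 1000, 50)]
holdout = [({"avg_bill": 725}, 1), ({"avg_bill": 500}, 0)]

def accuracy(threshold, examples):
    """Score one hyperparameter setting: fraction of correct predictions."""
    correct = sum((1 if f["avg_bill"] > threshold else 0) == label
                  for f, label in examples)
    return correct / len(examples)

# Grid search the hyperparameter space on the training subset, then
# evaluate the winning setting against the holdout set; retraining would
# repeat this loop with new candidate hyperparameters.
grid = range(450, 950, 50)
best = max(grid, key=lambda t: accuracy(t, train))
print(best, accuracy(best, holdout))
```

A random search would simply sample thresholds from the same space instead of sweeping the grid, and the holdout accuracy plays the role of the attribute (accuracy, AUC, etc.) compared against the predetermined threshold described above.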
In some embodiments, the computer system can train the model a predetermined number of times (e.g., 10). The computer system can evaluate the trained models against a holdout set and select the model with the most favorable attributes (e.g., accuracy, area under the curve, log-likelihood, F1-score, Top N, etc.). At step370, the computer system can output the model. In some embodiments, the model can be outputted to a user for future use. For example, a user can use the model to determine the propensity of an entity to take a specified action. In other embodiments, the computer system can output the model to be stored locally or to be transmitted to an external database. In some embodiments, the computer system can output the model for use in another method, such as the method described inFIG.2, to determine the propensity of an entity to take a specified action. In some embodiments, the computer system can output confidence levels for the model. For example, the computer system can output the particular attribute (e.g., accuracy, area under the curve, log-likelihood, F1-score, Top N, etc.) of the model with respect to the examples in the holdout set. FIG.4provides an exemplary use case scenario for determining a propensity of an entity to take a specified action applied to an exemplary data structure. While the flowchart discloses the following steps in a particular order, it is appreciated that at least some of the steps can be moved, modified, or deleted where appropriate, consistent with embodiments of the present disclosure. In some embodiments, the use case scenario shown inFIG.4can be performed by a computer system (e.g., computer system100). It is appreciated that some of these steps can be performed in full or in part by other systems. Referring toFIG.4, one or more data tables410acquired from one or more data sources can include information associated with the entity. 
The one or more data tables 410 can be stored locally at the computer system and/or at one or more remote servers (e.g., such as a remote database), or at one or more other remote devices. In some embodiments, the information in the data tables can be stored in one or more multidimensional tables. By way of example, as shown in FIG. 4, information of a first type (e.g., bill payment amount) associated with the entity (e.g., a household) can be stored in a first multidimensional table 410 and information of a second type (e.g., income or number of cars) associated with the entity can be stored in a second multidimensional table 410. In some embodiments, a table can contain information associated with a single entity. For example, Bill Amount table 410 shows the most recent bill payment amounts associated with the entity in this exemplary scenario. In other embodiments (not shown), a table can store information associated with a plurality of entities. For example, each row in the table can correspond to a different entity (e.g., Household #1, Household #2, etc.) and each column in the table can correspond to a payment amount. In some embodiments, the information stored in the table can include entries associated with a temporal period. For example, a table can store a bill payment date for each bill payment amount. As shown in FIG. 4, Bill Payment Table 410 can store dates in the first column (e.g., 1/1/14, 2/1/14, and 3/1/14). Each bill payment date can be associated with the bill payment amount. For example, Bill Payment Table 410 shows that an amount of $800 was billed to the household on Jan. 1, 2014. The information can be stored as a continuous value (e.g., $800 as a bill payment amount), as a categorical value (e.g., "Sedan" or "Coupe" as an automobile type), as a textual value, or as any other type of value. In some embodiments, a table can be stored in either a row-oriented database or a column-oriented database.
For example, a row in a row-oriented table can contain information associated with an entity (e.g., Household #1) and data in the row can be stored serially such that information associated with the entity can be accessed in one operation. The computer system can form (420) a record 430 including some or all information from the one or more data sources associated with the entity. In some embodiments, record 430 can be formed (420) by integrating the information from the one or more data sources that is associated with the entity. Record 430 can contain a multitude of information related to the entity. For example, record 430 can contain all information from the one or more data sources associated with a household (e.g., number of members in household, number of automobiles, income, monthly bill amounts for each automobile, etc.). In some embodiments, record 430 can be stored as a cogroup with each row of the cogroup associated with a different category of information. In some embodiments, record 430 can be stored in either a row-oriented database or a column-oriented database. For example, a row in a row-oriented record can be associated with a data source (e.g., bill payment amount) and data in the row can be stored serially such that data associated with that data source can be accessed in one operation. As shown in FIG. 4, the "Bill Amount" is stored as a row in record 430. Bill amounts $800, $600, and $600 can be stored serially such that all of the payment amounts can be accessed in one operation. Similarly, "Income" and "Number of Cars" are stored in separate rows in record 430, and information from these sources (e.g., {$80K, $70K, $70K} and {3, 2, 2}) can also be accessed in one operation. In some embodiments, the computer system can filter record 430 for information associated with the specified action (not shown). For example, the specified action can be churn (e.g., cancellation of a subscription) and the computer system can filter record 430 for information related to churn.
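The formation (420) of record 430 as a cogroup, with one row per data source whose values are stored serially, can be sketched as follows; the table layout and the helper name are illustrative assumptions.

```python
def form_record(tables):
    """Integrate the per-entity tables 410 into a record 430 stored as a
    cogroup: one row per data source, values kept serially so that a
    whole source can be read in one operation."""
    return {source: [value for _, value in rows]
            for source, rows in tables.items()}

# Example values mirroring the tables 410 of FIG. 4.
tables = {
    "Bill Amount":    [("1/1/14", 800), ("2/1/14", 600), ("3/1/14", 600)],
    "Income":         [("1/1/14", 80000), ("2/1/14", 70000), ("3/1/14", 70000)],
    "Number of Cars": [("1/1/14", 3), ("2/1/14", 2), ("3/1/14", 2)],
}
record = form_record(tables)   # record 430
```

Because each row is a contiguous list, all bill amounts (or incomes, or car counts) are retrieved in a single lookup, which is the row-oriented access pattern described above.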
In some embodiments, the computer system can provide context for the specified action. In some embodiments, the computer system can determine whether the specified action will occur within a specified temporal period (e.g., one month). The computer system can filter out all information associated with a time that is outside (e.g., before or after) the specified temporal period. In some embodiments, the computer system can determine the propensity for the specified action based on only recent events. For example, the computer system can filter out information associated with a time before the specified time period (e.g., stale or less relevant information). In some embodiments, each record can be filtered in a slightly different way. Record 430 can be filtered according to a user input specifying an activity or temporal period. In some embodiments, record 430 can be filtered automatically based on a presetting (e.g., the computer can be configured to filter out all information that is more than one year old). For example, the computer system can determine the propensity of the entity to take the specified action based on only data from the previous month. In the example shown in FIG. 4, the computer system can filter out the older entries of Bill Amount table 410 (e.g., Bill Amounts of $800 and $600 corresponding to bill dates in January and February). The computer system can also filter out similar entries in Income and Number of Cars tables 410 (e.g., incomes of $80K and $70K and numbers of cars of 3 and 2). Thus, the computer system can use only the most recent entries to determine the propensity of the household to take the specified action (e.g., $600 in Bill Amount table 410, $70K in Income table 410, and 2 in Number of Cars table 410). The computer system can generate (440), based on record 430, one or more features 450 associated with the entity.
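The temporal filtering described above can be sketched as follows, using the FIG. 4 values; representing dates as comparable (year, month, day) tuples is an assumption made for illustration.

```python
def filter_recent(tables, cutoff):
    """Keep only entries dated on or after the cutoff, so the propensity
    is determined from recent events only (stale entries are dropped)."""
    return {source: [(date, value) for date, value in rows if date >= cutoff]
            for source, rows in tables.items()}

# Dates as comparable (year, month, day) tuples, mirroring FIG. 4.
tables = {
    "Bill Amount":    [((2014, 1, 1), 800), ((2014, 2, 1), 600), ((2014, 3, 1), 600)],
    "Number of Cars": [((2014, 1, 1), 3), ((2014, 2, 1), 2), ((2014, 3, 1), 2)],
}
recent = filter_recent(tables, (2014, 3, 1))
```

A presetting (e.g., "no older than one year") would simply compute the cutoff from the current date before calling the filter.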
A feature can be any discernable way of sorting or classifying the record (e.g., average value, most recent value, most common value, etc.). In some embodiments, the computer system can generate key value pairs, wherein each key value pair contains a feature and a value. For example, the computer system can generate one or more features 450 such as "average bill payment amount", "average income", "average number of automobiles", etc. and corresponding values such as "$670", "$73K", "2.3 cars", etc. In some embodiments, the one or more features 450 can be associated with a time value. For example, the computer system can generate features for a specified temporal period (e.g., features can be based only on the most recent values). Feature values can be represented as a continuous value (e.g., $670), as a categorical value (e.g., "Sedan" or "Coupe"), as a textual value, or as any other type of value. In some embodiments, the one or more features 450 can be stored as weighted values. For example, a household income of $73,000 can be represented as the weighted values {0.27 0}, {0.73 100000}. In some embodiments, the one or more features can be extrapolated from the information contained in the record. For example, a feature can be that the entity deactivated online payments (e.g., customer deactivated EFT payment on 2/20). In some embodiments, the one or more features can be related to communications between the providing entity (e.g., insurance provider) and the consuming entity (e.g., household). For example, computer system 100 can analyze (e.g., tokenize) the transcript of a call between an agent and a household and assign a topical value to that call (e.g., "topic 5" corresponding to anger). Computer system 100 can store this information as a feature pair (not shown), such as the pair {"Service Call Topic" "5"}. In some embodiments, the one or more features can be related to whether the household took a specified action (e.g., filed a claim or called to change policy).
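Generating (440) key value pairs of features 450 from a record of serially stored values can be sketched as follows; the specific feature names and the choice of average/most-recent aggregations are assumptions for illustration.

```python
def generate_features(record):
    """Generate (440) key value pairs, each pairing a feature name with a
    value derived from one row of the record (here: average and most
    recent value per data source)."""
    features = {}
    for source, values in record.items():
        features["average " + source] = sum(values) / len(values)
        features["most recent " + source] = values[-1]
    return features

record = {"bill amount": [800, 600, 600], "number of cars": [3, 2, 2]}
features = generate_features(record)   # features 450
```

Other sortings of the record (most common value, change over time, etc.) would be added as further key value pairs in the same dictionary.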
In some embodiments, the computer system can process (460) the one or more features 450 to determine the propensity 470 of the entity to take the specified action. In some embodiments, the propensity 470 can be determined by applying a trained model, such as the model described in greater detail in FIG. 3. The input to the model can be key value pairs of the one or more features 450 associated with the entity and the specified actions, and the output of the model can be the propensity 470 of the entity to take the specified action. In some embodiments, processing the one or more features associated with the entity can result in a multitude of useful insights regarding the features that influence the propensity of the entity to take the specified action. Such insights can include, for example, the features that are most influential on the propensity of the entity to take the specified action (e.g., change in income, etc.). In some embodiments, the computer system can output the propensity 470. In some embodiments, the computer system can output the propensity 470 as a continuous value, such as a number or percentage (e.g., 80 or 80%), or as a categorical value (e.g., "low", "medium", or "high"). In some embodiments, the computer system can generate a user interface, such as the user interfaces described in greater detail in FIGS. 5 and 6, for displaying the propensity 470. FIG. 5 illustrates an exemplary user interface 500 provided by a computer system (e.g., computer system 100) for display (e.g., display 122), in accordance with some embodiments. User interface 500 can include a plurality of tiles (e.g., tile 510), each tile representing an entity (e.g., a household). In some embodiments, tiles can be arranged according to the propensity of the entity to take the specified action. For example, entities that are more likely to take the specified action can be located near the top of the display, whereas entities that are less likely to take the specified action can be lower on the display.
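Processing (460) the features into a propensity 470 and mapping it to a categorical value can be sketched as follows; the stand-in logistic scorer and the category cut-offs are assumptions made for illustration — the disclosure itself applies the trained model of FIG. 3 at this point.

```python
import math

def score(features, weights, bias=0.0):
    """Hypothetical stand-in linear scorer over feature key value pairs;
    the disclosed system would apply the trained model of FIG. 3 here."""
    z = bias + sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))   # squash to a [0, 1] propensity

def propensity_category(propensity):
    """Map the continuous propensity 470 to a categorical value."""
    if propensity < 0.33:
        return "low"
    if propensity < 0.66:
        return "medium"
    return "high"
```

The continuous value can then be shown as a percentage (e.g., 80%) or the category ("low"/"medium"/"high") used to color the tiles of FIG. 5.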
As shown in FIG. 5, in some embodiments, the tiles can be arranged by date (e.g., date 520). For example, entities with the most recent activities can be located near the top of the display. By way of example, tile 510 with the most recent date 520 of Feb. 21, 2014 is located in the top left corner of the display. The tile to the right of tile 510 has the next most recent date (e.g., Feb. 20, 2014). Subsequent tiles have dates that are less recent. In other embodiments, entities with the longest pending outstanding action can be located near the top of the screen. In some embodiments, user interface 500 can be updated periodically (e.g., once a day, once a week, once a month, etc.). In other embodiments, user interface 500 can be updated when information associated with any of the entities stored in the one or more data sources is updated (e.g., a new entry, such as a bill payment amount, is added to a table). In some embodiments, user interface 500 can update in response to an input received from the user. User interface 500 can automatically determine the entities for which to generate the display. In some embodiments, user interface 500 can display entities associated with a particular user (e.g., John Smith, Triage Agent) once the user accesses user interface 500. In some embodiments, the user can specifically identify the entities for which to generate the display. In some embodiments, the user can identify a category or class of entities for which to generate the display. For example, the user can identify a class of entities that are all consumers of a specified provisioning entity (e.g., insurance company), the user input can identify entities that are located within a specified geographic region (e.g., all households within the state of Illinois), or the user input can identify any other category of entities (e.g., all households with an income over $100,000). In some embodiments, user interface 500 can portray a date 520 (e.g., Feb. 21, 2014) associated with the entity in tile 510.
Date 520 can correspond to the current date, the date that method 200 was last performed for that entity, the date that information in the one or more data sources associated with that entity was last updated, or the date that the user last viewed the tile associated with the entity. In some embodiments, user interface 500 can portray a propensity 540 of the entity to take the specified action (e.g., "Med") in tile 510. For example, as shown in FIG. 5, user interface 500 can portray the propensity as a categorical value, such as "Med" in tile 510. In some embodiments, user interface 500 can portray tile 510 in a color (e.g., green for "low", red for "high", etc.) representing the propensity. In some embodiments, user interface 500 can portray the propensity in tile 510 as a numerical value or as a percentage. User interface 500 can portray recent activity 530 in tile 510. In some embodiments, the recent activity 530 can be entered by a user. By way of example, a recent activity could be that an "Agent called customer on 2/21 regarding discounts" as shown in tile 510. In some embodiments, user interface 500 can generate the recent activity based on the one or more features associated with the entity. For example, user interface 500 can display "Customer registered an additional luxury vehicle on 2/18" in tile 510 responsive to this information being updated in the record associated with the entity. In some embodiments, tile 510 can portray important features 540 associated with the entity. For example, as shown in tile 510 of FIG. 5, these features can be "vehicle", "discounts", etc. In some embodiments, user interface 500 can recommend an action for the user to take (e.g., service call). In some embodiments, this recommendation can relate to the recent activity 530. A user can use this information to take preemptive action to prevent the entity from taking the specified action.
By way of example, if the propensity to churn of a household subscribing to an automobile insurance policy was high, the user could take remedial action (e.g., lower the rate, contact the customer to address customer concerns, etc.). In some embodiments, user interface 500 can display a number uniquely identifying the entity (e.g., a policy number). In some embodiments, user interface 500 can allow a user to click on tile 510 to access additional information associated with the entity. For example, a user can access user interface 600 shown in FIG. 6 below by clicking on one of the tiles shown in user interface 500 of FIG. 5. In some embodiments, user interface 600 can be inlaid over user interface 500. In some embodiments, user interface 600 can be a distinct user interface. User interface 500 can also allow access to additional user interfaces (not shown) through the "INBOX," "FLAGGED," and "STATS" links shown at the top of user interface 500. The "INBOX" user interface can display messages between the user and other agents to track the remedial actions that were taken. The INBOX user interface can also be used to notify users of households with a higher likelihood of cancelling the subscription. The "FLAGGED" user interface can show customers (e.g., households) that the user believed were at risk of taking the specified action. For example, the FLAGGED user interface can contain a list of the households most likely to cancel their insurance policy. In some embodiments, these households can be selected manually by the user. In some embodiments, these households can be automatically populated if the propensity exceeds a predetermined threshold (e.g., the FLAGGED interface can be populated with all households with a "High" propensity). The FLAGGED user interface can allow the user to track remediation steps (e.g., contacting the household, changing the policy, etc.).
Households can remain in the FLAGGED user interface until their risk of taking the specified action has declined, the user has decided that the household is no longer at risk, or the specified action occurred (e.g., the household cancelled its subscription). The "STATS" interface can display metrics such as, for example, the rate at which the user was able to prevent the specified action from occurring, categorized by action taken, and the most common and/or trending issues. FIG. 6 illustrates another exemplary user interface 600 provided by the computer system (e.g., computer system 100) for display (e.g., display 112) in accordance with some embodiments. In some embodiments, user interface 600 can be accessed by clicking on a tile (e.g., entity) in user interface 500. User interface 600 can portray a date 610 (e.g., Feb. 18, 2014) associated with the entity. Date 610 can correspond to the current date, the date that method 200 was last performed for that entity, the date that information in the one or more data sources associated with that entity was last updated, or the date that the user last viewed the tile associated with the entity. In some embodiments, user interface 600 can portray a propensity 620 of the entity to take the specified action. For example, as shown in FIG. 6, user interface 600 can portray propensity 620 as a categorical value, such as "Med.". In some embodiments, user interface 600 can convey propensity 620 by shading the top bar in a different color (e.g., green for "low", red for "high", etc.) representing propensity 620. In some embodiments, user interface 600 can portray propensity 620 as a numerical value or as a percentage. In some embodiments, user interface 600 can display the entity status 630 (e.g., "Active" if the household is currently subscribing to a policy). In some embodiments, user interface 600 can display recent activities 640 associated with the entity.
For example, as shown in FIG. 6, user interface 600 can display that the "customer registered an additional luxury vehicle on 2/18". User interface 600 can recommend an action 650 for the user to take (e.g., service call). In some embodiments, this recommendation 650 can relate to the recent activity. User interface 600 can provide the user with additional information associated with the entity. As shown in the bottom left panel of FIG. 6, user interface 600 can display basic biographic information 660 for the entity. In the automobile insurance context, for example, user interface 600 can display the policy number (e.g., 34726182), the entity name (e.g., household/owner of the policy, David Stark), the policy coverage start date (e.g., Dec. 12, 2004), any secondary owners associated with the policy (e.g., James Watson), information associated with the insured automobile (e.g., 2013 Cadillac Escalade), and the type of insurance policy (e.g., Standard). In some embodiments, user interface 600 can also display information for an agent 670 associated with the entity. For example, user interface 600 can display the name (e.g., Bruce Atherton) and contact information (e.g., 583 234-9172) of the agent. A user can use this information to take preemptive action to prevent the entity from taking the specified action. By way of example, if the propensity of churning for a household subscribing to an automobile insurance policy was high, the user could contact the agent to take remedial action (e.g., lower the rate, address customer concerns, etc.). In some embodiments, the right panel of FIG. 6 can display recent events 680 associated with the entity. For example, user interface 600 can display whether the entity status is active (e.g., whether the entity is currently subscribing to a policy) or whether the agent has taken any actions (e.g., called the household or subscriber). In some embodiments, user interface 600 can also allow the user and agent to converse in the right panel.
For example, the user can click on the "ADD AN UPDATE" button 690 to remind the agent to contact the entity. The user interface can display responsive comments 680 from the agent, and the agent can add any actions taken 680 (e.g., calling the household). Embodiments of the present disclosure have been described herein with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the present disclosure being indicated by the following claims. It is also intended that the sequence of steps shown in the figures is only for illustrative purposes and is not intended to be limited to any particular sequence of steps. As such, it is appreciated that these steps can be performed in a different order while implementing the exemplary methods or processes disclosed herein.
11861516

DETAILED DESCRIPTION Various embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It will be apparent, however, that these embodiments may be practiced without some or all of these specific details. In other instances, well known process steps or elements have not been described in detail in order not to unnecessarily obscure the description of the invention. The following example embodiments and their aspects are described and illustrated in conjunction with apparatuses, methods, and systems which are meant to be illustrative examples, not limiting in scope. Example Network Environment FIG. 1 illustrates an implementation of a network environment 100 in which various implementations of the invention may be deployed, according to one embodiment. Network environment 100 includes one or more client nodes 102, a network 104, and a network-based service provider 106. Network 104 generally represents one or more interconnected networks over which network-based service provider 106 and client nodes 102 can communicate with each other. Network 104 may include packet-based wide area networks (such as the Internet), local area networks (LAN), private networks, wireless networks, satellite networks, cellular networks, paging networks, and the like. A person skilled in the art will recognize that network 104 may also be a combination of more than one type of network. For example, network 104 may be a combination of a LAN and the Internet. In addition, network 104 may be implemented as a wired network, or a wireless network, or a combination thereof. Client nodes 102 are communicatively coupled to network 104 via a network service provider or any other suitable methods known in the art. Client Nodes Client nodes 102 are computing devices from which a user accesses the services provided by network-based service provider 106. Client node 102 has the capability to communicate over network 104.
Client node 102 further has the capability to provide the user an interface to interact with the service provided by network-based service provider 106. Client node 102 may be, for example, a desktop computer, a laptop computer, a mobile phone, a personal digital assistant, and the like. A client node may execute one or more client applications such as, without limitation, a web browser to access and view content over a computer network, an email client to send and receive emails, and an instant messaging client for communicating with other users. Client nodes 102, in various embodiments, may include a Wireless Application Protocol (WAP) browser or other wireless or mobile device protocol suites such as, without limitation, NTT DoCoMo's i-mode wireless network service protocol suites, EDGE, and the like. Network-Based Service Provider Network-based service provider 106 is a network addressable system that hosts a network application accessible to one or more users over network 104. The network application may provide means for users to connect with each other through social ties or any other common attributes. The network application may be an email service, a social or a business network, a blog service, an online forum, a wiki service, a content aggregation and/or distribution service, or any other network application where at least part of the content is annotated and/or generated by users. The user generated content includes, without limitation, multimedia content, audio content, visual content, text content and the like. Some examples of such services include, without limitation, Panoramio™, Flickr™, Answers™, Blogger™, Orkut™ and Twitter™. In various embodiments of the present invention, network-based service provider 106 includes an application server 108, a database 110 and a geo tag engine 112.
Further, it should be understood that database 110 may be any source of information, such as a hosting service or remote database, and network-based service provider 106 can either be connected directly thereto or through network 104. In an embodiment of the invention, database 110 may be associated with a content storage server. Application server 108 provides a user access to database 110 and network applications over network 104. More specifically, a user executes a client application and accesses one or more network applications provided by network-based service provider 106. Further, the user may access the content stored in database 110 via the network applications. Additionally, the application server may employ a user authentication system, such as a login and a password, to provide access to the content stored in database 110. Database 110 stores the content generated by users of a network application hosted by network-based service provider 106. Database 110, without limitation, may include information which is accessible or viewable by the users of the network application or service. Content stored in database 110 includes content such as, but not limited to, multimedia content, audio content, text content, visual content and the like. Further, the user may append one or more annotations and/or location metadata associated with the content in database 110. Annotations may include, but are not limited to, one or more tags, a title, a description, a creator, comments and the like. Additionally, the content may also include annotations embedded by various digital devices. For example, a digital camera may embed various annotations in the picture. The various annotations may represent time of capture, aperture, shutter speed, location metadata and the like.
Such annotations may be extracted, for example, from the EXIF (Exchangeable Image File) header, or from the IPTC (International Press Telecommunications Council) header stored inside the digital file of the picture by various photograph management and organization software packages. In various embodiments of the present invention, the user may also edit the annotations associated with a particular content item at any given time. Geo tag engine 112 utilizes the information stored in database 110 to develop a language model. Geo tag engine 112 utilizes the various annotations and location metadata associated with one or more content items to develop the language model. The language model can further predict the association of one or more locations with a given set of annotations, based on a probabilistic distribution of locations over the annotations. FIG. 2 illustrates geo tag engine 112, according to one embodiment of the invention. Geo tag engine 112 includes a language model 202, an annotation receiver 204 and a computing unit 206. In an embodiment of the invention, users upload one or more items of user-generated content to the network application provided by network-based service provider 106. Users may associate one or more annotations with the content. The annotations may include, without limitation, one or more tags, a title, a description, a creator, comments and the like. Additionally, the content may also include annotations embedded by various digital devices. In an embodiment of the invention, the user generated content and the associated annotations are stored in database 110. Moreover, in some cases, the content also includes associated location metadata, such as latitude and longitude information. In an embodiment of the invention, geo tag engine 112 utilizes the information stored in database 110 to develop language model 202.
For this purpose, content stored in database 110 may be divided into two categories: a) content with one or more annotations and associated location metadata, and b) content with one or more annotations but no location metadata. The content with one or more annotations and associated location metadata is interchangeably referred to as a training data set. In an embodiment of the present invention, each content item present in the training data set is associated with the following attributes: a content identification, location metadata, and one or more annotations. The content identification is a unique identifier associated with the content. In an embodiment of the invention, the location metadata associated with the content is the latitude and longitude coordinates of a location, or other geo-location information formatted according to an alternative geographic coordinate scheme. The training data set is utilized by geo tag engine 112 to develop language model 202. Referring now to FIG. 2, language model 202 may include a location association apparatus 208, a probability determination module 210, a smoothing module 212 and a data store 214. In an embodiment of the present invention, location association apparatus 208 accesses the location metadata of the training data set from database 110. Since the training data set has content associated with one or more annotations and specific location metadata, location association apparatus 208 uses the latitude and longitude coordinates (location metadata) to represent a location on the world map. Subsequently, location association apparatus 208 associates the represented location with the corresponding annotations to generate a language representation of each location. Representation of Locations on the Map In an embodiment of the present invention, for representing locations on a map, an m×n grid is constructed on the world map.
The grid construction is based on the latitude and longitude coordinates of the globe, where each cell within the grid represents a location on the world map. In an exemplary embodiment, a location on the world map is described by a pair of universal geographical coordinates (UGC). UGC coordinates represent the latitude and longitude coordinates in the form of decimal numbers. Each pair of coordinates, ignoring the decimal part and considering only degree units, defines a unique location such as an approximate rectangle, or a cell of the grid, with a latitude (north-south) size of about 111 kilometers and a variable longitude (east-west) size. In an embodiment of the present invention, since the length of a degree of longitude varies from 0 kilometers at the poles to about 111 kilometers at the equator, various divisions of grid cells, such as grid cells of size 1, 5, 10, 50 and 100 kilometers, are considered for representing locations. Thus, for a particular grid cell division, a location can be mapped to its corresponding grid cell on the world map by using the latitude and longitude coordinates of the location. In an embodiment of the present invention, location association apparatus 208 maps the location metadata of each content item present in the training data set to the corresponding grid cell, to associate a location with the content. In one embodiment of the present invention, annotations associated with the geographical locations on the world map are enriched by using external sources of information such as gazetteers and Geonames. Information obtained from external sources may be associated, similar to one or more annotations, with the locations, thereby enriching the annotations associated with the locations. Furthermore, external sources such as gazetteers and Geonames act as an authentic and reliable source of information about the geographical locations. Gazetteers and Geonames are some examples of external databases which contain descriptions of various places and locations in the world.
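Mapping a pair of UGC coordinates to its grid cell by ignoring the decimal part can be sketched as follows; the offset to non-negative row/column indices and the cells_per_degree parameter standing in for finer grid divisions are assumptions made for illustration.

```python
import math

def to_cell(lat, lon, cells_per_degree=1):
    """Map UGC latitude/longitude to a grid cell index by discarding the
    fractional part at the chosen resolution (1 gives one-degree cells);
    the +90/+180 offsets shift coordinates to non-negative indices."""
    row = math.floor((lat + 90.0) * cells_per_degree)
    col = math.floor((lon + 180.0) * cells_per_degree)
    return row, col
```

Every content item in the training data set can then be bucketed by its cell index, so that all annotations falling in the same cell are grouped into that location's annotation set.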
A person who is ordinarily skilled in the art can understand that the above examples are for exemplary purposes only and do not limit the scope of the present invention. In an embodiment of the present invention, location association apparatus 208 may associate one or more annotations of various content items in the training data set with the corresponding location on the grid cell. Further, the sources of annotations may be augmented by the text, descriptions or various other annotations received from the external sources of information. The annotations associated with the locations are utilized to derive a language representation of the location. In an embodiment of the invention, if more than one content item is mapped to the same cell of the grid structure, then the annotations associated with all of these content items are grouped together to form a set of annotations for that particular location, represented by that cell. Thus, a set of annotations is generated for each location represented by a grid cell on the world map. In an embodiment of the present invention, content in the training data set may be associated with location agnostic annotations (such as "garden"). The location agnostic annotations are not used for deriving the language representation of locations. Location association apparatus 208 samples the annotations associated with one or more content items to identify location agnostic annotations. Further, location association apparatus 208 may also perform standard annotation normalization in which all terms in compound annotations are concatenated and all special characters are removed. Locations as a Graph In an embodiment of the present invention, for representing locations in a graph, the grid cell structure underlying the collection of locations implies a spatial relationship. For example, the links between a pair of locations, represented by grid cells, exist only if they are situated close enough on the grid.
In an example embodiment of the invention, cell-based distance may be used to determine the closeness of the cells. For example, the grid structure may have 8 cells situated within 1-cell distance or 24 cells situated within 2-cell distance, etc. Thus, locations which are found within a predefined distance may be linked and considered as neighbors. Further, linked locations may have a high probability of being represented by similar annotations.
Probabilistic Distribution of Locations Over Annotations
The representation of locations developed by location association apparatus 208 is utilized by probability determination module 210 to determine the probability distribution of locations over the annotations. The probability determination module 210 determines, for each location, the probability of an annotation being present in the set of annotations associated with the location. The set of probability values thus generated represents the probability distribution of locations over the annotation. This process is repeated for each annotation to determine the probability distribution of locations over the annotations. The probability distribution of locations over the annotations is represented by P(T|L), where T is a set of annotations T = {t_1, t_2, . . . , t_i} and L is the set of locations containing all the locations on the world map. In one embodiment, each annotation t_i present in the set T is generated independently, thus

P(T|L) = Π_{i=1}^{|T|} P(t_i|L)   (1)

Further, the probability distribution of locations for each annotation (i.e., P(t_i|L)) computed by probability determination module 210 is stored in data store 214. In one embodiment, data store 214 may be dynamically updated when a user uploads content with associated location metadata and annotations. In some cases, the probability distribution of locations computed by probability determination module 210 may suffer from the problem of data sparseness, or the annotations may indicate an area that exceeds the bounds of a location.
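Before turning to those limitations, the independence assumption of equation (1) can be sketched directly; the function name and the dictionary representation of a cell's annotation probabilities are assumptions for illustration.

```python
def annotation_set_likelihood(p_t_given_L, annotations):
    """Equation (1): P(T|L) as a product over independently
    generated annotations. p_t_given_L maps annotation -> P(t|L);
    an annotation unseen at the location contributes probability 0
    here (smoothing addresses this separately)."""
    prob = 1.0
    for t in annotations:
        prob *= p_t_given_L.get(t, 0.0)
    return prob
```

The zero result for any unseen annotation is exactly the data-sparseness problem that the smoothing methods below are meant to address.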
For example, some annotations specify a large area, such as a country or a continent, which may be larger than the largest grid cell used in the representation of locations. Moreover, some content and related annotations may be associated with several locations on the grid, thereby exceeding the bounds of a specific location. In some embodiments, the probability distribution of locations over the annotations may be smoothed using smoothing module 212 prior to storage in data store 214. Various methods have been described in the following embodiments of the present invention for smoothing the obtained probability distribution.
Annotation Based Smoothing with Neighbors
Due to the problem of data sparseness, the probability distribution of locations over the annotations may not contain probability values for each location. Thus, based on the annotations associated with the locations in the spatial neighborhood, the probability distribution of locations over the annotations is smoothed. In one embodiment, each annotation found within a specific location is generated either by the language model of the location or by the language models of neighboring locations; thus, the annotation likelihood (the probability of an annotation given a location, expressed as P(t|L)) can be calculated by the equations:

P(t|L) = μ · (|L| · P(t|L)_ML)/(|L| + λ) + (1 − μ) · P(t|NB(L)) + (λ · P(t|G)_ML)/(|L| + λ)   (2)

P(t|NB(L)) = [ Σ_{L′ ∈ NB(L)} (|L′|/(|L′| + λ)) · P(t|L′)_ML ] / ((2d + 1)² − 1)   (3)

where P(t|G)_ML is the maximum likelihood probability of an annotation being generated by a general model of locations, and λ and μ are parameters that control the smoothing to prevent zero probabilities for annotations not present in the data store 214. The neighborhood of locations, NB(L), consists of all locations L′ within distance d (in grid cells) of location L to be connected on the grid.
Smoothing Cell Relevance Probabilities
In another embodiment, cell-based smoothing is used to smooth a grid cell from its neighborhood locations.
In this method, cell relevance propagates through the links between locations which are close on the grid structure. Using this, a weighted in-degree approach is used to calculate the probability of generating the annotation set T of a certain location by adding the probabilities of neighborhood locations:

P(T|L) = μ · P(T|L) + (1 − μ) · [ Σ_{L′ ∈ NB(L)} P(T|L′) ] / ((2d + 1)² − 1)   (4)

Further, some neighbors are selected to propagate cell relevance based on one or more predefined criteria. In one embodiment, only those neighborhood locations which have a lower probability than the location to be smoothed are selected for propagating cell relevance. Thus, the best location within a certain neighborhood is not selected, but locally relevant locations from different parts of the globe are considered for selecting the locations for propagating cell relevance. Accordingly, locations are represented in a directed grid graph, and only some selected neighbors satisfying a predefined criterion are used for calculating the weighted in-degree. In graph-related terms, the grid graph is dependent on the annotations entered by users, and edges between cells are directed from lower-probability-scored cells to higher-probability-scored cells.
Boosting Geo-Related Annotations
In yet another embodiment, an external database of locations is used to incorporate geographical information about geo-related annotations. In some cases, users annotate content with annotations that can be easily recognized as location specific, such as names of places (e.g. cities or countries), points-of-interest (e.g. monuments, stadiums, hotels, or bars), or events specific to certain locations (e.g. festivals, sport competitions). A boosting approach defined by the following equation is used to introduce preliminary knowledge about annotations into the developed model.
P_new(t|L)_ML = P(t|L)_ML · (1 + β · P(Loc|t))/Z   (5)

where P(Loc|t) is the probability of the annotation t being location specific, β is a boosting coefficient, Z is a normalization constant, and P(t|L)_ML is defined as in equation 2. For example, an external database of locations such as GeoNames may be used to identify location-specific annotations. GeoNames is a geographical database integrating geographical data such as names of places in various languages, elevation, population, and other data from various sources. The list of toponyms, limited to English names of populated locations, is used for identifying location-specific annotations. For all annotations that are present in the list, P(Loc|t) equals 1.0, and otherwise it equals 0.
Spatial Ambiguity-Aware Smoothing
In yet another embodiment, the spatial ambiguity of an annotation is incorporated in language model 202 for cases where annotations are specific to more than one location. Annotations can be specific to more than one location because their scope exceeds the bounds of a single cell, because of their ambiguity (for example, bath and Bath, UK), or because they are instances that are typically spotted at a few specific locations, such as elephants. For smoothing purposes, annotations that are highly spatially ambiguous are often better than those having a single geographical focus. As the latitude and longitude coordinates of all annotations are known in the training data set, the spatial ambiguity of an annotation is characterized by the standard deviations of its latitudes and longitudes, σ_lat and σ_lon. This can be incorporated in the developed language model 202 by using the smoothing coefficient λ discussed earlier in equation 3. In this case, the smoothing coefficient λ is annotation specific and proportional to the ambiguity of the annotation:

λ(t) = λ + γ · (σ_lat(t) + σ_lon(t))   (6)

Thus, the individually generated probabilities of ambiguous annotations are used for finding the most probable location for a set of annotations.
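The smoothing steps of equations (2) through (6) can be sketched together in Python; the function names, the dictionary-based stores, and the parameter values for λ, μ, β, γ and Z are illustrative assumptions, not values disclosed here.

```python
import statistics

def neighborhood(cell, d=1):
    """All cells within d grid cells of `cell`, excluding itself:
    (2d + 1)**2 - 1 neighbors on a full grid."""
    r, c = cell
    return [(r + dr, c + dc)
            for dr in range(-d, d + 1)
            for dc in range(-d, d + 1)
            if (dr, dc) != (0, 0)]

def smoothed_p(t, L, ml, size, p_general, d=1, lam=100.0, mu=0.7):
    """Equations (2)-(3): mix the cell's maximum-likelihood estimate,
    its neighbors' estimates, and the general model. `ml[L]` maps
    annotations to P(t|L)_ML; `size[L]` is |L| in annotations."""
    nb = neighborhood(L, d)
    p_nb = sum(size.get(Lp, 0) / (size.get(Lp, 0) + lam)
               * ml.get(Lp, {}).get(t, 0.0)
               for Lp in nb) / ((2 * d + 1) ** 2 - 1)
    sL = size.get(L, 0)
    return (mu * sL * ml.get(L, {}).get(t, 0.0) / (sL + lam)
            + (1 - mu) * p_nb
            + lam * p_general / (sL + lam))

def boost_geo(p_ml, is_toponym, beta=2.0, Z=1.0):
    """Equation (5): boost the ML estimate for annotations found in
    the toponym list, where P(Loc|t) = 1.0 and otherwise 0."""
    p_loc = 1.0 if is_toponym else 0.0
    return p_ml * (1 + beta * p_loc) / Z

def ambiguity_lambda(lats, lons, lam=100.0, gamma=10.0):
    """Equation (6): make the smoothing coefficient proportional to
    the spatial spread of an annotation's training coordinates."""
    return lam + gamma * (statistics.pstdev(lats) + statistics.pstdev(lons))
```

Note that when a cell has no training annotations, smoothed_p falls back toward the general model probability, which is exactly the zero-probability protection the text describes.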
Also, it helps in preventing over-boosting of ambiguous annotations. As described above, various methods of smoothing may be incorporated in smoothing module 212 to smooth the probability distribution of locations. Further, in various embodiments, the probability distribution obtained after the smoothing may be stored in data store 214. In one embodiment, language model 202 develops itself by using the content uploaded by the users. For example, language model 202 utilizes the various content and associated annotations and location metadata to identify a plurality of beach locations present in the world. When a user uploads content and provides "beach" as an annotation, language model 202 accordingly modifies the probability distribution of locations over the annotation "beach". Thus, language model 202 develops itself by learning through the various content and associated annotations provided by the user.
Utilization of Language Model to Identify Locations Corresponding to Annotations
In one embodiment, language model 202, developed by utilizing the content present in the training data set, is used by geo tag engine 112 to identify the geographical locations associated with the annotations received from a user. Annotation receiver 204 receives the annotations provided by the user. In one embodiment, annotations received from the user are associated with content. More specifically, when a user uploads content by accessing the network application hosted by network-based service provider 106, various annotations are attached by the user to provide more information about the content. In another embodiment, annotations received from the user are one or more keywords of a query entered by the user. Computing unit 206 obtains probability data corresponding to the annotations received by annotation receiver 204 by utilizing language model 202. More specifically, computing unit 206 obtains a probability distribution of locations corresponding to the received annotations from data store 214.
Computing unit 206 utilizes the probability data to generate a set of probable locations associated with the received annotations. Given a set of annotations associated with a content, a set of locations where the content might have been generated is predicted based on a probabilistic analysis. In other words, a rank list of locations L, ordered by descending probability for a given set of annotations T belonging to the content, taken within the bounds of L, is generated by the equation:

P(L|T) = P(T|L) · P(L)/P(T)   (7)

where P(L|T) represents the probability of the locations L for the set of annotations T. In one embodiment, a multinomial probability distribution over the annotations is used to represent the locations. In various embodiments, the probability of locations P(L) and the probability of annotations P(T) may not influence the ranking of locations. The rank list of locations for the given set of annotations (i.e., P(L|T)) is obtained by computing the probability of the given set of annotations at the locations (i.e., P(T|L)). Thus, locations are ranked by the probability of generating the set of annotations supplied by the user. If each annotation t_i in the set T is generated independently, the annotation set likelihood can be calculated by the equation:

P(t|L) = (|L|/(|L| + λ)) · P(t|L)_ML + (λ/(|L| + λ)) · P(t|G)_ML   (8)

where P(t|L)_ML and P(t|G)_ML are maximum likelihood estimates of annotation generation probabilities for the language model of the location and the general language model, respectively, |L| is the size of the location L in annotations, and λ is the smoothing parameter. In an example embodiment of the present invention, Dirichlet smoothing may be used for better estimation of probabilities. FIG. 3 illustrates an example embodiment of the present invention in which a set of locations is identified based on the one or more annotations received from the user. Consider a user who has uploaded a photograph and has associated a set of annotations T = {T1, T2} with the photograph.
Annotation receiver 204 receives the annotations provided by the user. Now, computing unit 206 obtains the probability distribution of locations for the two annotations from language model 202. In one embodiment, language model 202 performs real-time calculation of the probability distribution of locations for the received annotations. In another embodiment, the probability distribution of locations over various annotations is already stored in data store 214, which is a part of language model 202. Referring to FIG. 3, tables 302 and 304 contain the probability distributions of locations corresponding to the annotations T1 and T2, respectively. In this example, P(T|L) represents the probability that the annotation T belongs to the set of annotations associated with the location L. Table 302 shows the probability scores for the top 5 locations {L1, L2, L3, L4, L5}, arranged in descending order of probability scores. These locations are identified based on their probabilities of being associated with the annotation T1. Similarly, table 304 shows the probability scores for the top 5 locations {L4, L1, L2, L6, L7}, arranged in descending order of probability scores. These locations are identified based on their probabilities of being associated with the annotation T2. In this example, only the top five locations are considered for further probability calculations. Computing unit 206 utilizes the probability distributions of locations obtained from language model 202 for further calculations. Table 306 lists the final probability scores of the locations for the two annotations calculated by computing unit 206, where P(T|L) represents the probability that both the annotations T1 and T2 correspond to the location L. In one embodiment, each annotation present in the set of annotations associated with a location is generated independently; thus, P(T|L) can be obtained by multiplication of P(T1|L) and P(T2|L).
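The multiplication-and-ranking step just described can be sketched as follows; the function name and the dictionary inputs (one per-annotation probability table, as in tables 302 and 304) are assumptions for illustration.

```python
def rank_locations(per_tag_probs):
    """Combine per-annotation distributions by multiplication
    (independence assumption) and rank locations by the joint
    probability, descending."""
    joint = {}
    for probs in per_tag_probs:
        for L, p in probs.items():
            joint[L] = joint.get(L, 1.0) * p
    # Keep only locations scored for every annotation, since a
    # missing entry would leave a partial product.
    common = set(per_tag_probs[0])
    for probs in per_tag_probs[1:]:
        common &= set(probs)
    return sorted(((L, joint[L]) for L in common),
                  key=lambda kv: kv[1], reverse=True)
```

With illustrative scores in the spirit of FIG. 3 (T1 favoring L1 and T2 favoring L4), the joint ranking can come out as {L1, L4, L2}.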
In one embodiment of the present invention, the computing unit 206 generates a rank list of locations, ordered by descending probability for a given set of annotations. The rank list of locations for the two given annotations T1 and T2 is represented by P(L|T). Referring to equation 7, as P(L) and P(T) do not affect the rank list of the locations, P(L|T) bears the same ranking as P(T|L). In this example, the rank list contains {L1, L4, L2}, with L1 having the highest probability of being associated with the annotations T1 and T2. FIG. 4 illustrates flowchart 400 summarizing various steps involved in an example embodiment of the invention. At step 402, location association apparatus 208 accesses content with associated location metadata and various annotations from a training data set, which may be available in a database 110. In one embodiment, database 110 provides the content with associated location metadata and annotations. In another embodiment, a user provides the content with associated location metadata and annotations. The content uploaded by the user includes, without limitation, multimedia content, visual content, audio content, and text content. In various embodiments, annotations may take the form of one or more tags associated with the content. Other types of annotations include, e.g., a title, a description, a creator, and comments. In one embodiment, the location metadata associated with the content is the latitude and longitude coordinates of the location. At step 404, the location metadata received in step 402 is utilized to identify the corresponding location on a world map. In one embodiment, an m×n grid is constructed on the world map. The grid construction is based on the latitude and longitude coordinates of the globe, where each cell within the grid represents a location on the world map. Further, the annotations received in step 402 are associated with the identified location on the world map.
If more than one content is associated with the same location on the map, then the annotations associated with all of the content are grouped together to form a set of annotations. Thus, locations on the grid are represented by sets of annotations. In one embodiment, external sources of information, such as gazetteers and GeoNames, are used to obtain more information about the locations. The information obtained from the external sources is used to enrich the annotations associated with the location. Thus, the language representation of each location is derived by using the set of annotations associated with the location. At step 406, probability determination module 210 computes the probability distribution of locations over the annotations. In one embodiment, the probability distribution is a multinomial probability distribution. Further, in various embodiments, the probability distribution may be further smoothed by using smoothing module 212. Smoothing module 212 incorporates various smoothing techniques such as, without limitation, annotation-based smoothing with neighbors, cell-based smoothing, and spatial ambiguity-aware smoothing. In one embodiment, the probability distributions of locations obtained after the smoothing are stored in data store 214. Thus, language models of locations are developed by geo tag engine 112 by utilizing the annotations and associated location metadata. At step 408, probability determination module 210 stores the determined probability in the data store 214 of language model 202. The geo tag engine 112 may further utilize the determined probabilities of the locations from the data store to associate one or more locations with a given set of annotations. FIG. 5 illustrates flowchart 500 summarizing various steps involved in an example embodiment of the invention. At step 502, annotation receiver 204 receives one or more annotations from a user. In one embodiment, the received annotations are associated with content uploaded by the user.
The content uploaded by the user includes, without limitation, multimedia content, visual content, audio content, and text content. Further, annotations may take the form of one or more tags for the content. Other types of annotations include, e.g., a title, a description, a creator, and comments. In another embodiment, annotations may be received as a part of the keywords included in a query. The query may be focused on searching for geographical locations associated with the annotations. At step 504, computing unit 206 identifies one or more geographical locations associated with the one or more annotations. In an embodiment, computing unit 206 communicates with the language model 202 to obtain a probability distribution of locations over the annotations. In an embodiment, the probability distribution of locations over the annotations is obtained from data store 214, which is a part of language model 202. In one embodiment, the probability distribution of locations is a multinomial probability distribution. In various embodiments of the present invention, smoothing of the probability distribution of locations may be incorporated in language model 202. In one embodiment, the one or more identified geographical locations are ranked based on the calculated probability distribution. FIG. 6 illustrates an example hardware system 600 to implement the location association system according to one embodiment. Hardware system 600 includes at least one processor 602, a system memory 604, and mass storage 606. The system memory 604 has stored therein one or more application software programs, programming instructions for implementing location association process 608, an operating system, and drivers directed to the functions described herein. Mass storage 606 provides permanent storage for the data and programming instructions for location association process 608, whereas system memory 604 (e.g., DRAM) provides temporary storage for the data and programming instructions when executed by processor 602.
The process flow of the programming instructions for location association process 608 is described in detail in conjunction with FIG. 5. In one embodiment, database 110 (shown in FIG. 1) may reside in mass storage 606. A network/communication interface 610 provides communication between hardware system 600 and any of a wide range of networks, such as an Ethernet (e.g., IEEE 802.3) network, etc. Additionally, hardware system 600 includes a high performance input/output (I/O) bus 612 and a standard I/O bus 614. System memory 604 and network/communication interface 610 couple to bus 612. Mass storage 606 couples to bus 614. I/O bus bridge 616 couples the two buses 612 and 614 to each other. In one embodiment, location association process 608 described herein is implemented as a series of software routines run by hardware system 600. These software routines comprise a plurality or series of instructions to be executed by a processor in a hardware system, such as processor 602. Initially, the series of instructions are stored on a storage device, such as mass storage 606. However, the series of instructions can be stored on any suitable storage medium, such as a diskette, CD-ROM, ROM, EEPROM, DVD, Blu-ray disc, etc. Furthermore, the series of instructions need not be stored locally, and could be received from a remote storage device, such as a server on a network, via network/communication interface 610. The instructions are copied from the storage device, such as mass storage 606, into system memory 604 and then accessed and executed by processor 602. In one embodiment, hardware system 600 may also include I/O ports 618, a keyboard and pointing device 620, and a display 622 coupled to bus 612. I/O ports 618 are one or more serial and/or parallel communication ports that provide communication between additional peripheral devices, which may be coupled to hardware system 600. A host bridge 624 couples processor 602 to high performance I/O bus 612.
Hardware system 600 may further include video memory (not shown) and a display device coupled to the video memory. Collectively, these elements are intended to represent a broad category of computer hardware systems, including without limitation general purpose computer systems based on the x86-compatible processors manufactured by Intel Corporation of Santa Clara, Calif., and the x86-compatible processors manufactured by Advanced Micro Devices (AMD), Inc., of Sunnyvale, Calif., as well as any other suitable processor. Hardware system 600 may include a variety of system architectures, and various components of the hardware system 600 may be rearranged. For example, cache 626 may be on-chip with processor 602. Alternatively, cache 626 and processor 602 may be packaged together as a "processor module," with processor 602 being referred to as the "processor core." Furthermore, certain embodiments of the present invention may not require or include all of the above components. For example, the peripheral devices shown coupled to standard I/O bus 614 may couple to high performance I/O bus 612. In addition, in some embodiments only a single bus may exist, with the components of hardware system 600 being coupled to the single bus. Furthermore, hardware system 600 may include additional components, such as additional processors, storage devices, or memories. An operating system manages and controls the operation of hardware system 600, including the input and output of data to and from software applications (not shown). The operating system provides an interface between the software applications being executed on the system and the hardware components of the system. According to one embodiment of the present invention, the operating system is the LINUX operating system.
However, the present invention may be used with other suitable operating systems, such as the Windows® 95/98/NT/XP/Server operating systems, available from Microsoft Corporation of Redmond, Wash., the Apple Macintosh Operating System, available from Apple Computer Inc. of Cupertino, Calif., UNIX operating systems, and the like. The present invention has been explained with reference to specific embodiments. For example, while embodiments of the present invention have been described with reference to specific hardware and software components, those skilled in the art will appreciate that different combinations of hardware and/or software components may also be used, and that particular operations described as being implemented in hardware might also be implemented in software or vice versa. Other embodiments will be evident to those of ordinary skill in the art. It is therefore not intended that the present invention be limited, except as indicated by the appended claims.
11861517 | DETAILED DESCRIPTION
I. Introduction
Various embodiments are directed to an activity monitoring system in a continuous glucose monitor. The activity monitoring system is configured to detect movement of a user, determine whether the movement of the user includes the user taking consecutive steps, and determine how many consecutive steps the user has taken over a period of time. A problem associated with conventional activity monitoring systems and methods is that they utilize complex algorithms that require extensive processing power, a large amount of memory or storage, and a rechargeable power or energy source for counting steps. Moreover, conventional activity monitoring systems typically demonstrate a high error rate (e.g., up to a 60% error rate) for determining steps taken during certain physical activities, such as walking on a treadmill. This error rate trickles down and undesirably impacts the overall step count of the conventional activity monitoring systems and methods. To address these problems, various embodiments described herein are directed to activity monitoring systems and methods capable of achieving a minimal error rate in an environment, such as a continuous glucose monitoring system (e.g., including one-time use or disposable continuous glucose monitors), with limited processing and power resources. In particular, processes were developed that include gating whether or not steps should be counted in an observation window based on whether a decision tree concludes there are consecutive step activities (versus no activity or other activities) in the observation window. For example, various embodiments of the present disclosure include a system including one or more processors and a memory coupled to the one or more processors.
The memory is encoded with a set of instructions configured to perform a process including obtaining acceleration data for an observation window of an accelerometer, inputting two or more characteristics of the acceleration data into a decision tree to determine activity occurring within the observation window, assigning a first class (e.g., a low activity class) to the observation window when the determined activity is associated with consecutive steps, assigning a second class (e.g., a no activity class) to the observation window when the determined activity is not associated with consecutive steps, and, when the first class is assigned to the observation window, determining a step count for the observation window using frequency analysis. Advantageously, these approaches provide activity monitoring systems and methods that are capable of achieving a minimal error rate in an environment such as a continuous glucose monitoring system with limited processing and power resources. For example, the decision tree can be implemented with a low power budget (e.g., a simple tree of conditionals), and provides powerful non-linear classification capabilities over a multi-dimensional search space. The non-linear classification may be used to gate whether or not steps should be counted in an observation window, and consequently saves on computation power and increases the robustness of the overall step counting process.
II. Activity Monitoring System
FIG. 1 is an illustrative architecture of a computing system 100 implemented in various embodiments. The computing system 100 is only one example of a suitable computing system and is not intended to suggest any limitation as to the scope of use or functionality of the various embodiments. Also, computing system 100 should not be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in computing system 100. As shown in FIG. 1, computing system 100 includes a computing device 105.
The computing device 105 can be resident on a network infrastructure, such as within a cloud environment, or may be a separate independent computing device (e.g., a computing device implemented within the environment of a medical device 110 such as a continuous glucose monitor). The computing device 105 may include a bus 115, a processor 120, a storage device 125, a system memory (hardware device) 130, and a communication interface 135. The bus 115 permits communication among the components of computing device 105. For example, bus 115 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures to provide one or more wired or wireless communication links or paths for transferring data and/or power to, from, or between various other components of computing device 105. The processor 120 may be one or more integrated circuits, printed circuits, controllers, microprocessors, or specialized dedicated processors that include processing circuitry operative to interpret and execute computer readable program instructions, such as program instructions for controlling the operation and performance of one or more of the various other components of computing device 105 for implementing the functionality, steps, and/or performance of the embodiments discussed herein. In certain embodiments, processor 120 interprets and executes the processes, steps, functions, and/or operations, which may be operatively implemented by the computer readable program instructions.
For example, processor 120 can retrieve, e.g., import and/or otherwise obtain, acceleration data from an accelerometer 140, input two or more characteristics of the acceleration data into a decision tree to determine activity occurring within an observation window, assign a first class to the observation window when the determined activity is associated with consecutive steps, assign a second class to the observation window when the determined activity is not associated with consecutive steps, and, when the first class is assigned to the observation window, determine a step count for the observation window using frequency analysis. In embodiments, the information obtained or generated by the processor 120, e.g., accelerometer data, timestamps, a tally for various classifications, step counts for observation windows, a total step count, error codes, etc., can be stored in the storage device 125. The storage device 125 may include removable/non-removable, volatile/non-volatile computer readable media, such as, but not limited to, non-transitory machine readable storage media such as magnetic and/or optical recording media and their corresponding drives. The drives and their associated computer readable media provide for storage of computer readable program instructions, data structures, program modules, and other data for operation of computing device 105. In various embodiments, storage device 125 may store an operating system 145, application programs 150, and/or program data 155.
In some embodiments, the application programs 150 and/or program data 155 may include a database, index, or table, and algorithms such as an activity classification algorithm that includes components for pre-processing acceleration data, components for calculating characteristics of the acceleration data, a decision tree component to classify an observation window, and a step counting algorithm to quantify the number of steps taken by a user during a predetermined period of time, which provide the instructions for execution by processor 120. The system memory 130 may include one or more storage mediums, including, for example, non-transitory machine readable storage media such as flash memory, permanent memory such as read-only memory ("ROM"), semi-permanent memory such as random access memory ("RAM"), any other suitable type of non-transitory storage component, or any combination thereof. In some embodiments, an input/output system 165 (BIOS), including the basic routines that help to transfer information between the various other components of computing device 105, such as during start-up, may be stored in the ROM. Additionally, data and/or program modules 170, such as at least a portion of operating system 145, application programs 150, and/or program data 155, that are accessible to and/or presently being operated on by processor 120, may be contained in the system memory 130. The data and/or program modules 170 may include a motion data collector class that is configured to log activity histograms, step counts, and error status to the database at intervals (e.g., timer driven), and an activity monitor class that contains a comprehensive high level feature manager, and initialization, start, stop, error recovery, and interrupt service routines. The activity monitor class is configured to drive the accelerometer and push data through the algorithms.
The data and/or program modules may further include an accelerometer class configured to expose all necessary accelerometer behavior required to implement the step count features, and an activity classifier class that contains activity classification and step counter algorithm implementations and configured to maintain activity histograms and step counts. The data and/or program modules170may further include a controller class that contains a multi-slave serial peripheral interface device driver, a decision tree class configured to complete decision tree implementation with evaluation, and a math operation class configured to calculate characteristics of acceleration data including (i) the L1 Difference Norm, (ii) the L2 Difference Norm, (iii) the FFT Peak Frequency, (iv) the FFT Spectral Entropy, and (v) the FFT Total Energy. The communication interface135may include any transceiver-like mechanism (e.g., a network interface, a network adapter, a modem, or combinations thereof) that enables computing device105to communicate with remote devices or systems, such as medical device110, accelerometer140, a mobile device or other computing devices180such as, for example, a server in a networked environment, e.g., cloud environment. For example, computing device105may be connected to remote devices or systems via one or more local area networks (LAN) and/or one or more wide area networks (WAN) using communication interface135. As discussed herein, computing system100may be configured to gate whether or not steps should be counted in an observation window based on whether a decision tree concludes there are consecutive step activities (versus no activity or other activities) in the observation window. In particular, computing device105may perform tasks (e.g., processes, steps, methods and/or functionality) in response to processor120executing program instructions contained in non-transitory machine readable storage medium, such as system memory130.
The program instructions may be read into system memory130from another computer readable medium (e.g., non-transitory machine readable storage medium), such as data storage device125, or from another device via the communication interface135or server within or outside of a cloud environment. In some embodiments, hardwired circuitry of computing system100may be used in place of or in combination with the program instructions to implement the tasks, e.g., steps, methods and/or functionality, consistent with the different aspects discussed herein. Thus, the steps, methods and/or functionality disclosed herein can be implemented in any combination of hardware circuitry and software.

III. Activity Monitoring Processes

FIGS.2,3, and5are simplified flowcharts depicting processing performed for preparing or pre-processing acceleration data obtained from an accelerometer, assigning an activity class to an observation window, and counting steps according to various embodiments. The steps ofFIGS.2,3, and5may be implemented in the system environment ofFIG.1, for example. As noted herein, the flowcharts ofFIGS.2,3, and5illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combination of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. FIG.2depicts a simplified flowchart200illustrating a process used to prepare or pre-process acceleration data obtained from the accelerometer (e.g., accelerometer140described with respect toFIG.1) for use in later processing that includes a classification process used to assign an activity class to an observation window of the acceleration data, and a step counting process used to count steps within the observation window. In order for the processes ofFIGS.3and5to be applied to accelerometer data obtained from an accelerometer, the accelerometer may be configured with settings to produce sample points with characteristics used by the processes. In some embodiments, a number of sample points to be employed should be specified (that is, the observation window for the spectrum) as a power of 2 (e.g., 4, 8, 16, . . . , 256, 1024, etc.) to better amortize computational costs during Fast Fourier Transform (FFT) operations. For example, a sensitivity of the accelerometer may be set to +/−2 G, a sampling rate of the accelerometer may be set at 12.5 Hz, and the first in, first out (FIFO) Buffer Module may be set to streaming mode with a watermark of 128 tri-axial samples. The accelerometer may be configured to measure proper acceleration, which is the acceleration that the accelerometer experiences relative to freefall, and this is most commonly called “G-Force” (G). By way of example, an accelerometer at rest on a table will measure 1 G (9.81 m/s²) straight downwards. By contrast, an accelerometer in free fall and accelerating due to the gravity of Earth will measure 0 G.
The FIFO Buffer Module may be a variable-length buffer with scalable register word-width and address space, or depth, and includes a process for organizing and manipulating the data buffer, where the oldest (first) entry, or head of the queue, is processed first. A watermark is a flag, e.g., a flag for “empty”, “full”, “almost full”, “almost empty”, and “error” conditions. The depth of the flags may be adjusted. Accordingly, sampling, e.g., at 12.5 Hz, means a 5.12 second observation window can provide 64 samples, so a 64 point FFT looks at an observation window with a width of 5.12 seconds. At step205, acceleration data in each observation window (e.g., a 64 sample observation window) is received by the activity monitor from the accelerometer (e.g., the FIFO Buffer Module). The FIFO Buffer Module of the accelerometer may be set with a watermark to send an interrupt signal to wake up the processor (e.g., processor120discussed with respect toFIG.1) at periodic intervals such that one or more observation windows are processed successively or simultaneously. In some embodiments, the FIFO Buffer Module is set to streaming mode with a 12.5 Hz sampling rate and a FIFO watermark is set to 128 tri-axial samples (128×3 axes (X, Y, and Z)=384 of the available FIFO entries) such that the interrupt signal is sent by the FIFO Buffer Module at intervals of 128 samples, at which point two 64 sample observation windows including acceleration data for each of the axes at each sample point are received and processed successively by the processor. At step210, once the acceleration data is received (e.g., raw values for X, Y, and Z for each sample point) in step205, the acceleration data is converted to Milli-G or “G-Force” (G). The accelerometer may be configured to output raw values for the acceleration data as Milli-G/least significant bit (LSB), which is the last bit on the right.
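The watermark-and-windowing arrangement described above can be sketched as follows. The FIFO contents below are synthetic placeholder tuples; the constant names are illustrative, not from the disclosure.

```python
# Illustrative windowing of one FIFO download: a 128-sample watermark at
# 12.5 Hz yields two successive 64-sample observation windows, each with
# a width of 64 / 12.5 = 5.12 seconds.

SAMPLING_RATE_HZ = 12.5
WINDOW_SAMPLES = 64          # a power of 2, to suit the FFT
WINDOW_WIDTH_S = WINDOW_SAMPLES / SAMPLING_RATE_HZ   # 5.12 s

fifo = [(i, i + 1, i + 2) for i in range(128)]       # 128 (x, y, z) samples
windows = [fifo[i:i + WINDOW_SAMPLES]
           for i in range(0, len(fifo), WINDOW_SAMPLES)]
```

Each element of `windows` would then be processed successively, matching the two-window-per-interrupt behavior described in the text.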
Accordingly, an accelerometer set with a +/− sensitivity level of, for example, 2 G, would provide a sensitivity total of 4 G or 4,000 Milli-Gs. In some embodiments, the raw values from the accelerometer are multiplied by the sensitivity level to convert the acceleration data to Milli-G or “G-Force” (G). For example, if the accelerometer is set to output at 16 bits, the 16 bits may be equivalent to 65,535 different readings for the range between a sensitivity of −2 G and +2 G (or −2,000 Milli-Gs and +2,000 Milli-Gs). Consequently, each time the raw value changes by one LSB, the measured acceleration changes by 0.061 Milli-Gs (4,000 Milli-Gs/65,535=0.061). By way of example, if the accelerometer is horizontal and at rest when using a sensitivity level of +/−2 G, the raw value for the Z axis should be around 16,500. The raw value of 16,500 may then be converted by the activity monitor in step210by multiplying 16,500 by the LSB value for +/−2 G (e.g., 0.061) to obtain a value of 1006 Milli-Gs, or about 1 G. At step215, the acceleration data in the form of Milli-G or G values calculated for the X and Y axes of each sample are combined to obtain a magnitude of acceleration for each sample. In some embodiments, equation (1) may be used to obtain the magnitude. magnitude = √(X² + Y²) (1), where X is a value of acceleration (Milli-G or G values) on the X axis and Y is a value of acceleration (Milli-G or G values) on the Y axis. At step220, the resulting magnitudes of acceleration for the samples (e.g., 64 magnitudes for the 64 sample observation window) are normalized to be a zero mean, producing a vector X. The normalization of the acceleration data to a zero mean standardizes the range of independent variables (i.e., planarizes the acceleration data) prior to FFT transformation such that the objective functions performed thereafter will work properly.
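The pre-processing of steps 210 through 220 can be sketched as below. This is a minimal sketch under the stated +/−2 G, 16-bit configuration; the function names are illustrative, and the LSB constant simply mirrors the 4,000 Milli-G/65,535 example in the text.

```python
import math

# Sketch of steps 210-220: raw counts are scaled to Milli-G, X and Y are
# combined into a magnitude per equation (1), and the magnitudes are
# normalized to zero mean to produce the vector X.

MILLI_G_PER_LSB = 4000 / 65535        # ~0.061, as in the worked example

def to_milli_g(raw):
    return raw * MILLI_G_PER_LSB

def magnitudes(samples_xy):
    # equation (1): magnitude = sqrt(X^2 + Y^2)
    return [math.sqrt(to_milli_g(x) ** 2 + to_milli_g(y) ** 2)
            for x, y in samples_xy]

def zero_mean(values):
    mean = sum(values) / len(values)
    return [v - mean for v in values]

mags = magnitudes([(16500, 0), (0, 16500)])   # toy two-sample window
vec_x = zero_mean(mags)
```

With the raw value 16,500 from the text, `to_milli_g` returns roughly 1,007 Milli-Gs, consistent with the "about 1 G" example (small differences come from rounding the LSB to 0.061).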
FIG.3depicts a simplified flowchart300illustrating a process used for assigning an activity class to an observation window (e.g., a first class that includes activities associated with consecutive steps such as walking or running, and a second class that includes activities not associated with consecutive steps such as no activity or other activities than those involving a step such as jumping or swimming). For example, to improve step counter robustness of the activity monitor system, an activity classification process may be utilized in accordance with various embodiments to gate whether or not steps should be counted in a given observational window based on whether the process concludes there are consecutive step activities (versus no activity or other activities) in the window. In some embodiments, the activity classification process is implemented as a decision tree, which allows for complex non-linear partitions to be developed in the problem space based on statistical processes. However, it should be understood that the selection of characteristics to be used in the decision tree is not a statistical process, and instead is a selective process. Through this selective process it has been found surprisingly and unexpectedly that a combination of two or more of the following characteristics: (i) the L1 Difference Norm, (ii) the L2 Difference Norm, (iii) the FFT Peak Frequency, (iv) the FFT Spectral Entropy, and (v) the FFT Total Energy, allows for the activity monitoring systems to determine a step count of a user over a predetermined period of time with minimal error rate in an environment such as a continuous glucose monitoring system with limited processing and power resources. The generation of each characteristic for input into the activity classification process may be performed as follows in accordance with various embodiments.
At step305, a sum is taken of absolute differences of magnitudes (e.g., the jerk) of acceleration calculated for each sample in step215ofFIG.2. In some embodiments, equation (3) may be used to obtain the sum of absolute differences of magnitudes of acceleration. L1 Difference Norm = Σ_{i=0}^{63} |X_{i+1} − X_i| (3), where X is the vector X of the magnitudes of acceleration calculated in step220ofFIG.2. The L1 Difference Norm effectively measures an amount of change (i.e., amplitude) of the signal, which correlates well with both the presence of steps and the speed of the steps (via acceleration magnitude). At step310, a sum is taken of absolute second differences of magnitudes (e.g., the jounce) of acceleration calculated for each sample in step215ofFIG.2. In some embodiments, equation (4) may be used to obtain the sum of absolute second differences of magnitudes of acceleration. L2 Difference Norm = Σ_{i=0}^{62} |(X_{i+2} − X_{i+1}) − (X_{i+1} − X_i)| = Σ_{i=0}^{62} |X_{i+2} − 2X_{i+1} + X_i| (4), where X is the vector X of the magnitudes of acceleration calculated in step220ofFIG.2. The L2 Difference Norm effectively measures a similarity, a quality, or a correlation between two signals, which correlates well with the presence of continuous steps. At step315, the FFT is calculated for the vector X of the magnitudes of acceleration calculated in step220ofFIG.2(e.g., FFT size=64). The FFT is a known method for calculating the Discrete Fourier Transform (DFT), and is not addressed in detail herein. At step320, a peak frequency in Hz corresponding to a maximum energy in the computed FFT is determined. In some embodiments, equation (5) may be used to obtain the peak frequency. FFT Peak Freq = argmax(P(f_i)) (5), where argmax is the x-axis value at which the function's y-axis value is at its max (e.g., the frequency at which P(f_i) is at its max value), and P(f_i) is the FFT function indexed by frequency (f_i).
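The two difference-norm features of steps 305 and 310 can be sketched as follows. For illustration the sums simply run over all available first and second differences of a short toy vector, rather than a fixed 64-sample window; the function names are hypothetical.

```python
# Minimal sketch of the L1/L2 Difference Norm features of equations (3)
# and (4), computed over a zero-mean magnitude vector X.

def l1_difference_norm(x):
    # sum of |X[i+1] - X[i]| (the "jerk")
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def l2_difference_norm(x):
    # sum of |X[i+2] - 2*X[i+1] + X[i]| (the "jounce")
    return sum(abs(x[i + 2] - 2 * x[i + 1] + x[i])
               for i in range(len(x) - 2))

x = [0.0, 1.0, 0.0, -1.0, 0.0]   # toy oscillating signal
l1 = l1_difference_norm(x)
l2 = l2_difference_norm(x)
```

A larger, faster oscillation raises `l1`, matching the text's observation that the L1 Difference Norm tracks both the presence and the speed of steps.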
The integration of the peak frequency over the observation window corresponds to the number of steps in the observation window, assuming a constant walking/running speed. At step325, spectral entropy in the computed FFT is determined. In some embodiments, equation (6) may be used to obtain the spectral entropy. FFT Spectral Entropy = Σ −P(f_i)·log(P(f_i)) (6), where P(f_i) is the FFT function indexed by frequency (f_i). A clean activity signal such as a walking/running signal will have very little entropy in the FFT (there will be only a single large frequency peak). However, entropy in the FFT may vary at different walking/running speeds due to unmodeled dynamics of human motion. Consequently, spectral entropy presents as a good metric for differentiation. At step330, a total energy in the computed FFT is determined. In some embodiments, equation (7) may be used to obtain the total energy. FFT Total Energy = Σ P(f_i) (7), where P(f_i) is the FFT function indexed by frequency (f_i). A clean activity signal such as a walking/running signal will exhibit most of its energy in the FFT at a corresponding dominant peak frequency. However, energy in the FFT may vary at different walking/running speeds due to unmodeled dynamics of human motion. Thus, energy also presents as a good metric for differentiation. A combination of characteristics including two or more of the following characteristics: (i) the L1 Difference Norm, (ii) the L2 Difference Norm, (iii) the FFT Peak Frequency, (iv) the FFT Spectral Entropy, and (v) the FFT Total Energy, is input into the activity classification process at step335, which determines activity occurring within the observation window (e.g., a 64 sample observation window) and assigns an activity class to the observation window based on the determined activity.
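The three spectrum-derived features of equations (5) through (7) can be sketched together as below. The spectrum here is a hand-built toy stand-in for the FFT of the magnitude vector (no real FFT is computed), and the function name is illustrative.

```python
import math

# Sketch of the FFT Peak Frequency, FFT Spectral Entropy, and FFT Total
# Energy features of equations (5)-(7), computed from a power spectrum
# P(f_i) represented as a dict mapping frequency (Hz) -> power.

def fft_features(spectrum):
    peak_freq = max(spectrum, key=spectrum.get)              # eq. (5)
    entropy = -sum(p * math.log(p)                           # eq. (6)
                   for p in spectrum.values() if p > 0)
    total_energy = sum(spectrum.values())                    # eq. (7)
    return peak_freq, entropy, total_energy

# A "clean" walking-like spectrum: nearly all energy at a single ~2 Hz peak.
clean = {1.0: 0.01, 2.0: 0.98, 3.0: 0.01}
peak, ent, energy = fft_features(clean)
```

As the text predicts, the single-peak spectrum yields low entropy, while a spectrum with energy spread evenly across frequencies yields much higher entropy.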
In some embodiments, each of the two or more characteristics is selected without replication from the group consisting of: (i) the L1 Difference Norm, (ii) the L2 Difference Norm, (iii) the FFT Peak Frequency, (iv) the FFT Spectral Entropy, and (v) the FFT Total Energy. In alternative embodiments, the characteristics include the L1 Difference Norm and the FFT Spectral Entropy. In other embodiments, the characteristics include the L1 Difference Norm, the L2 Difference Norm, the FFT Peak Frequency, and the FFT Spectral Entropy. In yet other embodiments, the characteristics include the L1 Difference Norm, the L2 Difference Norm, the FFT Peak Frequency, the FFT Spectral Entropy, and the FFT Total Energy. In various embodiments, the activity classification process for determining activity occurring within the observation window and assigning an activity class to the observation window includes: (i) determining a probability for each of one or more activities determined to be occurring within the observation window, (ii) determining an activity with the greatest probability of occurring within the observation window based on the determined probabilities for the one or more activities, (iii) assigning a first class to the observation window when the determined activity is associated with consecutive steps, and (iv) assigning a second class to the observation window when the determined activity is not associated with consecutive steps. In some embodiments, the one or more activities include no activity, activities other than those including steps (e.g., jumping or swimming), walking, jogging, running, etc., the first class may include activities associated with consecutive steps such as walking or running, and the second class may include activities not associated with consecutive steps such as no activity or other activities than those involving a step.
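The four-part assignment procedure just described can be sketched as below. The activity names, the probability values, and the set of step-related activities are illustrative placeholders, not trained outputs of the disclosed decision tree.

```python
# Hypothetical sketch of the (i)-(iv) assignment procedure: compare the
# per-activity probabilities, then assign the window class based on
# whether the most probable activity involves consecutive steps.

STEP_ACTIVITIES = {"walking", "jogging", "running"}

def assign_class(activity_probs):
    best = max(activity_probs, key=activity_probs.get)   # steps (i)-(ii)
    if best in STEP_ACTIVITIES:
        return "first_class"                             # step (iii)
    return "second_class"                                # step (iv)

assigned = assign_class({"no_activity": 0.1, "walking": 0.7, "jumping": 0.2})
```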
In other embodiments, the first class may be partitioned to provide greater detail concerning the frequency at which the consecutive steps are being taken. For example, the first class may instead be separated into a combination of a low activity class (a lower frequency of consecutive steps being taken such as in walking) and a high activity class (a higher frequency of consecutive steps being taken such as in jogging/running). The low activity class may be defined as approximately a 1.5-3 mph walk, the high activity class may be defined as approximately a 3-7 mph jog/run, and the second class, no activity class, or other activity class may be defined as anything less than approximately 1.5 mph activity. In such an instance the activity classification process for determining activity occurring within the observation window and assigning an activity class to the observation window may include: (i) determining a probability for each of one or more activities determined to be occurring within the observation window, (ii) determining an activity with the greatest probability of occurring within the observation window based on the determined probabilities for the one or more activities, (iii) assigning a first class to the observation window when the determined activity is associated with walking, (iv) assigning a second class to the observation window when the determined activity is associated with running, and (v) assigning a third class to the observation window when the determined activity is associated with no activity or an activity that does not include consecutive steps. In various embodiments, the activity classification process includes the use of a decision tree developed statistically through machine learning on accelerometer training data to determine activity occurring within the observation window and assign an activity class to the observation window.
An example of a simple decision tree used in the activity classification process is shown inFIG.4; however, it should be understood that the decision tree used in actual practice may be more complex (e.g., the decision tree may further include branches to determine additional activity classes such as high activity or running). The left branch may be traversed when the decision tree element inequality is satisfied and the right branch may be traversed when the decision tree element inequality is not satisfied. The node shading indicates the nodal decision strength for non-walking and walking. The leaf nodes do not have any conditions, as they are the final class “decision” of the tree. As an illustrative example, if the L1 Difference Norm calculated for an observation window is less than or equal to “X” in step405and greater than “Y” in step410, then the spectral entropy calculated for the observation window is compared to a value “Z” in step415. If the spectral entropy is less than or equal to “Z”, then in step420a probability is assigned to the activity of walking based on the nodal decision strength for the leaf. If the spectral entropy is greater than “Z”, then in step425a probability is assigned to the activity of non-walking based on the nodal decision strength for the leaf. The activity (e.g., non-walking or walking) with the greatest probability of occurring within the observation window is then determined based on the determined probabilities for the one or more activities. The observation window is then assigned a class based on the activity determined to have the greatest probability of occurring within the observation window (e.g., a low activity class may be assigned to the observation window if the activity having the greatest probability of occurring was walking). At step340, the activity class determined in step335is used to increment a total count of the activity class for a user over a predetermined period of time.
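The FIG. 4 style traversal through steps 405, 410, 415, 420, and 425 might be sketched as follows. The threshold values X, Y, and Z and the leaf probabilities below are hypothetical placeholders, not the statistically trained values of the actual tree.

```python
# Hedged sketch of a FIG. 4 style traversal; thresholds and leaf
# probabilities are illustrative only.

X, Y, Z = 1200.0, 300.0, 0.5   # hypothetical split thresholds

def classify(l1_norm, spectral_entropy):
    if l1_norm <= X:                       # step 405
        if l1_norm > Y:                    # step 410
            if spectral_entropy <= Z:      # step 415
                return ("walking", 0.9)    # step 420: leaf strength
            return ("non-walking", 0.8)    # step 425: leaf strength
        return ("non-walking", 0.95)       # too little change: no steps
    return ("non-walking", 0.7)            # e.g. vigorous non-step motion

activity, prob = classify(l1_norm=900.0, spectral_entropy=0.3)
```

The returned activity with the greatest probability would then determine the class assigned to the window, as described above.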
In some embodiments, a record is maintained in a database (e.g., database160as discussed with respect toFIG.1) for each activity class (e.g., first class, second class, third class, no activity class, low activity class, high activity class, etc.), and the total count for each activity class is incremented (e.g., incremented by 1) each time the activity class is assigned to an observation window. Accordingly, the activity monitor is configured to keep a tally of all types of activity performed by a user over a predetermined period of time (e.g., an hour, a day, a week, etc.). FIG.5depicts a simplified flowchart500illustrating a step counting process used to count steps within an observation window. At step505, a determination is made as to whether the step counting process should be initiated. If the activity classification process (e.g., step335described with respect toFIG.3) concludes that the activity class assigned to the observation window is associated with no activity or other activities than those involving a step (e.g., jumping or swimming), then it is determined that consecutive steps are not present in the activity and consequently the process ends at step510and the step counting process is not performed for the observation window. If the activity classification process concludes that the activity class assigned to the observation window is associated with a low activity (e.g., walking) and/or a high activity (e.g., running), then it is determined that consecutive steps are present in the activity and consequently further processing is performed starting at step515to determine an estimate of the number of consecutive steps present in the observation window. At step515, harmonics introduced into the accelerometer data are detected and adjusted by scaling the magnitude of the FFT to accurately count consecutive steps despite the harmonics. The FFT may be obtained from step315described with respect toFIG.3. 
The harmonics or higher order effects may be associated with various motions or noises (e.g., associated with attachment of the continuous glucose monitor to a user's body) that have a frequency that is an integer multiple of the fundamental frequency or frequency associated with consecutive steps for an activity class (e.g., a low activity class or a high activity class). In certain embodiments, the process of detecting and adjusting for the harmonics may include selecting a binary spectrum weight function based on the activity class determined for the observation window. The binary spectrum weight enables the step counting process to accurately remove the harmonics in the spectrum domain, and thus correctly identify the spectral peak or fundamental frequency that represents the number of consecutive steps per second within the observed time window. For example, equation (8) may be used to scale the magnitude of the FFT if a low activity class is assigned to the observation window, and equation (9) may be used to scale the magnitude of the FFT if a high activity class is assigned to the observation window. M′ = M for f ≤ 1 Hz; M′ = M/f² for f > 1 Hz (8), M′ = M for f ≤ 3 Hz; M′ = M/f² for f > 3 Hz (9), where M is magnitude and f is the FFT frequency. In some embodiments, 1/f² is implemented as a lookup table to optimize runtime. This scaling dampens the harmonics and higher order effects that may be present in the FFT. The window of scaling corresponds to the expected frequency range for steps for each activity class. At step520, a peak frequency in Hz corresponding to a maximum energy in the computed FFT is determined using the scaled magnitude calculated in step515. In some embodiments, equation (10) may be used to obtain the peak frequency. Modified FFT Peak Freq = argmax(P(f_i)) (10), where argmax is the x-axis value at which the function's y-axis value is at its max (e.g., the frequency at which P(f_i) is at its max value), and P(f_i) is the FFT function indexed by frequency (f_i).
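The class-dependent scaling of equations (8) and (9) can be sketched as below. The dict-based spectrum and the function name are illustrative; a real implementation might use the lookup-table optimization mentioned above instead of computing 1/f² directly.

```python
# Sketch of the binary spectrum weighting of equations (8) and (9):
# above the class-dependent cutoff (1 Hz for low activity, 3 Hz for
# high activity), FFT magnitudes are damped by 1/f^2.

CUTOFF_HZ = {"low_activity": 1.0, "high_activity": 3.0}

def scale_spectrum(spectrum, activity_class):
    cutoff = CUTOFF_HZ[activity_class]
    return {f: (m if f <= cutoff else m / f ** 2)
            for f, m in spectrum.items()}

spectrum = {0.5: 4.0, 2.0: 8.0, 4.0: 16.0}
scaled_low = scale_spectrum(spectrum, "low_activity")
```

Under the low activity cutoff the 2 Hz and 4 Hz components are damped, while under the high activity cutoff the 2 Hz component (a plausible running cadence) is left intact.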
The dominant peak frequency corresponds to the walking frequency in Hz. At step525, the step count of the observation window is determined by integrating the dominant peak frequency over the width of the observation window. In some embodiments, equation (11) may be used to increment the step count. Step Count = Modified FFT Peak Freq × width (11), where Modified FFT Peak Freq is calculated in step520and width is the width of the observation window (e.g., the width of the observation window may be 5.12 seconds for the 64 sample observation window). At step530, the step count determined in step525is used to increment the total step count for the user over a predetermined period of time. In some embodiments, a record is maintained in a database (e.g., database160as discussed with respect toFIG.1) for the total step count (e.g., the step count over multiple observation windows throughout a predetermined period of time), and the total step count is incremented (e.g., incremented by the steps determined in step525) each time an observation window is processed. In other embodiments, since the activity monitoring system determines a class of activity being performed (e.g., walking or running), a separate tally may be kept for steps associated with each class of activity. Accordingly, the activity monitor is configured to keep a tally of all steps taken by a user over a predetermined period of time (e.g., a half hour, an hour, a day, a week, etc.). In various embodiments, the activity class data and the step count data maintained in a database (e.g., database160as discussed with respect toFIG.1) are used by the continuous glucose monitoring system to provide contextual labels to glucose readings prior to reporting the glucose readings.
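Equation (11) and the per-class running tally of steps 525 and 530 can be sketched as below. The peak frequencies in the loop are synthetic sample values, and the in-memory dict is merely a stand-in for the database record.

```python
# Sketch of equation (11) and the per-class step tally: the step count
# for one window is the dominant (modified) peak frequency times the
# window width, accumulated into a per-class total.

WINDOW_WIDTH_S = 5.12   # 64 samples at 12.5 Hz

def window_step_count(peak_freq_hz, width_s=WINDOW_WIDTH_S):
    return peak_freq_hz * width_s     # eq. (11)

totals = {"walking": 0.0, "running": 0.0}
for activity, peak in [("walking", 1.95), ("walking", 2.15),
                       ("running", 2.93)]:
    totals[activity] += window_step_count(peak)
```

For example, a 2 Hz dominant cadence over a 5.12 s window contributes 10.24 steps, so fractional per-window counts accumulate into the running total.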
For example, the continuous glucose monitoring system may be configured to perform glucose analysis on a patient (e.g., determine a patient's blood glucose concentration) and report glucose readings based on a time schedule (e.g., a predetermined period of time such as a half hour). Prior to reporting the glucose readings to a mobile device/reader, the continuous glucose monitoring system may be further configured to review the database and provide contextual labels (e.g., “high activity time interval” vs. “low activity interval” vs. “no activity interval”) to the glucose readings based on the activity class data and the step count data recorded for the observation windows that occurred within the predetermined period of time (e.g., the previous half hour). In additional or alternative embodiments, the activity class data and the step count data maintained in a database (e.g., database160as discussed with respect toFIG.1) are used by the continuous glucose monitoring system to calibrate or correct glucose readings prior to reporting the glucose readings. For example, the continuous glucose monitoring system may be configured to perform glucose analysis on a patient (e.g., determine a patient's blood glucose concentration) and report glucose readings based on a time schedule (e.g., a predetermined period of time such as a half hour). Prior to reporting the glucose readings to a mobile device/reader, the continuous glucose monitoring system may be further configured to review the database and calibrate or correct the glucose readings based on the activity class data and the step count data recorded for the observation windows that occurred within the predetermined period of time (e.g., the previous half hour). 
In some embodiments, the calibrating or correcting the glucose readings may include applying a correction factor to the glucose reading based on the activity class data and the step count data recorded for the observation windows that occurred within the predetermined period of time (e.g., the previous half hour). In additional or alternative embodiments, the activity class data and the step count data maintained in a database (e.g., database160as discussed with respect toFIG.1) are used by the continuous glucose monitoring system to report predetermined time period based metrics. For example, the continuous glucose monitoring system may be configured to report walking minutes and running minutes based on a time schedule (e.g., a predetermined period of time such as daily) since the activity monitor is configured to maintain data on whether an individual is walking or running and the steps associated with each activity class.

IV. Examples

Without intending to limit the scope of the embodiments discussed herein, the systems and methods implemented in various embodiments may be better understood by referring to the following examples.

Example 1 (Activity Monitoring System—Power Usage)

An ADXL362 accelerometer was configured with the following settings to produce characteristics used in the processes described herein to assign an activity class to an observation window and count steps: +/−2 G operating mode, 12.5 Hz sampling rate, normal noise mode, FIFO set to streaming mode, and a FIFO watermark set to 128 tri-axial samples.
An investigation of the activity monitoring system during runtime revealed that the activity monitoring system increases power draw from the environment in which it is operating in the following areas: (i) accelerometer base load, which is a continuous power draw by the ADXL362 accelerometer when it is sampling, (ii) FIFO data download, which is the system power used to download 128 samples from the accelerometer FIFO, (iii) activity classification algorithm and step counting algorithm, which is system power used to run the algorithms, and (iv) record to database, which is system power required to log classification and step count to database at intervals (this power draw is trivial and was thus ignored in this example). With the given accelerometer configuration and at a nominal operating voltage of 3.2 V, the accelerometer base load draw was a constant average power draw of 1.8 μA. The FIFO data download step includes processing data that consists of 776 bytes of data being transferred between the processor and accelerometer, including: 4 bytes to read the number of samples in the FIFO (command, address, and u16 data), 768 bytes=128 samples×6 bytes per sample, and 4 bytes=1 byte overhead per 42 samples, as the read is facilitated in chunks of bytes. At an operating rate of 8 MHz, the download step will take at least 0.776 ms. In practice, this step takes 1.375 ms of CPU uptime including overheads, with a system power draw of 3.65 mA. Given the FIFO should be downloaded every 10.24 s, this uptime corresponds to an average power draw of 0.49 μA. Running the activity classification algorithm and the step counting algorithm on the 128 samples of data (two 64-sample observation windows) utilizes a CPU uptime of 1.806 ms (empirical). This uptime corresponds to an average power draw of 0.64 μA. Accordingly, the activity monitoring system adds an average power draw of 2.93 μA to the average system power draw at a nominal operating voltage of 3.2 V.
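The arithmetic in this example can be reproduced directly. The variable names below are illustrative; the figures (776 bytes, 8 MHz, 1.375 ms uptime, 10.24 s period, 3.65 mA active draw) are those stated above.

```python
# Reproducing the Example 1 arithmetic: the FIFO download size and
# minimum transfer time, and the average-current contribution of the
# periodic CPU burst that performs the download.

bytes_per_download = 4 + 128 * 6 + 4                   # 776 bytes total
min_transfer_ms = bytes_per_download * 8 / 8e6 * 1e3   # at 8 MHz: 0.776 ms

uptime_s = 1.375e-3        # measured CPU uptime per download
period_s = 10.24           # one download per 128 samples at 12.5 Hz
active_draw_a = 3.65e-3    # system draw while downloading
avg_draw_ua = active_draw_a * uptime_s / period_s * 1e6   # ~0.49 uA
```

Adding this 0.49 μA to the 1.8 μA base load and the 0.64 μA algorithm contribution reproduces the stated 2.93 μA total.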
In contrast, the conventional activity monitoring system adds an average power draw of 8.48 μA to the average system power draw at a nominal operating voltage of 3.2 V. The only difference between the activity monitoring systems was the algorithms used for determining the step count. Consequently, the processes of various embodiments discussed herein are capable of reducing the power draw on the environment in which the activity monitoring system is operating by about 5.5 μA. Advantageously, these approaches provide activity monitoring systems and methods that are capable of operating in an environment such as a continuous glucose monitoring system with limited processing and power resources.

Example 2 (Activity Monitoring System—Error Rate Verification)

The accelerometer and activity monitoring system described with respect to Example 1 were attached to 10 participants. In particular, one accelerometer and activity monitoring system was attached to the right abdomen of each participant and a raw data logger (activity monitoring system used as a control/training) was attached to the right abdomen of each participant. The activity monitoring systems and data loggers were set to record accumulated consecutive steps every 15 seconds, and an experimenter manually counted actual consecutive steps using a clicker (+/−1 step error). The participants were each asked to perform various activities indoors (i.e., on a treadmill) and outdoors for various lengths of time (as shown inFIG.6).FIGS.7A,7B,7C,7D,7E,7F,7G,7H,7I,7J,7K,7L,7M, and7Nshow signals generated and recorded for each of the activities performed by the participants, in accordance with various embodiments.FIGS.8A and8Bshow that for each activity the error in determining the step count was less than 10%, and less than 5% for any activity having a speed of greater than 1.5 mph.
Accordingly, the activity monitoring system is capable of maintaining an error rate of less than 10% for activities that include taking consecutive steps, such as walking, jogging, and running, regardless of whether the activity is performed on a treadmill or outdoors. In contrast, the conventional activity monitoring system is capable of maintaining an error rate of less than 10% for activities that include taking consecutive steps, such as walking, jogging, and running outdoors, but demonstrates a high error rate (e.g., up to a 60% error rate) when determining steps taken during certain physical activities such as walking on a treadmill. Consequently, the processes of the various embodiments discussed herein are capable of achieving minimal error in step count, and in some circumstances even improve upon conventional step count techniques. Advantageously, these approaches provide activity monitoring systems and methods that are capable of determining user activity with minimal error rate while operating in an environment such as a continuous glucose monitoring system with limited processing and power resources. While various embodiments have been described in detail, modifications within the spirit and scope of the present invention will be readily apparent to the skilled artisan. It should be understood that certain aspects and portions of various embodiments and various features recited above and/or in the appended claims may be combined or interchanged either in whole or in part. In the foregoing descriptions of the various embodiments, those embodiments which refer to another embodiment may be appropriately combined with other embodiments as will be appreciated by the skilled artisan. Furthermore, the skilled artisan will appreciate that the foregoing description is by way of example only, and is not intended to limit the present invention.
DETAILED DESCRIPTION

The weak correlation between the initial priority that a product user assigns to a service ticket and the subsequent escalation of the service ticket is in part due to a lack of consensus between the product user and the assigned service agent on what the priority of the service ticket should be at the outset. This weak correlation may also be due to the product user's emotions decreasing or increasing the product user's perception of a service ticket's priority over time. However, a ticketing system does not record a product user's emotionally changing perception of the priority unless the product user submits such a mental change in priority into the ticketing system. For a system that predicts escalations of service tickets by focusing predominantly on leveraging metadata about the communications between the two (or more) parties in a service ticket's conversation, the analysis of the resulting escalation predictions may determine that while recall was excellent, the false positive rate could become excessive when the predictions were executed at the most sensitive level. Detailed analysis of the false positives identified a variety of causes. This disclosure presents additional features that address these causes and thereby both reduce the number of false positives and improve the fidelity of the escalation predictions. Embodiments herein enable high fidelity predictions of service ticket escalation. A system derives training set change factors for services provided for a training set product user, a priority assigned to a training set service ticket initiated by the training set product user, times of service ticket interactions associated with the training set service ticket, and/or an age of the training set service ticket, and also for times of states of the training set service ticket.
The system uses the training set service ticket and the training set change factors to train a change-based machine-learning model to predict a change-based training probability that the training set product user escalated service for the training set service ticket. The system derives change factors for services provided for a product user, a priority assigned to a service ticket initiated by the product user, times of service ticket interactions associated with the service ticket, and/or an age of the service ticket, and also for times of states of the service ticket. The system applies the change-based machine-learning model to the service ticket and the change factors to predict a change-based probability that the product user escalates service for the service ticket. The system outputs the change-based probability. For example, a training server receives training set data that includes a technical support ticket that contains all subsequent interactions102between a software product user Ann and a technical support agent Bob concerning a remote mount problem, and the technical support ticket's metadata104, as depicted byFIG.1. Then the training server derives the training set's change factors which indicate that Bob was the only technical support agent who replied to Ann, that Bob did not request any information from Ann, two of their three interactions included machine text, that Ann did not change the ticket's priority, the most recent comment was Ann's response thanking Bob for his advice, and the time series data's timestamps106indicate that Ann's thanks was before the first hourly observation of the service ticket's state after the ticket's initiation. 
The training server uses the technical support ticket's unchanging priority, and the natural language processing of the last comment as Ann's thankful response to Bob's advice within 35 minutes of the service ticket's initiation, which implies that Ann following Bob's advice corrected Ann's problem, to predict a 1% probability that Ann escalated her service ticket within 90 days of initiation. Then the training server accesses the time series data's timestamps106which indicate that Ann closed the service ticket on Wednesday at 2:45 P.M., which confirms the training server's 1% escalation prediction. A production server receives online production data that includes a pending urgent technical support ticket that contains all subsequent interactions202and204between a software product user Chris and a technical support agent Dana concerning a remote mount problem, and the technical support ticket's metadata206, as depicted byFIG.2. Then, the production server derives subsequent change factors which indicate that Chris changes the ticket's priority to urgent, the most recent comments were Chris' second request for help, which implies that Dana's advice was ineffective, and Chris's frustrated “Hello?”, the time series data's timestamps208which indicate that Chris' implied rejection of Dana's advice was before the first hourly observation of the service ticket's state after the ticket's initiation, and the lack of Dana's reply to Chris' second request for help within the next 5 hourly observations. The production server uses the technical support ticket's new urgent priority, the natural language processing of Chris' frustrated responses to Dana's mount problem advice, which implies that Chris following Dana's advice failed to correct Chris' problem, and the lack of Dana's reply to Chris' responses within 5 hours to predict a 95% probability that Chris will escalate service within the next 4 hours. 
The production server outputs the prediction of the 95% probability that Chris will escalate service within the next 4 hours, and the explanation that the prediction is based on 1) the last two comments which are Chris' frustrated responses to Dana's mount problem advice, which implies that Chris following Dana's advice failed to correct Chris' problem, 2) the lack of Dana's reply to Chris' responses within 5 hours, and 3) the technical support ticket's new urgent priority. Being able to predict the escalation probabilities of service tickets more intelligently in real-time can draw attention to those service tickets that might be escalated in the future. These predictions can give a service company sufficient time to be able to redirect or transfer resources to service tickets that need them the most. A timely transfer of resources can prevent the escalations. Integrating such an intelligent system that closely assesses service ticket progression is more robust and reliable than using simple pre-set indicators or round robin mechanisms to allocate resources. FIG.3illustrates a block diagram of an example system300for high fidelity predictions of service ticket escalation, under an embodiment. As shown inFIG.3, the system300may illustrate a cloud computing environment in which data, applications, services, and other resources are stored and delivered through shared data centers and appear as a single point of access for the product users. The system300may also represent any other type of distributed computer network environment in which servers control the storage and distribution of resources and services for different client users. In an embodiment, the system300represents a cloud computing system that includes a first client302, a second client304, a third client306, a fourth client308, a fifth client310; and a first server312and a second server314that may be provided by a hosting company. The clients302-310and the servers312-314communicate via a network316. 
The first server312may be referred to as the training server312, and the second server314may be referred to as the production server314. The training server312may include a training escalation prediction system318, which may include a history-based training machine-learning model320and a change-based training machine-learning model322; and the production server314may include a production escalation prediction system324, which may include a history-based production machine-learning model326and a change-based production machine-learning model328. The training escalation prediction system318may include the history-based training machine-learning model320and the change-based training machine-learning model322, or the history-based training machine-learning model320and the change-based training machine-learning model322may be combined into one training machine-learning model. Similarly, the production escalation prediction system324may include the history-based production machine-learning model326and the change-based production machine-learning model328, or the history-based production machine-learning model326and the change-based production machine-learning model328may be combined into one production machine-learning model. Even thoughFIG.3depicts the first client302as a smartphone302, the second client304as a terminal304, the third client306as a tablet computer306, the fourth client308as a laptop computer308, the fifth client310as a personal computer310, and the servers312-314as servers312-314, each of the system components302-316may be any type of computer system. The system elements302-314may each be substantially similar to the hardware device500depicted inFIG.5and described below. 
WhileFIG.3depicts the system300with five clients302-310, two servers312-314, one network316, two escalation prediction systems318and324, two training machine-learning models320and322, and two production machine-learning models326and328, the system300may include any number of clients302-310, any number of servers312-314, any number of networks316, any number of escalation prediction systems318and324, any number of training machine-learning models320and322, and any number of production machine-learning models326and328. AlthoughFIG.3depicts all of the training elements318-322residing completely on the training server312, any or all of the training elements318-322may reside completely on the production server314, or in any combination of partially on the training server312, partially on the production server314, partially on the clients302-310, such as by residing as data management applications on the clients302-310, and partially on another server which is not depicted inFIG.3. WhileFIG.3depicts all of the production elements324-328residing completely on the production server314, any or all of the production elements324-328may reside completely on the training server312, or in any combination of partially on the production server314, partially on the training server312, partially on the clients302-310, such as by residing as data management applications on the clients302-310, and partially on another server which is not depicted inFIG.3. After training to predict service ticket escalation, the system300may be referred to as the trained system300. A product user's decision to escalate a service ticket may be difficult to predict because such a decision is so heavily dependent on human nature. Since every person is unique, each product user has a different tolerance for a delayed resolution of a service ticket. Furthermore, each product user's tolerance varies based on a wide variety of subtle factors. 
Some of these factors are more quantifiable, including the recent history of the product user's problems, the recent history of the user's service ticket interactions, and the user's perception of the ‘appropriate’ resolution time for a service ticket. The system300can directly derive these factors from the service tickets, and from the record of historical interactions. Emotional factors are more difficult to quantify, including the impact on a product user's work or their business, whether the user is upset because the user was away or on vacation when the problem occurred and still had to respond to the problem, how visible the problem is to the user's supervisor, and the user's stress level. Since these factors are more difficult to capture, the system300can use proxies to incorporate the effect of these behavioral components. For example, the system300determines whether a product user's sentiment has deviated from their normal sentiment. The system300may determine a product user's normal and current sentiments by exposing an interface to accept individual comments associated with a service ticket as input data points that the system300scores based on sentiment. Such a query-able interface includes, but is not limited to, a REST application programming interface (API), a Python API, a web API, or a web user interface. The system300may be queried both when a new event occurs for a service ticket, such as when a product user generates a new comment, and at a specified frequency, such as every 4 hours, as time dependent factors may cause the escalation probability to change even when there is no service ticket activity. For example, when a service agent has not responded to a product user's question for 4 hours, the system300predicts that the probability of service ticket escalation increases.
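The two query triggers described above (a new ticket event, and a fixed re-scoring interval) can be expressed as a simple predicate. The following sketch is illustrative only; the names and the 4-hour default are assumptions drawn from the example frequency given in this description:

```python
from datetime import datetime, timedelta

POLL_INTERVAL = timedelta(hours=4)  # re-score even when there is no ticket activity

def needs_rescore(last_scored_at, last_event_at, now):
    """True when a new ticket event arrived since the last scoring, or when the
    time-dependent poll interval has elapsed with no activity."""
    return last_event_at > last_scored_at or now - last_scored_at >= POLL_INTERVAL

scored = datetime(2023, 1, 1, 8, 0)
# No new event, only 2 hours elapsed -> no re-score yet.
quiet = needs_rescore(scored, datetime(2023, 1, 1, 7, 0), datetime(2023, 1, 1, 10, 0))
# No new event, but 4 hours elapsed -> time-dependent re-score.
stale = needs_rescore(scored, datetime(2023, 1, 1, 7, 0), datetime(2023, 1, 1, 12, 0))
```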
When a query occurs, the system300can internally derive the necessary factors from the service ticket's history and/or subsequent changes to the service ticket, and then output a predicted probability of service ticket escalation. Therefore, the system300can predict the probability of service ticket escalation based on information about the service ticket at initiation and/or any combination of information about the product user, the product, the service agent, and the service ticket, which may be at and/or subsequent to the initiation of the service ticket. The system300optionally extracts static initial or history factors about entities associated with a service ticket, such as information about a product user, a product, and an assigned service agent based on when a service ticket was initialized. The history factors may include a product user's escalation history and a product user's engagement score, for both an individual who is a product user and an organization that is a product user, a service agent's performance history, a product's escalation history, a product problem's escalation history, a service ticket's region, and a service ticket's original urgency metrics. The system300can derive a product user's escalation history based on the number of times the user has escalated a service ticket over a period of time (such as 90 days) divided by the number of service tickets the user has initiated, which captures the unconditional propensity of the user to escalate a service ticket.
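The escalation-history factor just described is a simple ratio. A sketch follows; the function name is illustrative, and the guard for a user with no tickets is an added assumption not stated in this description:

```python
def user_escalation_history(escalations_in_window, tickets_initiated):
    """Unconditional propensity of a product user to escalate: escalations over
    the period (e.g., 90 days) divided by the tickets the user initiated."""
    if tickets_initiated == 0:
        return 0.0  # assumed behavior for a user with no tickets yet
    return escalations_in_window / tickets_initiated
```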
The system300can derive a product user's engagement score based on the number of service tickets initiated by the user, the number of service tickets initiated by the user that are urgent, the number of service tickets initiated by the user after work hours and/or on the weekend, the elapsed time since the user began using the product, the breadth of products used by the user, and whether the user is a partner of the product user organization or eligible for a higher level of support based on the user's service contract. The different regions where service tickets are initiated can have different behavior patterns by the product user, service agent, and/or product, and consequently different escalation patterns. The system300can derive a service agent's history based on the rate at which the service agent who is assigned to the service ticket has internally escalated service tickets for the product user in the past, normalized with respect to the total number of service tickets that the service agent has worked on for the same user. In addition, the system300can base the service agent's history on the frequency with which the service tickets assigned to the service agent have been escalated by the product user in the past, which represents a more granular approach to a product user's escalation history, since it is grouped by the service agent. The system300can derive the service ticket's original urgency metrics based on whether a service ticket with an urgent priority was initiated after work hours or on the weekend, and the original priority when the service ticket was initiated. The system300can also use natural language processing to derive urgency metrics from the service ticket's body.
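Both of the agent-history rates described above share the same normalizer, the number of tickets the agent has worked for the same user. A sketch of the pair, with illustrative names and an assumed guard for a new agent/user pairing:

```python
def agent_history_factors(internal_escalations, user_escalations, tickets_for_user):
    """Per-agent rates normalized by the tickets the agent worked for the same
    user: (internal escalation rate, user-initiated escalation rate)."""
    if tickets_for_user == 0:
        return 0.0, 0.0  # assumed behavior when the agent has no history with the user
    return (internal_escalations / tickets_for_user,
            user_escalations / tickets_for_user)
```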
Additionally, the system300can use named entity recognition to detect keywords from a service ticket's text, and the keywords can provide information that enables the machine-learning models320,322,326and328to discern between different ticket types that may have different behaviors. For example, a service ticket pertaining to a licensing request probably has an extremely different escalation profile than a ticket related to a production server crash. Rather than just providing detected keywords as a field to one of the machine-learning models320,322,326and328, and allowing a single machine-learning model to accommodate these differences, the system300can retain a separate machine-learning model for each ticket type or product type. While predicting an impending escalation of a service ticket is of immense value to a service company, predicting the escalation event well in advance is more actionable since an advance prediction gives the service company enough time to pool resources that may include re-directing the service ticket to the appropriate experts. Different service companies may prefer different windows for time to predicted escalation. Accordingly, the system300allows for the selection and tuning of models that perform best on the pre-specified time to escalation window. The underlying relationships between factors may change over time and this is reflected in changes in the distribution of the factors in the incoming data compared to that of the factors used to train the training machine-learning models320and/or322. Such distributional shifts may be identified along with monitoring model performance. When required, the training machine-learning models320and/or322may be retrained to remain up to date and capture all the variations in incoming data. In addition, the system300can bootstrap the training of the training machine-learning models320and/or322. 
Since the training machine-learning models320and/or322demonstrate portability, they may be deployed for service companies that may be newer and have not yet gathered enough historical data to train their customized models. The system300optionally derives a training set's history factors for a training set's product user, who initiated a training set's service ticket, and/or a training set's service agent, who was assigned to the training set's service ticket. The training set's history factors of the training set's product user may be based on any escalations of service and/or any training set service tickets that were initiated by the training set's product user, and any products used by the training set's product user and/or a service level agreement associated with the training set's product user. For example, the training escalation prediction system318receives training set data that includes a technical support ticket that contains the initial interactions100between the software product user Ann and the technical support agent Bob concerning a remote mount problem, as depicted byFIG.1. Continuing the example, the training escalation prediction system318derives the training set's history factors which indicate that Ann previously initiated 1 technical support ticket, Bob had solved her previous problem in 15 minutes, and Ann's employer purchased a basic service level agreement for the software product, which is the support company's only product that Ann's employer uses. The training set's service ticket may include a training set priority and the context in which the training set's product user assigned the training set priority. For example, Ann assigned a low priority to the technical support ticket when she initiated the ticket on a Wednesday afternoon at 2:00 P.M. A training set service ticket can be a request that had been logged on a work tracking system detailing an issue that needed to be addressed and that is used as an example for learning.
A training set product user can be a person or organization that utilized an item that was sold or leased and who is used as an example for learning. A training set service agent can be a person who was responsible for providing an act of assistance and who is used as an example for learning. A training set history factor can be a group of examples that are used as an example for learning and that had previously contributed to a result. An escalation of service can be a requested increase in a level of assistance. A product can be an item that is sold or leased. A service level agreement can be a contract specifying degrees of support for a product user. A training set priority can be a condition of a thing having been more urgent than other things. A context can be the circumstances that form the setting for an event. Service ticket escalations arise due to the interaction of multiple entities such as product users, service agents, products, service ticket problems, and service ticket comments. Consequently, the system300can index each entity on a time axis to reflect the evolution and changes in the entity's interactions or behavior, and then derive and use cyclical (time-based) factors to enhance the probability predictions of a service ticket escalation. Since product users may go through the life cycle stages of onboarding, service upgrades, and service downgrades, the system300may base the predicted probability of a service ticket escalation on the stage of a product user in the sales cycle. At the initiation of the sales cycle, a product user is likely to be more focused on product exploration, and not be a prolific user. Once a product is fully embedded within a business, then the product user is likely to have upgraded to a service contract with stricter service level agreements, and therefore will have a lower tolerance for delays in service ticket handling, and a higher probability of production-related service ticket escalations.
Consequently, the expectations of such product users for service ticket resolution time might vary over the course of the sales cycle. The system300can base the predicted probability of a service ticket escalation on additional cyclical information, such as a product user's recent escalation history. Since service agents may go through the life cycle stages of inexperience, followed by gradual experience building, as well as stages of being assigned to few service tickets and then multiple service tickets, the system300can base the predicted probability of a service ticket escalation on the life cycle stage of the service agent assigned to a service ticket. The solution the service agent proposes will depend upon the service agent's life cycle stage that reflects the level of experience and knowledge the service agent has regarding the service ticket's product and the product's problems. As a service agent gains more experience and knowledge about products, the service agent's ability to handle complex service tickets will improve, leading to a downward modulation of the probability of service ticket escalation. The system300can base the predicted probability of a service ticket escalation on additional cyclical information, such as a service agent's recent performance. Since products may go through the life cycle stages of launch, version upgrading, and eventual deprecation, the system300may base the predicted probability of a service ticket escalation on the product life cycle stage as part of the product information that is extracted. If a product has been newly launched, there might be more service tickets that are escalated for the new product since both the product users and the service agents might lack the requisite expertise in solving the new product's problems. Some product users with more experience may have tempered expectations regarding the capabilities of new products. 
The system300can base the predicted probability of a service ticket escalation on additional cyclical information, such as a product's recent escalation history. Similarly, product problems and service tickets may go through life cycle stages as similar problems are solved and more information is added in the form of comments. Once the derived history factors are computed for an input datapoint, the training escalation prediction system318optionally trains a classifier model to predict a probability that the product user escalated service for the service ticket. The training escalation prediction system318may feed the derived history factors to a variety of models, such as gradient boosted classifiers, k-nearest neighbor classifiers, neural networks, random forests, support vector machines, naive Bayes classifiers, and logistic regression models. For example, the training escalation prediction system318uses the derived history factors to train a gradient boosted classifier as a model which produces a score ranging between 0 and 100, where 100 indicates 100% certainty that a service ticket will be escalated within a given time window. After the derivation of the training set's history factors, the system300optionally uses the training set's service ticket and the training set's history factors to train a history-based machine-learning model to predict a history-based training probability that the training set's product user escalated service for the training set's service ticket. The history-based training probability may be further based on any life cycle stages corresponding to the training set's product user, the training set's service agent, and/or the training set's user product. 
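As a concrete stand-in for the classifier choices listed above, the sketch below trains a tiny logistic-regression model by hand on two hypothetical history factors and expresses its prediction on the same 0-100 scale described for the gradient boosted classifier. The factors, data, and hyperparameters are all illustrative assumptions, not the trained model of any embodiment:

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.5, epochs=2000):
    """Tiny SGD logistic-regression trainer, a stand-in for any classifier in
    the list above (gradient boosting, random forests, SVMs, etc.)."""
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            g = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def escalation_score(w, b, x):
    """Escalation probability expressed on the 0-100 scale."""
    return 100.0 * _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical derived factors: [user escalation rate, urgent-after-hours flag]
X = [[0.0, 0], [0.4, 1], [0.1, 0], [0.6, 1]]
y = [0, 1, 0, 1]  # 1 = the ticket was escalated within the window
w, b = train_logistic(X, y)
```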
For example, the history-based training machine-learning model320uses Ann's history of never escalating a service, Ann's recent stage as a new user in the sales cycle of a software product that has been sold for a significant time, and Bob's recent stage as a senior technical support agent with experience solving the software product's problems to predict a 5% probability that Ann escalated her service ticket within 90 days of initiation. A history-based training machine-learning model can be an application of artificial intelligence to static data that provides a system with the ability to automatically learn and improve from experience without being explicitly programmed. A history-based training probability can be the learned likelihood of something represented by static data having happened or having been the case. Service can be support for a product user. A life cycle stage can be a phase in a progression through a series of differing phases of development. A training set product can be an item that was sold or leased and that is used as an example for learning. The system300can analyze the comment-based interactions between service agents and product users to identify useful information that has value for predicting the probability of service ticket escalation. For example, the system300can predict the probability of a service ticket escalation using only the interactions between a product user and a service agent that are recorded in the service ticket comments. The system300can derive additional factors by aggregating on time slices, such as weekday, weekend, early morning, and evening. The system300can base the predicted probability of a service ticket escalation on such time slices because a product user might decide to escalate a service ticket on a Friday before leaving work, or on Sunday evening before starting the next week so that the user may follow up on the service ticket's progression. 
Instead of being constant, escalation rates vary by the time of the day and the day of the week, with customers pushing to get service tickets resolved prior to the end of the day, before morning business hours, or before the weekend. In another example, an increase in a product user's comments during these time slices may be indicative of an increasing urgency on the user's part and, as a consequence, an impending service ticket escalation. Consequently, the system300bases the predicted probability of a service ticket escalation on many change factors that are derived after a service ticket is initiated, such as the dynamic change factors for the services provided for the product user, the service ticket's urgency, the times of service ticket activity, the periodically observed states of the service ticket, any modified escalation risk created by the most recent service ticket activity, and the service ticket's age. All of these change factors are embedded within metadata as well as within text data in the ticketing system. A service provided can be support that was supplied to a product user. A day may be a 24-hour time period. A week can be a 7-day time period. The system300can monitor the responsiveness of the provided service based on whether a service ticket comment was from the product user or to the user, and compute the ratio of the number of service ticket comments from the service agent relative to the number of service ticket comments from the product user. The system300can use the change-based training machine-learning model322or the change-based production machine-learning model328that takes a mixture of metadata as well as natural language processing on the service ticket comments to derive a “needs attention” score in real time to capture the responsiveness of the provided service, which may in turn impact the service ticket urgency.
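The responsiveness ratio described above compares agent-authored comments with user-authored comments. A minimal sketch, with an assumed 'in'/'out' direction tagging convention and an added guard for tickets with no inbound comments:

```python
def responsiveness_ratio(directions):
    """Outbound (agent) comments per inbound (user) comment; `directions` is a
    list of 'in'/'out' tags in ticket order."""
    inbound = directions.count("in")
    outbound = directions.count("out")
    return outbound / inbound if inbound else float(outbound)
```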
The system300can monitor changes in the variety of the provided service based on the changes in the number of service agents contributing to a service ticket as authors of outbound comments directed from the service company to the product user. The system300can perform this monitoring of changes in the variety of the provided service on a rolling-window basis, when looking for a sudden increase in the number of service agents communicating with the product user. The system300can monitor the quality of the provided service, which may indicate the extent to which service agents, engineers, managers and other members of the service company are involved in a service ticket, based on service ticket notes as a fraction of the total outbound service ticket comments. The system300can use natural language processing to extract specific categories of service ticket notes, such as service ticket notes that mention explicitly that the service agent is facing a dead-end with the troubleshooting process, those notes that discuss the wide-ranging impact when the service ticket's problem is a problem for several other service tickets opened for the same product category, or those notes about a manager calling on subject matter experts asking for their input. The system300can also compute the proportion of log messages and other categories of machine text exchanged between the product user and the service agent, normalized with respect to the total number of comments in a service ticket. The system300can use a dedicated machine learning classifier to identify log messages and various categories of machine text on the basis of statistical differences between human text and machine text. Using statistical as well as natural language processing-based methods to extract such pieces of information from the service ticket can enable the system300to quantify the quality of the provided service along various axes.
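The rolling-window variety factor above counts distinct agents authoring outbound comments in each window; comparing counts across consecutive windows flags a sudden increase. A sketch under an assumed (timestamp, author, direction) event representation:

```python
def agents_in_window(events, start, end):
    """Distinct authors of outbound comments within [start, end); `events` are
    (timestamp, author, direction) tuples with 'in'/'out' directions."""
    return len({author for ts, author, direction in events
                if direction == "out" and start <= ts < end})

# Illustrative event log: two agents active early, one in the later window.
events = [(1, "dana", "out"), (2, "chris", "in"),
          (3, "eve", "out"), (12, "dana", "out")]
```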
One such example is quantifying how many times the service agent asks for data, which might indicate growing frustration for the product user. Another example is identifying whether the same information was asked of the product user by multiple service agents as time progresses, where the ideal expectation is that every new service agent assigned to the service ticket should build on the context that has already been set prior to their assignment to the service ticket. The system 300 can monitor the changes in the urgency as a service ticket progresses by using metadata which tracks the service ticket's current priority. This metadata may be set by either the product user or the service agent. A product user's modifications that explicitly increase or decrease the priority assigned to a service ticket can provide insight into the service ticket as time progresses and also impact the probability of escalation. The system 300 can quantify the product user's implied change of urgency by computing the maximum number of consecutive inbound messages directed from the user to the service agent. In addition, the system 300 can base the implied change of urgency on the total comments exchanged between a product user and a service agent during a time window as a proportion of the total comments exchanged since the initiation of the service ticket. The system 300 can factor in the time of day and the day of the week by determining whether a service ticket comment was created during or after work hours and on a weekday or a weekend. The system 300 can determine whether a service ticket comment was created during or after work hours based on the product user's time zone, and can evaluate the promptness of the service agent's response based on the service agent's time zone and/or the difference between the product user's time zone and the service agent's time zone.
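The work-hours and weekday determination against the product user's local clock might look like the following sketch; the 9:00-17:00 work window and the time zone names are assumed conventions, not values from the description.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def comment_timing(utc_ts: datetime, user_tz: str,
                   start: int = 9, end: int = 17) -> dict:
    """Classify a comment timestamp against the product user's
    local clock: was it on a weekday, and during work hours?"""
    local = utc_ts.astimezone(ZoneInfo(user_tz))
    weekday = local.weekday() < 5            # Monday-Friday
    in_hours = start <= local.hour < end     # assumed 9-17 window
    return {"weekday": weekday, "work_hours": weekday and in_hours}
```

The same conversion applied to the service agent's time zone would support the promptness evaluation mentioned above.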
The system 300 can use natural language processing to extract indications of a product user's impatience, frustration, sense of building urgency, and/or references to production issues from the user's service ticket comments. The system 300 can monitor the frequency of service ticket activity based on the total number of service ticket comments normalized with respect to the service ticket's age, grouped by both the times of inbound service ticket communications and the times of outbound service ticket communications. The system 300 can also monitor service ticket age, based on the time elapsed between when the service ticket was initiated and the current time. The system 300 derives the training set's change factors for services provided for the training set's product user who initiated the training set's service ticket, a priority assigned to the training set's service ticket, the times of service ticket interactions with the training set's service agent, the periodically observed states of the service ticket, any modified escalation risk created by the most recent service ticket activity, and/or an age of the training set's service ticket. The training set's service ticket may include interactions between the training set's product user and the training set's service agent subsequent to the training set's service ticket being initiated. The training set's change factors for services provided for the training set's product user may be based on a rate of the training set's responses providing service, the number of the training set's service agents providing service, and/or the quality of the training set's services provided.
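The age-normalized activity frequency reduces to a small helper; splitting by direction follows the text, while the per-hour unit is an assumption.

```python
def comment_frequency(n_inbound: int, n_outbound: int,
                      age_hours: float) -> dict:
    """Comments per hour of ticket age, split by direction."""
    age = max(age_hours, 1e-9)  # guard a just-opened ticket
    return {"inbound_per_hour": n_inbound / age,
            "outbound_per_hour": n_outbound / age,
            "total_per_hour": (n_inbound + n_outbound) / age}
```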
The quality of the training set's services provided may be based on the number of the training set's service ticket notes relative to the number of the training set's responses providing service, the number of the training set's service ticket interactions that include machine text relative to the number of the training set's service ticket interactions, the number of the training set's service ticket interactions that request information from the training set's product user, and the number of the training set's service ticket interactions that request identical information from the training set's product user. For example, the training escalation prediction system 318 receives training set data that includes the technical support ticket that contains all subsequent interactions 102 between the software product user Ann and the technical support agent Bob concerning the remote mount problem, and the technical support ticket's metadata 104, as depicted by FIG. 1. Continuing the example, the training escalation prediction system 318 derives the training set's change factors, which indicate that Bob was the only technical support agent who replied to Ann, that Bob did not request any information from Ann, that two of their three interactions included machine text, that Ann did not change the ticket's priority, and that the most recent comment was Ann's response thanking Bob for his advice, and the time series data's timestamps 106, which indicate that Ann's thanks was before the first hourly observation of the service ticket's state after the ticket's initiation. A training set change factor can be an influence that became different and previously contributed to a result and is used as an example for learning. A modification can be a change. A priority can be a condition of a thing being more urgent than other things. A time can be a digital record of the chronology of occurrence of a particular event. An interaction can be a communication with a product user.
A service ticket interaction can be a communication with a product user about a request that is logged on a work tracking system detailing an issue that needs to be addressed. A state can be the particular condition that something is in at a specific time. An age can be the length of time that a thing has existed. A rate can be a quantity measured against some other quantity. A training set's response can be a reply that was made to a request that was logged on a work tracking system detailing an issue that needed to be addressed and is used as an example for learning. A number can be an arithmetical value representing a particular quantity and used in counting and making calculations. Quality can be the degree of excellence of something. A training set's service ticket note can be a brief record about a request that was logged on a work tracking system detailing an issue that needed to be addressed and is used as an example for learning. Machine text can be a set of characters in which a minority of the characters combine to form natural language elements. Information can be data. Identical information can be data that is the same as previous data. The system 300 can generate different types of training sets for training the change-based training machine-learning model 322 to predict escalations of service tickets. In one type of training set, a single observation either represents the "state" of a service ticket immediately prior to escalation or the state of an un-escalated service ticket prior to its successful resolution. In another type of training set, multiple observations are separated in time and capture the state of a service ticket at different points during the service ticket's lifespan.
For example, the system 300 can periodically observe the state of a service ticket, with the first observation at the moment that the service ticket is opened, the second observation when the service ticket has been open for 1 hour, the third observation when the service ticket has been open for 2 hours, and so on. An observation is considered to be associated with an escalation if the system 300 determines that a service ticket was found to escalate within a certain time period following the time of the observation, such as if the escalation occurred within 72 hours of the observation. Instead of being detectable far in advance, most escalations become predictable near the time of escalation because of how the service ticket is progressing. The system 300 can increase the signal for the change-based training machine-learning model 322 by labeling only the service ticket interactions leading up to an escalation, and labeling service ticket interactions farther away from an escalation as "Not Escalation." Compared to observations based on service ticket interactions, time-based observations capture the waiting time between service ticket interactions, which enables the change-based training machine-learning model 322 to use the waiting time between service ticket interactions to help predict when customers are likely to escalate a service ticket. When a customer sends a service ticket question to a service agent, the customer is unlikely to escalate the service ticket within the next 5 minutes. However, the probability that a customer will escalate a service ticket increases the longer that the customer is waiting for a response to the customer's question. Since multiple observations of the state of a service ticket create time-series data, evaluating the change-based training machine-learning model 322 uses more criteria than just the standard accuracy, recall, and precision.
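One way to sketch the time-based observation labeling, assuming hourly observation times expressed in hours since initiation and the 72-hour window mentioned above:

```python
from typing import List, Optional

def label_observations(obs_times: List[float],
                       escalation_time: Optional[float],
                       horizon: float = 72.0) -> List[int]:
    """Label each observation 1 ("Escalation") if the ticket escalated
    within `horizon` hours after the observation, else 0 ("Not
    Escalation"). Times are hours since ticket initiation."""
    if escalation_time is None:  # ticket resolved without escalation
        return [0] * len(obs_times)
    return [1 if 0 <= escalation_time - t <= horizon else 0
            for t in obs_times]
```

Only the observations near the escalation receive positive labels, which matches the signal-boosting labeling strategy described in the text.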
The evaluation of the change-based training machine-learning model 322 identifies how many escalations of service tickets are correctly predicted in advance over a time period, instead of evaluating every single escalation prediction in isolation. For example, the change-based training machine-learning model 322 predicts that a customer will escalate a service ticket in 2 days. This prediction that an escalation will occur in 2 days provides a service team 48 hours to mitigate the escalation threat. Consequently, the escalation prediction would be even more helpful if the advance warning was 3 days instead of 2 days. In addition to the approach discussed above, more information about the state of a service ticket may be beneficial to the accuracy of escalation predictions, such as an expected response time, a time duration since the previous service ticket interaction, a time difference between consecutive service ticket interactions from a product user, and counts of negative sentiments and/or expressions of urgency within a preceding time period. Escalation predictions may be based on an expected response time because many customers have different expected response times according to the support tier specified in the customer's service contract, which dramatically impacts the chance of escalation, especially when some support tiers offer support every hour of every day, while other support tiers offer support only during regular business hours. The time duration since the previous service ticket interaction captures the time that a customer or service agent has been waiting since the last service ticket interaction. A time difference between consecutive service ticket interactions from the product user captures an event when a customer sent a flurry of messages, which may be a critical time for a service ticket, especially when the flurry occurs early in a service ticket's lifespan.
Therefore, the system 300 may use an exponential decay as a better representation of this metric than a linear difference in time. In addition to using all of a customer's negative sentiments and expressions of urgency to predict the escalation of a service ticket, the system 300 may count the negative sentiments and/or the expressions of urgency within a preceding time period, such as the last 24 hours, to reflect changes in the customer's emotions. An expected response time can be when a reply is anticipated. A time duration can be a measure of a chronological period. A previous service ticket interaction can be a preceding communication between a support agent and a product user about a request that is logged on a work tracking system detailing an issue that needs to be addressed. A time difference can be a chronological measure between events. Consecutive service ticket interactions can be sequential communications between a support agent and a product user about a request that is logged on a work tracking system detailing an issue that needs to be addressed. A count can be a total number of something. A negative sentiment can be an expression of dissatisfaction. An expression of urgency can be a communication about the requirement for swift action. A preceding time period can be the most recent chronological space. In most instances, the probability of a ticket escalation increases significantly if the final comment in a service ticket is an inbound comment from the customer, because normally a service agent is expected to rapidly respond to any comment by the customer. There are a number of notable exceptions to this expectation of a rapid response to the last comment, which may be identified when evaluation of the most recent service ticket interaction indicates a reduction in the probability of escalation.
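The exponential-decay representation of the gap between consecutive inbound messages could be sketched as below; the 4-hour time constant is an assumed tuning parameter, not a value from the description.

```python
import math

def flurry_signal(gap_hours: float, tau: float = 4.0) -> float:
    """Exponential decay of the gap between consecutive user messages:
    near 1.0 for a rapid flurry, approaching 0.0 for widely spaced
    messages. `tau` (hours) controls how quickly the signal fades."""
    return math.exp(-gap_hours / tau)
```

Unlike a linear difference, this keeps the feature bounded and makes a burst of messages within minutes stand out sharply from routine daily exchanges.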
Therefore, the training set change factors may also include a modified escalation risk created by the most recent service ticket interaction that is a communication of a reply improbability, a third-party communication, an automated response, the scheduling of a communication, a communication of a pending closure of a service ticket, a communication of a pending report of work by a product user, and/or a modification of a service level agreement. A modified escalation risk can be a changed exposure to the possibility of a requested increase in a level of assistance. A most recent service ticket interaction can be the latest communication between a support agent and a product user about a request that is logged on a work tracking system detailing an issue that needs to be addressed. A communication can be a message containing information. A reply improbability can be the unlikelihood of a reply. A third-party communication can be a message from a person or group besides the two people or groups who are primarily involved in exchanging messages. An automated response can be a reply that occurs without direct human involvement. A pending closure can be a forthcoming completion of a service ticket. A pending report of work can be a forthcoming account of efforts to resolve a problem. In many service tickets, the customer may indicate that they are planning to be (or already are) out of the office. In these situations, a protracted silence may occur for the service ticket, but the customer is unlikely to escalate the service ticket during the time period when the customer is out of the office. The system 300 can use natural language processing techniques to identify communications indicating that the customer's subsequent replies are improbable, such as the out-of-office messages that are either written by the customer or created by an automated out-of-office response system.
The system 300 can use standard off-the-shelf open source software libraries to identify the out-of-office time period from the detected message, and use this time period information either as a training set change factor for the change-based training machine-learning model 322, or with a filter before or after the change-based training machine-learning model 322 to remove service tickets in this state from consideration of escalation risks. Optionally, the system 300 can disregard such an out-of-office message if the service ticket seems to be progressing, even during the specified out-of-office time period, or if only one of the customers will be out of the office while the customer's colleagues will continue working on the service ticket. In many situations, if a third party, such as a person who represents an original equipment manufacturer, is participating in service ticket interactions, then simply evaluating the service ticket interactions as a two-way conversation between a service agent and a customer is overly simplistic. Instead, the system 300 needs to differentiate a third-party communication, such as a comment by a representative for the original equipment manufacturer, from communications by a customer and a service agent. Typically, customer relationship management systems record (and visually depict) service ticket comments as either from a service agent or from a customer, even though some of those 'customer' comments could be from a third party, such as a representative for the original equipment manufacturer. By analyzing the email address or email signature of comments that are not from a service agent, the system 300 can disambiguate the customer communications and the third-party communications. There are several situations in which a service agent, a customer, or a third party may reply with an automated response.
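A crude keyword sketch of out-of-office detection; the text refers to off-the-shelf NLP libraries for this task, so the regex and its phrase list here are stand-in assumptions rather than the actual detection method.

```python
import re

# Assumed phrase list; a production system would use an NLP library.
OOO_PATTERN = re.compile(
    r"\b(out of (the )?office|on vacation|annual leave|"
    r"away until|back in the office)\b",
    re.IGNORECASE)

def is_out_of_office(comment: str) -> bool:
    """Heuristic check for an out-of-office message, whether written
    by the customer or generated by an auto-responder."""
    return bool(OOO_PATTERN.search(comment))
```

A positive match could feed the model as a change factor, or drive a pre-/post-model filter that suppresses the ticket's escalation risk during the detected absence.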
It is helpful to identify such a situation so that the system 300 does not evaluate such an automated response as part of the back-and-forth conversation. The system 300 can use the identification of automated responses as a training set change factor for the change-based training machine-learning model 322, or with a filter before or after the change-based training machine-learning model 322 to remove service tickets in this state from consideration of escalation risks. In many service tickets, a customer and a service agent may agree to synchronize communications on a future date, such as "Let's jump on a Zoom/WebEx call on Friday." Once a customer and a service agent agree to communicate on a scheduled date and time, there may be a period of quiet in the service ticket until that scheduled date and time. The system 300 can use natural language processing techniques to identify the occurrence of scheduling of a communication and identify the associated time and date. When a service ticket enters such a waiting period, the system 300 can use the identification of the scheduling of a communication as a training set change factor for the change-based training machine-learning model 322, or apply a filter before or after the change-based training machine-learning model 322 to the scheduling of a communication to remove service tickets in this state from consideration of escalation risks. In many instances, even when a service ticket's problem has been resolved, a customer may indicate that the service ticket may be closed at a service agent's leisure, or may indicate that the service ticket should be left open for some period of time, just in case the problem reoccurs, such that the service ticket may be left open for a significant period of time. When a service ticket is in this pending closure state, the number of comments drops, but there is a significantly reduced risk of escalation.
The system 300 can use natural language processing techniques to identify a pending closure request or a discussion about holding a service ticket open for some period of time, or even until a correction is released, and use the identification of a pending closure as a training set change factor for the change-based training machine-learning model 322. Alternatively, the system 300 can apply a filter before or after the change-based training machine-learning model 322 to the pending closure request to remove service tickets in this state from consideration of escalation risks. In some instances, a customer will respond to a service agent by indicating that the customer will undertake some additional work or analysis on the service ticket and then report the results of this pending work to the support agent. In these situations, a customer will be actively working on a service ticket and is not waiting for a response from a service agent. The system 300 can use natural language processing techniques to identify these expressions of a pending report of work in the customer's comments by determining whether a customer comment includes a question that requires a response from a service agent, or whether the customer is providing the service agent with an update on the customer's plans to work on the service ticket. The system 300 can use this communication of a pending report of work as a training set change factor for the change-based training machine-learning model 322, or with a filter before or after the change-based training machine-learning model 322 to remove service tickets in this state from consideration as escalation risks. Some service organizations, especially those that offer service level agreements that have stringent follow-up requirements, may provide the functionality for a service agent to negotiate customized requirements for a service level agreement that applies to a customer's service ticket.
For example, a service agent may suggest to the customer that if the service agent has to interrupt work on a solution to the customer's problem so that the service agent can provide the customer with the update within the time required by the service level agreement, then this interruption may delay the service agent resolving the customer's problem. Consequently, the service agent may negotiate with the customer, who can agree that even though the existing service level agreement requires the service agent to periodically update the customer with progress within a specific time period, such as within the next 4 hours, the customer will not require the service agent to provide the next update until some set time in the future that is after the specific time period has expired, such as within the next 6 hours. For service tickets which have this relaxed reporting requirement in a modified service level agreement, there may be a protracted absence of service ticket interactions until the revised update time occurs, potentially making the service ticket appear to be an escalation risk. Until the revised update time has elapsed, the system 300 can use this modification of a service level agreement as a training set change factor for the change-based training machine-learning model 322, or with a filter before or after the change-based training machine-learning model 322 to remove service tickets in this state from consideration as escalation risks. In the situations described above, instead of analyzing an entire service ticket, the system 300 can analyze the last comment (or short sequence of comments) to determine if the service ticket has entered a state with a lower escalation risk. After the service ticket conversation progresses and these comments are part of the historical record, these comments' impact on the prediction of escalation is muted, and they should not be considered to have such a pronounced impact on the service ticket.
In many instances, especially when a service level agreement requires a service agent to respond to a customer's comment within a relatively short amount of time, or provide progress updates to the customer at a prescribed frequency, the service agent may seem to comply with these requirements, but there may be little (or no) substance in these responses or updates. For example, if a service agent repeatedly responds "Still investigating; will get back to you," the service ticket may have constant activity, but the customer may still escalate the service ticket because of the apparent lack of tangible progress. If an escalation predictor is only focused on the metadata associated with the service ticket interactions, such empty responses and updates may cause the service ticket to appear healthy and under control. The system 300 can calculate the change-based probability of service ticket escalation based on applying natural language processing to the service ticket interactions to identify a lack of progress with a problem associated with the service ticket, which would detect the "empty" responses and updates. The system 300 can use any detected "empty" responses and updates as a training set change factor for the change-based training machine-learning model 322, or with a filter before or after the change-based training machine-learning model 322 to process service tickets in this state as escalation risks. In addition to detecting empty responses and updates, the system 300 can identify other key expressions that may provide insight into the health of a service ticket.
For example, the system 300 can calculate the change-based probability of service ticket escalation based on applying natural language processing to the service ticket interactions to identify a lack of progress with the major problem of a service ticket, such as customer comments indicating that the suggested solutions "did not work" or "were not helpful." Natural language processing can be a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between human communications and computers, particularly how to program computers to process and analyze large amounts of human communication data. A lack of progress can be the absence of a resolution to a problem. In many instances, these factors could be added as an input to the change-based training machine-learning model 322 in the form of engineered features. However, a filter before or after the change-based training machine-learning model 322 may be preferable, as such a filter provides additional flexibility. For example, filters may be tweaked and tuned in real time, without necessitating a retraining of the change-based training machine-learning model 322, which may be an expensive and time-consuming operation. Similarly, in many instances, if a single machine-learning model is deployed across multiple customers, the feature set used by the machine-learning model frequently defaults to the lowest common denominator, such as the feature set that is available across all customers. The system 300 can apply a filter before or after the change-based training machine-learning model 322 to leverage additional information for customers where this is available, or to tailor the change-based training machine-learning model 322 and its predictions to the nuances of the individual customer environments.
Once the training set's derived change factors are computed for the input datapoint, the system 300 trains a classifier model to predict a probability that the product user escalated service for the service ticket. The system 300 may feed the training set's change factors to a variety of models, such as gradient boosted classifiers, k-nearest neighbor classifiers, neural networks, random forests, support vector machines, naive Bayes classifiers, and logistic regression models. For example, the system 300 uses the training set's change factors to train a gradient boosted classifier as a model which produces a score ranging between 0 and 100, where 100 indicates 100% certainty that the service ticket was escalated within a given time window. The system 300 uses the training set's service ticket and the training set's change factors to train a change-based machine-learning model to predict a change-based training probability that the training set's product user escalated service for the training set's service ticket. The change-based training probability may also be based on any life cycle stages corresponding to the training set's service ticket and/or the training set's product problem. For example, the change-based training machine-learning model 322 uses the technical support ticket's unchanging priority, and the natural language processing of the last comment as Ann's thankful response to Bob's mount problem advice within 35 minutes of the service ticket's initiation, which implies that Ann following Bob's advice corrected Ann's problem, to predict a 1% probability that Ann escalated her service ticket within 90 days of initiation. Then the change-based training machine-learning model 322 accesses the time series data's timestamps 106, which indicate that Ann closed the service ticket on Wednesday at 2:45 P.M., which confirms the 1% escalation prediction made by the change-based training machine-learning model 322.
A change-based training machine-learning model can be an application of artificial intelligence to dynamic data that provides a system with the ability to automatically learn and improve from experience without being explicitly programmed. A change-based training probability can be the learned likelihood of something represented by dynamic data having happened or having been the case. A training set product problem can be an issue that existed for an item that was sold or leased and that is used as an example for learning. The training escalation prediction system 318 optionally derives factors that are a mix of static history factors (which are factors defined at the initiation of a service ticket and factors that are independent of the service ticket) and subsequent change factors based on the dynamic changes to the service ticket as time progresses. Therefore, the training escalation prediction system 318 can use a combination of two separate models: the history-based training machine-learning model 320 that is trained on static history factors, and the change-based training machine-learning model 322 that is trained on dynamic change factors. The training escalation prediction system 318 can invoke the history-based training machine-learning model 320 when a service ticket is initiated, and then invoke the change-based training machine-learning model 322 when the service ticket's age reaches a pre-set threshold. The training escalation prediction system 318 can execute both training machine-learning models 320 and 322 simultaneously on a single service ticket, and then use a weighted sum of the escalation prediction probabilities from both training machine-learning models 320 and 322 as the final training prediction of escalation probability. The system 300 optionally creates a combined training probability based on the history-based training probability and the change-based training probability.
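The weighted sum of the two models' outputs reduces to one line; the 0.375 history weight is an assumption chosen only so that 5% and 1% example probabilities combine to a 2.5% figure, since the description does not state the actual weights.

```python
def combined_probability(p_history: float, p_change: float,
                         w_history: float = 0.375) -> float:
    """Weighted sum of the history-based and change-based escalation
    probabilities. The default weight is an assumed tuning value."""
    return w_history * p_history + (1.0 - w_history) * p_change
```

In a deployment, the weight itself could be fit on held-out tickets rather than fixed by hand.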
For example, the training escalation prediction system 318 combines the prediction of the 5% probability, which is based on the training set's history factors, with the prediction of the 1% probability, which is based on the training set's change factors, to result in a prediction of a weighted 2.5% probability, which is based on a mix of training set factors, that Ann escalated her service ticket within 90 days of initiation. The training escalation prediction system 318 optionally uses the combined training probability, the history-based training probability, and/or the change-based training probability to train the training machine-learning models 320 and/or 322. A combined training probability can be the merging of learned likelihoods of something having happened or being the case. Additionally, the training escalation prediction system 318 can employ final filtering steps that allow per-instance nuances to be taken into consideration. For example, after a service ticket's problem has been resolved, support engineers may leave the service ticket open for a few days in order to ensure that the proposed correction to the problem actually resolved the product user's problem. A service ticket that is in this final stage of its life cycle is unlikely to be escalated. The training escalation prediction system 318 can reduce the predicted probability of escalation for such a service ticket by examining service ticket metadata (such as service ticket status fields) or leveraging natural language processing techniques to detect this situation in the text of the final comments in the service ticket. Furthermore, a product user may inform a service agent that a service ticket may be closed for a variety of reasons, such as the problem was resolved, the problem is no longer an issue, or the problem disappeared. In these situations, the service ticket is unlikely to be escalated before the service ticket is finally closed by the service agent.
The training escalation prediction system 318 can reduce the predicted probability of escalation for such a service ticket by using natural language processing techniques to analyze the final comments of the service ticket to determine whether such closure requests have been made. As described above, the system 300 can detect a modified escalation risk created by the most recent service ticket interaction for the training set service ticket, which may be a communication of a reply improbability, a third-party communication, the scheduling of a communication, a communication of a pending closure of the training set service ticket, a communication of a pending report of work by the training set product user, and/or a modification of a service level agreement. In many instances, these factors could be added as an input to the change-based training machine-learning model 322 in the form of engineered features. However, a filter before or after the change-based training machine-learning model 322 may be preferable, as such a filter provides additional flexibility. For example, filters may be tweaked and tuned in real time, without necessitating a retraining of the change-based training machine-learning model 322, which may be an expensive and time-consuming operation. Similarly, in many instances, if a single machine-learning model is deployed across multiple customers, the feature set used by the machine-learning model frequently defaults to the lowest common denominator, such as the feature set that is available across all customers. The system 300 can apply a filter before or after the change-based training machine-learning model 322 to leverage additional information for customers where this is available, or to tailor the change-based training machine-learning model 322 and its predictions to the nuances of the individual customer environments.
Furthermore, the system300can use subtle filtering, which is applying filters to escalation predictions only if the factors driving the escalation predictions are relevant to the purpose of a specific filter. For example, the system300predicts that a training set service ticket with a modified service level agreement will be escalated after the original update requirement time of within the next 4 hours but before the modified update requirement time of within the next 6 hours. If the factors driving the escalation prediction are related to the service agent's responsiveness for updates, then the system300should apply the relaxed update requirement filter to the escalation prediction. However, if the factors driving the escalation prediction involved increased expressions of urgency by the customer, then the system300should ignore the application of the relaxed update requirement filter, which is less relevant. If any filters are applied to a modified escalation risk, the system300can use the modified escalation risk to modify the output of the change-based probability. For example, the system300uses a filter to suppress the predicted probability of Ann escalating her service ticket100, because although the last comment102was from the customer and not answered, this last comment102from Ann implies that following Bob's advice corrected her problem, unlike the last comments in service tickets that are from customers and are not answered, which typically increase the predicted escalation probabilities. If no filters are applied to a modified escalation risk, the system300can use the modified escalation risk as a change factor that the change-based training machine-learning model322uses to modify the change-based probability. For example, the system300uses the last comment102from Ann, which implies that following Bob's advice corrected her problem, to reduce the predicted 2% probability of Ann escalating her service ticket100to a reduced 1% probability. 
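A minimal sketch of this relevance-gated ("subtle") filtering might look like the following; the factor names and the 0.5 dampening constant are illustrative assumptions, not values from the system described here:

```python
# Hypothetical relevance-gated filter: suppress the escalation probability
# only when the prediction is driven by agent-responsiveness factors; leave
# urgency-driven predictions untouched. All names here are assumptions.
def apply_sla_relaxation_filter(probability, top_factors, dampening=0.5):
    responsiveness_factors = {"time_since_last_agent_reply", "missed_update_window"}
    if responsiveness_factors & set(top_factors):
        return probability * dampening  # relaxed SLA makes these less alarming
    return probability  # e.g. urgency-driven predictions pass through unchanged

# Responsiveness-driven prediction: the filter applies.
print(apply_sla_relaxation_filter(0.8, ["time_since_last_agent_reply", "ticket_age"]))  # → 0.4
# Urgency-driven prediction: the filter is ignored as less relevant.
print(apply_sla_relaxation_filter(0.8, ["customer_urgency_phrases", "priority_change"]))  # → 0.8
```

Because the filter sits outside the model, its constants can be tuned in real time without retraining, as the text notes.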
An output can be the producing, delivering, or supplying of data using a computer. The training escalation prediction system318stores the results of the training machine-learning models320and/or322, which may be queried via REST endpoints or made accessible via a user interface. This enables the training escalation prediction system318to provide a list of any associated static history factors and/or any dynamic change factors, ranked by importance, that explain why the training machine-learning models320and/or322have predicted a particular probability of escalation for a service ticket associated with a specific point in time. The training escalation prediction system318can generate this list of relevant history and change factors using a process that analyzes localized predictions for perturbations around the input datapoint and computes model-agnostic explanations for the prediction. The system300can train the training machine-learning models320and/or322without human supervision, because whether or not service was escalated for a training set's service ticket is readily available in the training set that includes many service tickets and their associated data. The number of the training set's service tickets may be selected to provide sufficient data points so that the training server312can automatically train the training machine-learning models320and/or322to learn to predict escalations of service for the training set's service tickets. After the system300completes the training of the training machine-learning models320and/or322, the system300deploys any sufficiently trained machine-learning models as the production machine-learning models326and/or328. Then, the system300optionally derives history factors for a product user, who initiated a service ticket, and/or a service agent, who is assigned to the service ticket. The service ticket can include a priority and a context in which the product user assigned the priority.
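The explanation process described above, which analyzes localized predictions for perturbations around the input datapoint, could be sketched as a local linear surrogate in the spirit of model-agnostic explainers such as LIME; the toy model, feature names, and perturbation scale below are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_prediction(predict_fn, x, feature_names, n_samples=500, scale=0.1, seed=0):
    """Rank features by fitting a local linear surrogate to the model's
    predictions on perturbations around the input datapoint."""
    rng = np.random.default_rng(seed)
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    surrogate = Ridge(alpha=1.0).fit(perturbed - x, predict_fn(perturbed))
    order = np.argsort(-np.abs(surrogate.coef_))
    return [(feature_names[i], surrogate.coef_[i]) for i in order]

def toy_model(X):
    # Hypothetical escalation score dominated by the first feature.
    return 0.9 * X[:, 0] + 0.1 * X[:, 1]

ranked = explain_prediction(toy_model, np.array([0.5, 0.5]),
                            ["unanswered_hours", "ticket_age"])
print(ranked[0][0])  # the dominant factor is ranked first
```

The surrogate's coefficients, sorted by magnitude, serve as the ranked list of factors that explain the local prediction.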
The history factors for the product user may be based on any escalations of service and/or any service tickets that were initiated by the product user, and any products used by the product user and/or a service level agreement associated with the product user. For example, the production escalation prediction system324receives online production data that includes a pending urgent technical support ticket that contains the initial interactions200between the software product user Chris and the technical support agent Dana concerning a remote mount problem, as depicted byFIG.2. Continuing the example, the production escalation prediction system324derives history factors which indicate that Chris previously initiated 19 technical support tickets and escalated 9 of these tickets, Dana was the technical support agent for all of Chris' tickets, and Chris' employer upgraded to a stricter service level agreement for the software product, which is among 5 of the support company's products that Chris' employer uses. A service ticket can be a request logged on a work tracking system detailing an issue that needs to be addressed. A product user can be a person or an organization that utilizes an item that was sold or leased. A service agent can be a person who is responsible for providing an act of assistance. A history factor can be a past influence that contributes to a result. Once the derived history factors are computed for an input datapoint, the system300optionally applies a trained classifier model to predict a probability that the product user will escalate service for the service ticket. The system300may feed these history factors to a variety of models, such as gradient boosted classifiers, k-nearest neighbor classifiers, neural networks, random forests, support vector machines, naive Bayes classifiers, and logistic regression models.
For example, the system300applies history factors to a gradient boosted classifier as a trained model which produces a score ranging between 0 and 100, where 100 indicates 100% certainty that the service ticket will be escalated within a given time window. The system300optionally applies the history-based trained machine-learning model to the service ticket and the history factors to predict a history-based probability that the product user escalates service for the service ticket. The history-based probability may also be based on any life cycle stages corresponding to the product user, the service agent, and/or the product. For example, the history-based production machine-learning model326uses Chris' history of frequently escalating services, including when Dana was the technical support agent, Dana's recent stage as a junior technical support agent with limited experience solving the software product's problems, and Chris' employer upgrade to a stricter service level agreement for the software product to predict a 45% probability that Chris will escalate service within the next 4 hours. A history-based machine-learning model can be an application of artificial intelligence to static data that provides a system with the ability to automatically learn and improve from experience without being explicitly programmed. A history-based production machine-learning model can be an application of artificial intelligence to static data that provides a system with the ability to automatically learn and improve from experience without being explicitly programmed. A history-based probability can be the learned likelihood of something represented by static data happening or being the case. Escalating service can be requesting an increase in a level of assistance. The system300optionally outputs the history-based probability that the product user escalates service for the service ticket. 
Outputting the history-based probability may include outputting an explanation why a list of any relevant history factors, ranked by importance, resulted in the prediction of the history-based probability. An explanation may be tied to a workflow, such as suggesting and facilitating actions based on the explanation. Outputting the history-based probability that the product user escalates service for the service ticket may include identifying service tickets which are similar to the service ticket and outputting any relevant history factors associated with any identified service tickets which averted escalated service or efficiently handled escalated service. For example, the history-based production machine-learning model326outputs the prediction of the 45% probability that Chris will escalate service within the next 4 hours, and the explanation that the prediction is based on 1) Chris' history of frequently escalating services, 2) Dana's limited experience solving the software product's problems, and 3) the stricter service level agreement for the software product. An explanation can be a statement or account that makes something clear. A list can be a number of connected items written or printed consecutively. A relevant history factor can be a pertinent past influence that contributes to a result. Importance can be a condition of having a level of significance. A prediction can be a forecast or an estimate. An identified service ticket can be a recognized request logged on a work tracking system detailing an issue that needs to be addressed. Escalated service can be an increased level of support.
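As a sketch of the gradient-boosted scoring mentioned above, a classifier's predicted probability can be scaled to a 0-100 score, where 100 indicates certainty of escalation; the features and training data below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy history factors: [prior_escalation_rate, agent_experience_years, strict_sla]
# (these columns are assumptions, not the patented feature set).
X = np.array([[0.9, 1, 1], [0.8, 2, 1], [0.1, 9, 0], [0.05, 8, 0],
              [0.7, 1, 1], [0.2, 7, 0], [0.85, 2, 1], [0.15, 6, 0]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = the ticket was escalated

clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

def escalation_score(features):
    """Probability of escalation scaled to 0-100, where 100 means certainty."""
    return 100.0 * clf.predict_proba(np.atleast_2d(features))[0, 1]

# A heavy escalation history, junior agent, and strict SLA yield a high score.
print(round(escalation_score([0.9, 1, 1])))
```

The same scoring shape applies to any of the other listed model families (random forests, logistic regression, and so on) that expose a predicted probability.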
The system300derives change factors for services provided for a product user who initiated a service ticket, a priority assigned to the service ticket, times of service ticket interactions with a service agent, the periodically observed state of escalation for the service ticket, any modified escalation risk created by the most recent service ticket activity, and/or an age of the service ticket. The change factors for services provided by the service agent to the product user may be based on a rate of responses providing service, the number of service agents providing service, and/or a quality of services provided. The quality of services provided may be based on the number of service ticket notes relative to the number of responses providing service, the number of service ticket interactions that include machine text relative to the number of the service ticket interactions, the number of service ticket interactions that request information from the product user, and/or the number of service ticket interactions that request identical information from the product user. For example, the production escalation prediction system324receives online production data that includes the pending urgent technical support ticket that contains all subsequent interactions202and204between the software product user Chris and the technical support agent Dana concerning the remote mount problem, and the technical support ticket's metadata206, as depicted byFIG.2.
Continuing the example, the production escalation prediction system324derives subsequent change factors which indicate that Chris changed the ticket's priority to urgent, the most recent comments were Chris' second request for help, and Chris' frustrated "Hello?", the time series data's timestamps208which indicate that Chris' implied rejection of Dana's advice was before the first hourly observation of the service ticket's state after the ticket's initiation, and the lack of Dana's reply to Chris' second request for help within the next 5 hourly observations. A change factor can be an influence that becomes different and contributes to a result. Once the derived change factors are computed for an input datapoint, the system300applies a trained classifier model to predict a probability that the product user will escalate service for the service ticket. The system300may feed these change factors to a variety of models, such as gradient boosted classifiers, k-nearest neighbor classifiers, neural networks, random forests, support vector machines, naive Bayes classifiers, and logistic regression models. For example, the system300applies the change factors to a gradient boosted classifier as a trained model which produces a score ranging between 0 and 100, where 100 indicates 100% certainty that the service ticket will be escalated within a given time window. The system300applies the change-based machine-learning model to the service ticket and the change factors to predict a change-based probability that the product user escalates service for the service ticket. The change-based probability may also be based on any life cycle stages corresponding to the service ticket and/or a product problem. For example,
the change-based production machine-learning model328uses the technical support ticket's new urgent priority, the natural language processing of Chris' frustrated responses to Dana's mount problem advice, which implies that Chris following Dana's advice failed to correct Chris' problem, and the lack of Dana's reply to Chris' responses within 5 hours to predict a 95% probability that Chris will escalate service within the next 4 hours. A product problem can be an issue for an item that is sold or leased. A change-based machine-learning model can be an application of artificial intelligence to dynamic data that provides a system with the ability to automatically learn and improve from experience without being explicitly programmed. A change-based production machine-learning model can be an application of artificial intelligence to dynamic data that provides a system with the ability to automatically learn and improve from experience without being explicitly programmed. A change-based probability can be the learned likelihood of something represented by dynamic data happening or being the case. The system300outputs the change-based probability that the product user escalates service for the service ticket. Outputting the change-based probability may include outputting an explanation why a list of any relevant change factors, ranked by importance, resulted in the prediction of the change-based probability.
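A few of the change factors from the Chris and Dana example above (a priority change, an unanswered customer comment, and the hours spent awaiting a reply) could be derived from a ticket's interaction log roughly as follows; the log schema and field names are assumptions:

```python
from datetime import datetime, timedelta

def derive_change_factors(interactions, observed_at):
    """Compute a few illustrative dynamic change factors from a ticket's
    interaction log. The field names are assumptions, not the patented schema."""
    customer_msgs = [i for i in interactions if i["author"] == "customer"]
    agent_msgs = [i for i in interactions if i["author"] == "agent"]
    last_agent = max((i["time"] for i in agent_msgs), default=None)
    last_customer = max((i["time"] for i in customer_msgs), default=None)
    unanswered = last_customer is not None and (last_agent is None or last_agent < last_customer)
    return {
        "priority_raised": any(i.get("event") == "priority_change" for i in interactions),
        "last_comment_unanswered": unanswered,
        "hours_awaiting_reply": (observed_at - last_customer) / timedelta(hours=1) if unanswered else 0.0,
    }

t0 = datetime(2023, 1, 1, 9, 0)
log = [
    {"author": "customer", "time": t0},
    {"author": "agent", "time": t0 + timedelta(hours=1)},
    {"author": "customer", "time": t0 + timedelta(hours=2), "event": "priority_change"},
    {"author": "customer", "time": t0 + timedelta(hours=3)},  # the frustrated "Hello?"
]
print(derive_change_factors(log, t0 + timedelta(hours=8)))
```

Recomputing these factors at each periodic observation gives the change-based model a fresh view of the ticket as it evolves.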
Outputting the change-based probability that the product user escalates service for the service ticket may include identifying service tickets which are similar to the service ticket and outputting any relevant change factors associated with any identified service tickets which averted escalated service or efficiently handled escalated service. For example, the change-based production machine-learning model328outputs the prediction of the 95% probability that Chris will escalate service within the next 4 hours, and the explanation that the prediction is based on 1) the last two comments which are Chris' frustrated responses to Dana's mount problem advice, which implies that Chris following Dana's advice failed to correct Chris' problem, 2) the lack of Dana's reply to Chris' responses within 5 hours, and 3) the technical support ticket's new urgent priority. A relevant change factor can be a pertinent influence that becomes different and contributes to a result. The production escalation prediction system324optionally derives factors that are a mix of static history factors (which are factors defined at the initiation of a service ticket and factors that are independent of the service ticket) and subsequent factors based on the dynamic changes to the service ticket as time progresses. Therefore, the production escalation prediction system324can use a combination of two separate models—the history-based production machine-learning model326that was trained on static history factors, and the change-based production machine-learning model328that was trained on dynamic change factors. The production escalation prediction system324can invoke the history-based production machine-learning model326when a service ticket is initiated, and then invoke the change-based production machine-learning model328when the service ticket's age reaches a pre-set threshold.
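One way to realize this staged two-model combination is a weighted sum whose change-model weight grows with ticket age, so the history-based probability dominates at initiation and the change-based probability gradually takes over; the linear ramp and 24-hour horizon are assumptions:

```python
def blended_escalation_probability(history_p, change_p, ticket_age_hours, ramp_hours=24.0):
    """Weighted sum of the two models' outputs: at initiation only the
    history-based probability counts; as the ticket ages, the change-based
    probability gradually overrides it. The linear ramp is an assumption."""
    w = min(ticket_age_hours / ramp_hours, 1.0)  # change-model weight grows with age
    return (1.0 - w) * history_p + w * change_p

print(blended_escalation_probability(0.45, 0.95, 0.0))   # new ticket: history only → 0.45
print(blended_escalation_probability(0.45, 0.95, 12.0))  # halfway through the ramp → ~0.70
print(blended_escalation_probability(0.45, 0.95, 48.0))  # mature ticket: change only → 0.95
```

For a new service company with little or no historical data, setting the weight to 1.0 reproduces the change-only behavior the text describes.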
The production escalation prediction system324can execute both production machine-learning models326and328simultaneously on a single service ticket, and then use a weighted sum of the escalation prediction probabilities from both production machine-learning models326and328as the final predicted escalation probability. When a service ticket is initiated, the production escalation prediction system324can use only the history-based probability from the history-based production machine-learning model326. However, as the service ticket progresses, the change-based probability from the change-based production machine-learning model328can gradually override the history-based probability. In the situation when a new service company has no historical data or limited historical data, the production escalation prediction system324can use only the change-based probability from the change-based production machine-learning model328. Additionally, the production escalation prediction system324can employ final filtering steps that allow per-instance nuances to be taken into consideration. For example, after a service ticket's problem has been resolved, support engineers may leave the service ticket open for a few days in order to ensure that the proposed correction to the problem actually resolved the product user's problem. A service ticket that is in this final stage of its life cycle is unlikely to be escalated. The production escalation prediction system324can reduce the predicted probability of escalation for such a service ticket by examining service ticket metadata (such as service ticket status fields) or leveraging natural language processing techniques to detect this situation in the text of the final comments in the service ticket. Furthermore, a product user can inform a service agent that a service ticket may be closed for a variety of reasons, such as the problem was resolved, the problem is no longer an issue, or the problem disappeared. 
In these situations, the service ticket is unlikely to be escalated before the service ticket is finally closed by the service agent. The production escalation prediction system324can reduce the predicted probability of escalation for such a service ticket by using natural language processing techniques to analyze the final comments of the service ticket to determine whether these closure requests have been made. As described above, the system300can detect a modified escalation risk created by the most recent service ticket interaction, which may be a communication of a reply improbability, a third party communication, the scheduling of a communication, a communication of a pending closure of the service ticket, a communication of a pending report of work by the product user, and/or a modification of a service level agreement. In many instances, these factors could be added as an input for the change-based production machine-learning model328in the form of engineered features. However, a filter before or after the change-based production machine-learning model328may be preferable, as such a filter provides additional flexibility. For example, filters may be tweaked and tuned in real-time, without necessitating a retraining of the change-based production machine-learning model328, which may be an expensive and time-consuming operation. Similarly, in many instances, if a single machine-learning model is deployed across multiple customers, the feature set used by the machine-learning model frequently defaults to the lowest common denominator, such as the feature set that is available across all customers. The system300can apply a filter before or after the change-based production machine-learning model328to leverage additional information for customers where this is available, or to tailor the change-based production machine-learning model328and its predictions to the nuances of the individual customer environments.
Furthermore, the system300can use subtle filtering, which is applying filters to escalation predictions only if the factors driving the escalation predictions are relevant to the purpose of a specific filter. For example, the change-based production machine-learning model328predicts that a service ticket with a modified service level agreement will be escalated after the original update requirement time of within the next 4 hours but before the modified update requirement time of within the next 6 hours. If the factors driving the escalation prediction are related to the service agent's responsiveness for updates, then the system300should apply the relaxed update requirement filter to the escalation prediction. However, if the factors driving the escalation prediction involved expressions of increased urgency by the customer, then the system300should ignore the application of the relaxed update requirement filter, which is less relevant. If any filters are applied to a modified escalation risk, the system300can use the modified escalation risk to modify the output of the change-based probability. For example, the system300uses a filter to suppress the predicted probability of Ann escalating her service ticket100, because although the last comment102was from the customer and not answered, this last comment102from Ann implies that following Bob's advice corrected her problem, unlike the last comments in service tickets that are from customers and are not answered, which typically increase the predicted escalation probabilities. If no filters are applied to the modified escalation risk, the system300can use the modified escalation risk as a change factor that the change-based production machine-learning model328uses to modify the change-based probability. 
For example, the system300uses the last comment102from Ann, which implies that following Bob's advice corrected her problem, to reduce the predicted 2% probability of Ann escalating her service ticket100to a reduced 1% probability. The production escalation prediction system324stores the results of the production machine-learning models326and/or328, which may be queried via REST endpoints or made accessible via a user interface. This enables the production escalation prediction system324to provide a list of any associated static history factors and/or any dynamic change factors, ranked by importance, that explain why the production machine-learning models326and/or328have predicted a particular probability of escalation for a service ticket associated with a specific point in time. The production escalation prediction system324can generate this list of factors using a process that analyzes localized predictions for perturbations around the input datapoint and computes model-agnostic explanations for the prediction. The system300optionally outputs a combined probability based on the history-based probability and the change-based probability that the product user escalates service for the service ticket. Outputting the combined probability may include outputting an explanation why a list of any relevant factors, ranked by importance, resulted in the prediction of the combined probability.
Outputting the combined probability that the product user escalates service for the service ticket may include identifying service tickets which are similar to the service ticket and outputting any relevant factors associated with any identified service tickets which averted escalated service or efficiently handled escalated service. For example, the production escalation prediction system324combines the prediction of the 45% probability, which is based on history factors, with the prediction of the 95% probability, which is based on change factors, to result in a prediction of a weighted 75% probability, which is based on a mix of factors, that Chris will escalate service within the next 4 hours. The production escalation prediction system324outputs the prediction of the 75% probability that Chris will escalate service within the next 4 hours, and the explanation that the prediction is based on 1) the last two comments which are Chris' frustrated responses to Dana's mount problem advice, which implies that Chris following Dana's advice failed to correct Chris' problem, 2) the lack of Dana's reply to Chris' responses within 5 hours, 3) Chris' history of frequently escalating services, and 4) Dana's limited experience solving the software product's problems. A combined probability can be the merging of likelihoods of something happening or being the case. When the system300outputs the history-based probability, the change-based probability, and/or the combined probability that a product user escalates service for a service ticket, it would not always be optimal for a service agent to review service tickets strictly by their escalation probability because preventing some escalations will be more important than preventing other escalations. For example, if the change-based production machine-learning model328predicts the same escalation probability of 50% for the service tickets from both Acme Company and MegaCorp., and MegaCorp.
is twice as valuable (in terms of economic value such as deal size or strategic importance) as Acme Company is to a service agent's organization, then the service agent should respond first to MegaCorp.'s service ticket because MegaCorp.'s escalations will be more costly. Consequently, the more valuable that a customer is to a service organization, the more important it is to ensure customer satisfaction and prevent escalations. Therefore, important customers' service tickets that have even a modest likelihood of escalation should be triaged by the service organization, even if the majority of the escalation predictions for these service tickets are potentially false positives. In contrast, less significant customers' service tickets may need to have a significantly greater probability of escalation before a service agent takes any action because of the non-zero cost associated with any intervention. Therefore, a service company administrator can set a probability threshold that determines when and how the system300outputs the history-based probability, the change-based probability, and/or the combined probability that a product user escalates service for a service ticket. Given that any service organization has finite resources, probability thresholds may be based on many factors, such as an economic value associated with the product user, an initial service contract stage associated with the product user, a service contract renewal date associated with the product user, and a service contract renewal risk associated with the product user. Additional factors that probability thresholds may be based upon include a quality of services provided to the product user, any escalations of service and any service tickets that were initiated by the product user, any products used by the product user, and an impact of a problem associated with the service ticket. 
A probability threshold can be the magnitude or intensity that a likelihood must satisfy for a certain result to occur. An economic value can be the measure of a service organization's benefit from providing a service to a product user. A probability threshold may be based on an initial service contract stage associated with a customer. For example, the system300sets a low probability threshold for a new customer's service tickets to ensure that the new customer's first interactions with service agents are flawless. An initial service contract stage can be the primary phase of an agreement to support a product user. A probability threshold can be based on a service contract renewal date associated with a customer. For example, the system300sets a low probability threshold for a customer's service tickets based on the customer's subscription renewal date because deals may be lost if a customer's impression of a product or a service organization is impacted at critical times. Providing “white glove” treatment for a customer approaching a renewal date improves the likelihood of the customer renewing their service contract. A service contract renewal date can be a time to decide to extend an agreement to support a product user. A probability threshold may be based on a service contract renewal risk associated with a customer. For example, the system300sets the probability threshold for a customer's service tickets based on the customer's churn risk. Customers that have been flagged as a potential churn risk should also receive special attention if it is believed that the customer's account may be saved. Conversely, if a customer has decided not to renew their service contract, preventing escalations from the customer is likely of lower importance and benefit. A service contract renewal risk can be a possibility of a product user not extending an agreement to support the product user.
A probability threshold may be based on the historical quality of services provided to a customer. For example, the system300sets a low probability threshold for a customer's service tickets based on the customer's negative experience in previous service interactions. Customers may forgive the occasional poor experience with a service agent. However, if poor service becomes the norm, the customer experience may reach a point of negativity whereby that customer may be willing to terminate a business relationship. Examining a customer's recent experiences with service agents and providing early/proactive intervention for customers that have experienced poor support is critical to improving the customer relationship and increasing the likelihood of continued business with that customer. A probability threshold may be based on escalations of service associated with a customer. For example, the system300sets a low probability threshold for a customer's service tickets based on the customer's recent spate of escalations. Customers that have been forced to escalate a relatively large number of service tickets in the recent past should be closely monitored to ensure that every precaution is taken to prevent further missteps. A probability threshold may be based on an impact of a problem of a service ticket associated with a customer. For example, the system300sets the probability threshold for a customer's service ticket based on the impact of the service ticket's problem. The customer impact associated with different types of service ticket problems can range from significant to insignificant. Service tickets which are producing a more severe disruption to the customer are obviously more important to prevent from escalating. An impact can be the effect of one thing on another thing. A problem can be an issue that needs to be dealt with and overcome. 
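A per-customer probability threshold reflecting the factors above might be sketched as a handful of simple rules; every rule and constant here is an illustrative assumption, not a value prescribed by the system:

```python
def probability_threshold(customer):
    """Illustrative per-customer threshold: important or at-risk customers
    get a lower threshold so their tickets surface earlier."""
    threshold = 0.5  # hypothetical baseline
    if customer.get("new_customer"):
        threshold = min(threshold, 0.2)   # flawless first interactions
    if customer.get("days_to_renewal", 9999) < 90:
        threshold = min(threshold, 0.25)  # "white glove" treatment near renewal
    if customer.get("churn_risk"):
        threshold = min(threshold, 0.2)   # account may still be saved
    if customer.get("declined_renewal"):
        threshold = max(threshold, 0.9)   # intervention is of lower benefit
    return threshold

print(probability_threshold({"new_customer": True}))      # → 0.2
print(probability_threshold({"days_to_renewal": 30}))     # → 0.25
print(probability_threshold({"declined_renewal": True}))  # → 0.9
```

Because these rules live outside the model, a service company administrator can adjust them per customer without retraining.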
Based on these factors, a system administrator can customize the probability threshold at which a prediction is output to a service agent on a per service ticket basis. This customization of a probability threshold occurs after the system trained the change-based production machine-learning model328, and the customized probability threshold may be a separate filter between the change-based production machine-learning model328and the user interface and application programming interface providing the escalation prediction to service agents. These factors may be used to generate the expected cost of an escalation for each customer, E(cost|escalation). Multiplying this estimate by the change-based production machine-learning model328's escalation probabilities, Prob(escalation), produces the expected escalation-related costs, E(cost), for each open service ticket, and optimizes service agents' efforts to the highest-value service tickets: E(cost)=Prob(escalation)*E(cost|escalation). When a predicted probability exceeds a probability threshold, the system300can output an alert to the service agent responsible for the service ticket, or the service agent's supervisor, and/or the system300can output the service ticket to the user interface of the service agent and/or supervisor. Further, when a predicted probability exceeds a specific probability threshold, the system300can provide alerts to additional teams, such as sales and product development teams. Since the probability threshold can differ by customer, product type, size of account, proximity to contract renewal, overall customer sentiment, potential churn risk, a quality of services provided, escalations of service, service tickets initiated, and impacts of problems, etc., the probability threshold does not need to be the same across all service tickets. An alert can be a warning.
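The expected-cost formula E(cost)=Prob(escalation)*E(cost|escalation) can be applied directly to rank open tickets, as in the Acme/MegaCorp example; the ticket fields and cost figures below are assumptions:

```python
def rank_tickets_by_expected_cost(tickets):
    """Triage order: E(cost) = Prob(escalation) * E(cost | escalation),
    highest expected escalation-related cost first."""
    return sorted(tickets, key=lambda t: t["prob"] * t["cost_if_escalated"], reverse=True)

open_tickets = [
    {"id": "Acme-7",     "prob": 0.5, "cost_if_escalated": 10_000},
    {"id": "MegaCorp-3", "prob": 0.5, "cost_if_escalated": 20_000},
]
# Equal escalation probabilities, but MegaCorp's escalation is twice as costly,
# so its ticket is triaged first.
print([t["id"] for t in rank_tickets_by_expected_cost(open_tickets)])
# → ['MegaCorp-3', 'Acme-7']
```

This ordering directs finite service-agent resources to the tickets whose escalations would be most costly, even when the raw probabilities are identical.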
The system300can continuously recompute the escalation probability as the service ticket evolves, thereby providing the service company with a continuous view of a service ticket's status as time progresses. When deployed into production, the system300for high fidelity escalation predictions can display the escalation predictions being generated to the service agents and/or escalation managers that are managing the prediction queue. As discussed above, these escalation predictions are generally accompanied by an explanation of the escalation prediction, such as the factors that are driving the escalation prediction, which makes the escalation predictions significantly more actionable. When a service agent is presented with an escalation prediction that a service ticket is at risk of escalation, the service agent can review the factors driving the escalation prediction, review the service ticket, and then take action and intervene in the service ticket (either directly or indirectly), or decide that no action is necessitated. The system300enables service agents to review a probability that a product user will escalate a service ticket, to review the output of the probability to the service agent who is responsible for the service ticket, to review queues of service tickets that are at risk of escalation, and to provide feedback by acknowledging, dismissing, or pausing the reviewed information. After learning from this feedback, the system300will be more effective at generating a probability that a product user will escalate a service ticket, determining a probability threshold that the probability must satisfy to output an alert to the service agent who is responsible for the service ticket, and managing the queues of service tickets that are at risk of escalation. The feedback functionality may be based on a service agent acknowledging any change-based probability. 
A service agent can acknowledge an escalation prediction, which indicates that the service agent took action based on the escalation prediction. The system300can use this acknowledgement to move an escalation prediction to a separate queue of service tickets, temporarily suppress an escalation prediction to help a service agent “clear their queue,” provide information to other service agents that the escalation prediction has already been handled, and/or provide feedback for training future versions of the change-based production machine-learning model328, indicating that the service agent agreed with the escalation prediction. Acknowledging can be agreeing with the validity of a prediction. The feedback functionality may be based on a service agent dismissing any change-based probability. A service agent can dismiss an escalation prediction, which indicates that the service agent does not agree with the prediction that this service ticket is at risk of escalation. By providing dismiss support, the system300enables a service agent to clean up erroneous escalation predictions from their queue, which provides invaluable feedback for future versions of the change-based production machine-learning model328. Dismissing can be rejecting the validity of a prediction. The feedback functionality may be based on a service agent pausing the output of any change-based probability and/or the output of any alert associated with any change-based probability. A service agent can pause an escalation prediction, which indicates that the service agent thinks that a service ticket that is predicted to be escalated does not yet require intervention, but the service agent does not want to dismiss the service ticket. The pause functionality allows a service agent to hide a prediction of escalation for a user-selected amount of time, such as 1 day or 3 business days. Pausing can be temporarily stopping an action.
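The acknowledge, dismiss, and pause actions described above can be sketched as a small queue class. The class and method names are illustrative assumptions, not from the disclosure; the sketch shows only the queue bookkeeping and the feedback log that later retraining could consume.

```python
import datetime

class PredictionQueue:
    """Sketch of acknowledge/dismiss/pause handling for escalation predictions.
    Names and structures are illustrative, not part of the disclosed system."""

    def __init__(self):
        self.active = {}        # ticket_id -> prediction record (a dict)
        self.acknowledged = {}  # predictions moved to a separate queue
        self.feedback_log = []  # (ticket_id, action, detail) tuples for retraining

    def acknowledge(self, ticket_id):
        # Agent agreed with the prediction and took action on the ticket.
        self.acknowledged[ticket_id] = self.active.pop(ticket_id)
        self.feedback_log.append((ticket_id, "acknowledge", None))

    def dismiss(self, ticket_id):
        # Agent rejected the prediction; record it as a likely false positive.
        self.active.pop(ticket_id)
        self.feedback_log.append((ticket_id, "dismiss", None))

    def pause(self, ticket_id, days):
        # Hide the prediction for a user-selected amount of time, e.g. 1 or 3 days.
        resume = datetime.date.today() + datetime.timedelta(days=days)
        self.active[ticket_id]["hidden_until"] = resume
        self.feedback_log.append((ticket_id, "pause", days))
```

Each action leaves an entry in `feedback_log`, which is the raw material for the feedback-driven retraining discussed next.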
The system300can enable the service agent to provide explicit feedback about why the service agent selected an option to acknowledge, dismiss, or pause a prediction. For example, if a service agent is dismissing a prediction, the system300can provide an option to answer multiple choice questions or enter free format text to explain why the agent thinks that the prediction is wrong. The production escalation prediction system324can integrate with a feedback loop, which captures information from the user interface or interaction layer and incorporates the captured information as inputs to the machine-learning models320,322,326, and/or328. The information captured is in essence the escalation management team's follow-up behavior on the predictions, as recorded via the user interface. Sweeping in these follow-up actions back into the machine-learning models320,322,326, and/or328can enable the machine-learning models320,322,326, and/or328to be retrained in a manner that closely follows the human-decision making component, which the production escalation prediction system324attempts to model. This feedback loop can enable the machine-learning models320,322,326, and/or328to evolve in a personalized manner with respect to the preferences of the escalation management team, in a way that is relevant to the team. Furthermore, the production escalation prediction system324can include key metrics captured from the user interface or interaction layer, such as page share counts and number of views (eyeballs), in order to account for implicit prioritizations from a service team's senior management personnel. This can enable the system300to refine the machine-learning models320,322,326, and/or328over time as additional data are gathered from the user interface about factors that may impact escalation probability beyond the immediate product user-service agent interactions in a ticketing system's service ticket itself. 
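One way the captured follow-up behavior could be swept back into training data is sketched below. The row schema and the mapping of actions to labels are assumptions for illustration: acknowledging is treated as agreement with the prediction and dismissing as disagreement, matching the feedback semantics described above.

```python
def build_retraining_rows(feedback_log, predictions):
    """Turn agent follow-up actions captured from the user interface into
    labeled rows for retraining. Pauses carry no agree/disagree signal here."""
    label = {"acknowledge": 1, "dismiss": 0}
    rows = []
    for ticket_id, action, _detail in feedback_log:
        if action in label:
            rows.append({
                "ticket_id": ticket_id,
                "predicted_prob": predictions[ticket_id],
                "agent_agreed": label[action],
            })
    return rows
```

Rows like these, accumulated over time, let retraining follow the escalation management team's actual decisions rather than only the original escalation labels.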
Since the production escalation prediction system324evaluates a multitude of interactions between entities that can impact escalation, the production escalation prediction system324can capture information about not only whether a current service ticket is likely to be escalated, but also information about similar service tickets in the past that had averted escalations, or handled escalated service tickets efficiently. This crucial knowledge of service ticket arcs can provide a service company with a holistic view of the various factors the service company can act upon to direct a service ticket on the path to successful resolution in a way that also ensures user happiness. User happiness in such a case could be dependent upon assigning the appropriate service personnel, resources, prompter responses, and an improved quality of responses. In doing so, the production escalation prediction system324is a proactive and personalized solution towards user satisfaction, rather than a generic diagnosis based on readily available service ticket metrics. The production escalation prediction system324can empower service companies to strengthen their relationships with users, enabling the service companies to maintain standards of accountability and, as a consequence, maintain a high degree of user satisfaction. The system300may be deployed as one combined model, or one combined model per product, region, customer, etc. Since certain support ticketing systems do not have an explicit escalation tag or type, an escalation event is not necessary, as the system300can predict other labels, such as “at-risk.” As long as problematic service tickets are labelled to form a training set, the system300can predict the probabilities of problematic or at-risk service tickets. 
Furthermore, the entities that the production escalation prediction system324analyzes are relevant even beyond the support domain, because factors extracted from these entities and their evolving relationships may be used to model behavior patterns in other business workflows which operate on the assumption of the desire for continuously sustained business relationships between the user and a company across multiple product cycles. Examples of such industries where the system300may be successfully applied are the pharmaceutical, healthcare and medical devices industry, as well as the consumer electronics industry. FIG.4is a flowchart that illustrates a computer-implemented method for high fidelity predictions of service ticket escalation, under an embodiment. Flowchart400depicts method acts illustrated as flowchart blocks for certain actions involved in and/or between the system elements302-328ofFIG.3. A training set's history factors of the training set's product user, who initiated the training set's service ticket, and/or the training set's service agent, who was assigned to the training set's service ticket, are optionally derived, block402. The system300derives a training set's history factors used to train a model to predict service escalation. For example, and without limitation, this can include the training escalation prediction system318receiving training set data that includes a technical support ticket that contains the initial interactions100between the software product user Ann and the technical support agent Bob concerning a remote mount problem, as depicted byFIG.1. Continuing the example, the training escalation prediction system318derives the training set's history factors which indicate that Ann previously initiated 1 technical support ticket, Bob solved her previous problem in 15 minutes, and Ann's employer purchased a basic service level agreement for the software product, which is the support company's only product that Ann's employer uses.
After the derivation of the training set's history factors, the training set's service ticket and the training set's history factors are optionally used to train a history-based machine-learning model to predict a history-based training probability that the training set's product user escalated service for the training set's service ticket, block404. The system300uses a training set's history factors to train a model to predict service escalation. By way of example and without limitation, this can include the history-based training machine-learning model320using Ann's history of never escalating a service, Ann's recent stage as a new user in the sales cycle of a software product that has been sold for a significant time, and Bob's recent stage as a senior technical support agent with experience solving the software product's problems to predict a 5% probability that Ann escalated her service ticket within 90 days of initiation. Following the prediction of the history-based training probability that the training set's product user escalated service for the training set's service ticket, the training set's change factors are derived for services provided for the training set's product user who initiated the training set's service ticket, a priority assigned to the training set's service ticket, times of service ticket interactions with the training set's service agent, states and corresponding times associated with the training set's service ticket, and/or an age of the training set's service ticket, block406. The system300derives a training set's change factors used to train a model to predict service escalation.
In embodiments, this can include the training escalation prediction system318receiving training set data that includes the technical support ticket that contains all subsequent interactions102between the software product user Ann and the technical support agent Bob concerning the remote mount problem, and the technical support ticket's metadata104, as depicted byFIG.1. Continuing the example, the training escalation prediction system318derives the training set's change factors which indicate that Bob was the only technical support agent who replied to Ann, that Bob did not request any information from Ann, two of their three interactions included machine text, that Ann did not change the ticket's priority, and the most recent comment was Ann's response thanking Bob for his advice, and the time series data's timestamps106indicate that Ann's thanks was before the first hourly observation of the service ticket's state after the ticket's initiation. Having derived a training set's change factors, the training set's service ticket and the training set's change factors are used to train a change-based machine-learning model to predict a change-based training probability that the training set's product user escalated service for the training set's service ticket, block408. The system300uses a training set's change factors to train a model to predict service escalation. For example, and without limitation, this can include the change-based training machine-learning model322using the technical support ticket's unchanging priority, and the natural language processing of the last comment as Ann's thankful response to Bob's mount problem advice within 35 minutes of the service ticket's initiation, which implies that Ann following Bob's advice corrected Ann's problem, to predict a 1% probability that Ann escalated her service ticket within 90 days of initiation.
Then the change-based training machine-learning model322accesses the time series data's timestamps106which indicate that Ann closed the service ticket on Wednesday at 2:45 P.M., which confirms the 1% escalation prediction by the change-based training machine-learning model322. After predicting the history-based and change-based training probabilities that the training set's product user escalated service for the training set's service ticket, a combined training probability that the training set's product user escalated service for the training set's service ticket is optionally created based on the history-based training probability and the change-based training probability, block410. The system300trains a model to predict service escalation based on combined training probabilities. By way of example and without limitation, this can include the training escalation prediction system318combining the prediction of the 5% probability, which is based on the training set's history factors, with the prediction of the 1% probability, which is based on the training set's change factors, to result in a prediction of a weighted 2.5% probability, which is based on a mix of training set factors, that Ann escalated her service ticket within 90 days of initiation. The training escalation prediction system318can use the combined training probability, the history-based training probability, and/or the change-based training probability to train the training machine-learning models320and/or322. Following the training to predict service escalation probabilities, history factors are optionally derived for a product user, who initiated a service ticket, and/or a service agent, who is assigned to the service ticket, block412. The system300derives the history factors for a model to predict service escalation. 
In embodiments, this can include the production escalation prediction system324receiving online production data that includes a pending urgent technical support ticket that contains the initial interactions200between the software product user Chris and the technical support agent Dana concerning a remote mount problem, as depicted byFIG.2. Continuing the example, the production escalation prediction system324derives history factors which indicate that Chris previously initiated 19 technical support tickets and escalated 9 of these tickets, Dana was the technical support agent for all of Chris' tickets, and Chris' employer upgraded to a stricter service level agreement for the software product, which is among 5 of the support company's products that Chris' employer uses. Having derived history factors, the history-based machine-learning model is optionally applied to the service ticket and the history factors to predict a history-based probability that the product user escalates service for the service ticket, block414. The system300applies history factors to a model to predict service escalation. For example, and without limitation, this can include the history-based production machine-learning model326using Chris' history of frequently escalating services, including when Dana was the technical support agent, Dana's recent stage as a junior technical support agent with limited experience solving the software product's problems, and Chris' employer upgrade to a stricter service level agreement for the software product to predict a 45% probability that Chris will escalate service within the next 4 hours. After the history-based escalation probability that the product user escalates service for the service ticket is predicted, the history-based probability is optionally output, block416. The system300outputs the history-based probability that a product user will escalate service.
By way of example and without limitation, this can include the history-based production machine-learning model326outputting the prediction of the 45% probability that Chris will escalate service within the next 4 hours, and the explanation that the prediction is based on 1) Chris' history of frequently escalating services, 2) Dana's limited experience solving the software product's problems, and 3) the stricter service level agreement for the software product. Having optionally predicted a history-based escalation probability that the product user escalates service for the service ticket, change factors are derived for services provided for a product user who initiated a service ticket, a priority assigned to the service ticket, times of service ticket interactions with a service agent, states and corresponding times associated with the service ticket, and/or an age of the service ticket, block418. The system300derives change factors for a model to predict service escalation. In embodiments, this can include the production escalation prediction system324receiving online production data that includes the pending urgent technical support ticket that contains all subsequent interactions202and204between the software product user Chris and the technical support agent Dana concerning the remote mount problem, and the technical support ticket's metadata206, as depicted byFIG.2.
Continuing the example, the production escalation prediction system324derives subsequent change factors which indicate that Chris changes the ticket's priority to urgent, the most recent comments were Chris' second request for help, which implies that Dana's advice was ineffective, and Chris' frustrated “Hello?”, the time series data's timestamps208which indicate that Chris' implied rejection of Dana's advice was before the first hourly observation of the service ticket's state after the ticket's initiation, and the lack of Dana's reply to Chris' second request for help within the next 5 hourly observations. After deriving change factors, the change-based machine-learning model is applied to the service ticket and the change factors to predict a change-based probability that the product user escalates service for the service ticket, block420. The system300applies change factors to a model to predict service escalation. For example, and without limitation, this can include the change-based production machine-learning model328using the technical support ticket's new urgent priority, the natural language processing of Chris' frustrated responses to Dana's mount problem advice, which implies that Chris following Dana's advice failed to correct Chris' problem, and the lack of Dana's reply to Chris' responses within 5 hours to predict a 95% probability that Chris will escalate service within the next 4 hours. When predicting a change-based probability that a product user escalates service for a service ticket, a modified escalation risk associated with a most recent service ticket interaction of the service ticket interactions is optionally detected, block422. The system detects last comments in service tickets that are from customers and are not answered, but which should not increase the predicted escalation probabilities.
By way of example and without limitation, this can include the change-based production machine-learning model328detecting the last comment102in Ann's service ticket100was from the customer and not answered, but this last comment102from Ann implies that following Bob's advice corrected her problem, unlike the last comments in service tickets that are from customers and are not answered, which typically increase the predicted escalation probabilities. If a modified escalation risk is detected, the modified escalation risk is optionally used to modify the output of the change-based probability or modify the change-based probability, block424. The system modifies the predicted escalation probability or its output based on last comments in service tickets that are from customers and are not answered, but which should not increase the predicted escalation probabilities. In embodiments, this can include the change-based production machine-learning model328using a filter to suppress the predicted probability of Ann escalating her service ticket100, because although the last comment102was from the customer and not answered, this last comment102from Ann implies that following Bob's advice corrected her problem, unlike the last comments in service tickets that are from customers and are not answered, which typically increase the predicted escalation probabilities. If no filters are applied to the modified escalation risk, the system300can use the modified escalation risk as a change factor that the change-based training machine-learning model322uses to modify the change-based probability. For example, the system300uses the last comment102from Ann, which implies that following Bob's advice corrected her problem, to reduce the predicted 2% probability of Ann escalating her service ticket100to a reduced 1% probability. 
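The modified-escalation-risk filter described above, which suppresses a prediction when an unanswered final customer comment actually expresses gratitude or resolution, can be sketched as follows. The keyword list is a stand-in assumption for the natural language processing step the disclosure describes, and the function name is illustrative.

```python
# Simple stand-in for the NLP that classifies a closing comment as grateful/resolved.
THANKS_MARKERS = ("thank", "that worked", "resolved", "fixed it")

def suppress_prediction(last_comment_author, last_comment_text, answered):
    """Return True when the last comment is from the customer, is unanswered,
    and implies the problem was corrected (e.g. Ann thanking Bob), so the
    usual 'unanswered customer comment' escalation signal should not fire."""
    if last_comment_author != "customer" or answered:
        return False
    text = last_comment_text.lower()
    return any(marker in text for marker in THANKS_MARKERS)
```

Depending on whether a filter is applied, the same signal could instead be fed back into the change-based model as a change factor that lowers the predicted probability, as in the 2% to 1% reduction in the example above.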
Following the prediction of the change-based escalation probability that the product user escalates service for the service ticket, the change-based probability is output, block426. The system300outputs the change-based probability that a product user will escalate service. By way of example and without limitation, this can include the change-based production machine-learning model328outputting the prediction of the 95% probability that Chris will escalate service within the next 4 hours, and the explanation that the prediction is based on 1) the last two comments which are Chris' frustrated responses to Dana's mount problem advice, which implies that Chris following Dana's advice failed to correct Chris' problem, 2) the lack of Dana's reply to Chris' responses within 5 hours, and 3) the technical support ticket's new urgent priority. Having predicted the history-based and change-based escalation probabilities that the product user escalates service for the service ticket, a combined probability that the product user escalates service for the service ticket based on the history-based probability and the change-based probability is optionally output, block428. The system300outputs the combined probability that a product user will escalate service. In embodiments, this can include the production escalation prediction system324combining the prediction of the 45% probability, which is based on history factors, with the prediction of the 95% probability, which is based on change factors, to result in a prediction of a weighted 75% probability, which is based on a mix of factors, that Chris will escalate service within the next 4 hours. 
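The combination of the history-based and change-based predictions can be sketched as a weighted mix. The disclosure does not specify the weights; the values below are an assumption chosen so that the 45% and 95% example yields the 75% combined probability described above.

```python
W_HISTORY = 0.4  # assumed weight; the weighting scheme is not specified
W_CHANGE = 0.6   # assumed weight

def combined_probability(history_prob, change_prob):
    """Weighted mix of the history-based and change-based escalation predictions."""
    return W_HISTORY * history_prob + W_CHANGE * change_prob
```

With these weights, `combined_probability(0.45, 0.95)` gives 0.75, matching the production example; the training example (5% and 1% combining to 2.5%) implies a somewhat different weighting, underscoring that the mix is configurable rather than fixed.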
The production escalation prediction system324outputs the prediction of the 75% probability that Chris will escalate service within the next 4 hours, and the explanation that the prediction is based on 1) Chris' quick and frustrated responses to Dana's mount problem advice, which implies that Chris following Dana's advice failed to correct Chris' problem, 2) the lack of Dana's reply to Chris' responses, 3) Chris' history of frequently escalating services, and 4) Dana's limited experience solving the software product's problems. AlthoughFIG.4depicts the blocks402-428occurring in a specific order, the blocks402-428can occur in another order. In other implementations, each of the blocks402-428can also be executed in combination with other blocks and/or some blocks may be divided into a different set of blocks. An exemplary hardware device in which the subject matter may be implemented shall now be described. Those of ordinary skill in the art will appreciate that the elements illustrated inFIG.5can vary depending on the system implementation. With reference toFIG.5, an exemplary system for implementing the subject matter disclosed herein includes a hardware device500, including a processing unit502, a memory504, a storage506, a data entry module508, a display adapter510, a communication interface512, and a bus514that couples elements504-512to the processing unit502. The bus514can comprise any type of bus architecture. Examples include a memory bus, a peripheral bus, a local bus, etc. The processing unit502is an instruction execution machine, apparatus, or device and can comprise a microprocessor, a digital signal processor, a graphics processing unit, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), etc. The processing unit502may be configured to execute program instructions stored in the memory504and/or the storage506and/or received via the data entry module508.
The memory504can include a read only memory (ROM)516and a random access memory (RAM)518. The memory504may be configured to store program instructions and data during operation of the hardware device500. In various embodiments, the memory504can include any of a variety of memory technologies such as static random-access memory (SRAM) or dynamic RAM (DRAM), including variants such as dual data rate synchronous DRAM (DDR SDRAM), error correcting code synchronous DRAM (ECC SDRAM), or RAMBUS DRAM (RDRAM), for example. The memory504can also include nonvolatile memory technologies such as nonvolatile flash RAM (NVRAM) or ROM. In some embodiments, it is contemplated that the memory504can include a combination of technologies such as the foregoing, as well as other technologies not specifically mentioned. When the subject matter is implemented in a computer system, a basic input/output system (BIOS)520, containing the basic routines that help to transfer information between elements within the computer system, such as during start-up, is stored in the ROM516. The storage506can include a flash memory data storage device for reading from and writing to flash memory, a hard disk drive for reading from and writing to a hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and/or an optical disk drive for reading from or writing to a removable optical disk such as a CD ROM, DVD or other optical media. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the hardware device500. It is noted that the methods described herein may be embodied in executable instructions stored in a computer readable medium for use by or in connection with an instruction execution machine, apparatus, or device, such as a computer-based or processor-containing machine, apparatus, or device. 
It will be appreciated by those skilled in the art that for some embodiments, other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, RAM, ROM, and the like, can also be used in the exemplary operating environment. As used here, a “computer-readable medium” can include one or more of any suitable media for storing the executable instructions of a computer program in one or more of an electronic, magnetic, optical, and electromagnetic format, such that the instruction execution machine, system, apparatus, or device can read (or fetch) the instructions from the computer readable medium and execute the instructions for carrying out the described methods. A non-exhaustive list of conventional exemplary computer readable media includes: a portable computer diskette; a RAM; a ROM; an erasable programmable read only memory (EPROM or flash memory); optical storage devices, including a portable compact disc (CD), a portable digital video disc (DVD), a high definition DVD (HD-DVD™), a BLU-RAY disc; and the like. A number of program modules may be stored on the storage506, the ROM516or the RAM518, including an operating system522, one or more applications programs524, program data526, and other program modules528. A user can enter commands and information into the hardware device500through data entry module508. The data entry module508can include mechanisms such as a keyboard, a touch screen, a pointing device, etc. Other external input devices (not shown) are connected to the hardware device500via an external data entry interface530. By way of example and not limitation, external input devices can include a microphone, joystick, game pad, satellite dish, scanner, or the like. In some embodiments, external input devices can include video or audio input devices such as a video camera, a still camera, etc.
The data entry module508may be configured to receive input from one or more users of the hardware device500and to deliver such input to the processing unit502and/or the memory504via the bus514. A display532is also connected to the bus514via the display adapter510. The display532may be configured to display output of the hardware device500to one or more users. In some embodiments, a given device such as a touch screen, for example, can function as both the data entry module508and the display532. External display devices can also be connected to the bus514via the external display interface534. Other peripheral output devices, not shown, such as speakers and printers, may be connected to the hardware device500. The hardware device500can operate in a networked environment using logical connections to one or more remote nodes (not shown) via the communication interface512. The remote node may be another computer, a server, a router, a peer device or other common network node, and typically includes many or all of the elements described above relative to the hardware device500. The communication interface512can interface with a wireless network and/or a wired network. Examples of wireless networks include, for example, a BLUETOOTH network, a wireless personal area network, a wireless 802.11 local area network (LAN), and/or wireless telephony network (e.g., a cellular, PCS, or GSM network). Examples of wired networks include, for example, a LAN, a fiber optic network, a wired personal area network, a telephony network, and/or a wide area network (WAN). Such networking environments are commonplace in intranets, the Internet, offices, enterprise-wide computer networks and the like. In some embodiments, the communication interface512can include logic configured to support direct memory access (DMA) transfers between the memory504and other devices. 
In a networked environment, program modules depicted relative to the hardware device500, or portions thereof, may be stored in a remote storage device, such as, for example, on a server. It will be appreciated that other hardware and/or software to establish a communications link between the hardware device500and other devices may be used. It should be understood that the arrangement of the hardware device500illustrated inFIG.5is but one possible implementation and that other arrangements are possible. It should also be understood that the various system components (and means) defined by the claims, described below, and illustrated in the various block diagrams represent logical components that are configured to perform the functionality described herein. For example, one or more of these system components (and means) may be realized, in whole or in part, by at least some of the components illustrated in the arrangement of the hardware device500. In addition, while at least one of these components is implemented at least partially as an electronic hardware component, and therefore constitutes a machine, the other components may be implemented in software, hardware, or a combination of software and hardware. More particularly, at least one component defined by the claims is implemented at least partially as an electronic hardware component, such as an instruction execution machine (e.g., a processor-based or processor-containing machine) and/or as specialized circuits or circuitry (e.g., discrete logic gates interconnected to perform a specialized function), such as those illustrated inFIG.5. Other components may be implemented in software, hardware, or a combination of software and hardware. Moreover, some or all of these other components may be combined, some may be omitted altogether, and additional components may be added while still achieving the functionality described herein.
Thus, the subject matter described herein may be embodied in many different variations, and all such variations are contemplated to be within the scope of what is claimed. In the descriptions above, the subject matter is described with reference to acts and symbolic representations of operations that are performed by one or more devices, unless indicated otherwise. As such, it is understood that such acts and operations, which are at times referred to as being computer-executed, include the manipulation by the processing unit of data in a structured form. This manipulation transforms the data or maintains it at locations in the memory system of the computer, which reconfigures or otherwise alters the operation of the device in a manner well understood by those skilled in the art. The data structures where data is maintained are physical locations of the memory that have particular properties defined by the format of the data. However, while the subject matter is described in a context, it is not meant to be limiting as those of skill in the art will appreciate that various of the acts and operations described hereinafter can also be implemented in hardware. To facilitate an understanding of the subject matter described above, many aspects are described in terms of sequences of actions. At least one of these aspects defined by the claims is performed by an electronic hardware component. For example, it will be recognized that the various actions may be performed by specialized circuits or circuitry, by program instructions being executed by one or more processors, or by a combination of both. The description herein of any sequence of actions is not intended to imply that the specific order described for performing that sequence must be followed. All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. 
While one or more implementations have been described by way of example and in terms of the specific embodiments, it is to be understood that one or more implementations are not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
11861519

DETAILED DESCRIPTION

The present invention, in some embodiments thereof, relates to a probabilistic fault diagnosis system and, more specifically, but not exclusively, to a system for probabilistic diagnosis of a fault in an electrical appliance. A probabilistic diagnosis system is a diagnosis system using probabilistic reasoning. Computerized probabilistic diagnosis systems facilitate incorporating expertise of more than one professional in a certain domain, and/or experience gained over a period of time, into a common expert system for the certain domain. Such expertise may comprise not only knowing which problems can be associated with which observed symptoms, but also which problems are more likely given a plurality of observed symptoms. When building a probabilistic diagnosis system it is necessary to choose a mathematical model (a statistical model) representing one or more probabilistic relationships between a plurality of possible problems and a plurality of possible observed symptoms. In addition, it is necessary to create a domain-specific structure for the statistical model comprising a domain-specific plurality of possible dysfunctions (problems), a domain-specific plurality of observed phenomena (symptoms) and a domain-specific plurality of probabilistic relationships, and populate the statistical model with one or more probabilities. Given a plurality of observed symptoms, the statistical model may be used to compute the probabilities of the presence of various problems. When historical data, describing a plurality of historical events, is available, the historical data may be used to calculate one or more probabilities of an occurrence of a problem, given one or more observed symptoms. Each historical event in such historical data may comprise one or more historical observed symptoms and a historical diagnosed problem.
A plurality of historical observed symptoms and historical diagnosed problems may be extracted from the plurality of historical events and used to train the statistical model, populating the statistical model with the one or more probabilities. It may be that some of the historical data is structured in an identified structure. For example, in a system for diagnosing an electrical appliance, the identified structure may comprise one or more identified error codes and one or more component identifiers. In such a system, the historical data may comprise for example one or more of: an observed identified error code and a component identifier of a faulty component. On the other hand, it may be that some of the historical data is unstructured, typically comprising free text describing one or more observed symptoms and additionally or alternatively one or more diagnosed faults. For example, in a system for diagnosing an electrical appliance, the free text may describe one or more actions performed by a technician of the appliance, such as “I cleaned the filter”. In another example, the free text may describe one or more observations made by a person, such as “I saw a puddle of water under the dishwasher”, “there is a clicking sound when the pump works”, and “the washed laundry has grey stains”. Such unstructured historical data typically cannot be used directly to train a statistical model. However, training the statistical model using information described in the unstructured historical data may increase accuracy of the one or more probabilities of the statistical model. A professional using an output of a system trained using the information described in the unstructured historical data may be able to resolve a problem faster and for a lower cost than when using an output of a less accurately trained statistical model. 
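As a minimal sketch of how such historical events could yield probabilities, the following estimates P(problem | symptom) by relative-frequency counting. The flat (symptoms, problem) event encoding and all entity names here are illustrative assumptions, not the data format of any particular embodiment.

```python
from collections import Counter

def estimate_problem_given_symptom(events, symptom):
    """Estimate P(problem | symptom) for every problem appearing in the
    historical events, using simple relative-frequency counting."""
    # Keep only the diagnosed problems of events exhibiting the symptom.
    matching = [problem for symptoms, problem in events if symptom in symptoms]
    counts = Counter(matching)
    total = len(matching)
    return {problem: n / total for problem, n in counts.items()}

# Toy historical data: each event pairs the observed symptoms with the
# problem a technician eventually diagnosed.
events = [
    ({"PuddleUnderAppliance"}, "Pipe123Cracked"),
    ({"PuddleUnderAppliance", "DishesNotClean"}, "Pipe123Cracked"),
    ({"PuddleUnderAppliance"}, "DoorSealWorn"),
    ({"DishesNotClean"}, "SprayArmBlocked"),
]

probs = estimate_problem_given_symptom(events, "PuddleUnderAppliance")
# Of the three events with the symptom, two were diagnosed as Pipe123Cracked.
```

A real training procedure would estimate conditional probability tables over all variables jointly rather than one symptom at a time, but the counting principle is the same.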
For example, when using a statistical model for diagnosing a medical problem, more accurate statistical model probabilities may reduce occurrences of a false diagnosis and reduce administration of ineffective treatments. In another example, when using a statistical model for diagnosing a problem in an electrical appliance, more accurate statistical model probabilities may reduce appliance downtime by expediting repair time and may reduce cost of maintenance by reducing occurrences of unnecessary component replacements due to misdiagnosis. The present invention, in some embodiments thereof, proposes generating a statistical model for fault diagnosis in an identified domain by extracting a plurality of values from both the structured historical data and the unstructured historical data according to a semantic model describing the identified domain, and training a statistical model derived from the semantic model using input data comprising the extracted plurality of values. In some embodiments of the present invention, such a statistical model is then used in a diagnosis system for diagnosis given one or more observed symptoms by computing one or more probabilities of one or more possible problems. Using a semantic model allows incorporating unstructured historical data into a formal structure of the statistical model, making a combination of the structured historical data and the unstructured historical data available for a training process of the statistical model, improving accuracy of the statistical model's probabilities. In addition, in some embodiments the present invention proposes further training the statistical model used in the diagnosis system using additional input data generated from the one or more observed symptoms and one or more resolution descriptions related to the one or more observed symptoms. Optionally, further training the statistical model is done continuously. 
Further training the statistical model using additional input data generated from one or more resolution descriptions related to the one or more observed symptoms may further improve accuracy of the statistical model's probabilities. Continuously further training the statistical model may continuously improve accuracy of the statistical model's probabilities over time, thus improving diagnosis performed using the statistical model and allowing reduction of penalties incurred by misdiagnosis such as length of patient recovery time, length of appliance downtime, cost of replacement components, and length of repair time. In some embodiments of the present invention the semantic model comprises a plurality of semantic entities, each selected from: one or more observable phenomena (symptoms), one or more dysfunctions (problems), one or more factors each influencing a probability of a structural or functional relationship between two of the plurality of semantic entities (influencing-factors), one or more symptom details (issues) and one or more locations. Some examples of a symptom are a fault code, “PuddleUnderAppliance”, and “DishesNotClean”. A dysfunction (problem) may be a condition that manifests as a symptom. In a semantic model describing fault diagnosis of an electrical appliance, some examples of a problem are a bad initialization status code, “Pipe123Cracked”, and “Fuse5Disconnected”. In a semantic model describing diagnosis of a medical issue an example of a problem is “MitralValveLeaking”. Some examples of an influencing-factor are age of an appliance, a weather condition and a production run identifier. Some examples of an issue are “DoesNotStart” and “Leaks”. Some examples of a location are a component identifier and “Appliance”. In addition, in such embodiments the semantic model comprises a plurality of semantic relationships, each representing a parent-child relationship between two semantic entities of the plurality of semantic entities.
Possible parent-child relationships are a causes relationship, where a child problem is caused by a parent problem, a manifests relationship, where a child symptom is manifested by a parent problem, an influences relationship, where a child symptom or a child problem is influenced by a parent influencing factor, and a location-of relationship where a child location is related to a parent issue. In addition, in such embodiments the semantic model optionally comprises a mapping identifying each of one or more symptoms and/or problems by one or more combinations of an issue and a location connected by a location-of relationship. For example, a symptom of “machine does not start” may be mapped to a combination of an issue of “does not start” connected by a location-of relationship to a location of “machine”. Optionally, the mapping identifies each of one or more symptoms and/or problems by an issue. This mapping may improve accuracy of extraction of some of the plurality of values from the unstructured historical data. In some embodiments the statistical model is a Bayesian network model. A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Each of the plurality of nodes of the DAG represents one of the set of variables, and each of the plurality of edges of the DAG represents a conditional dependency between two of the set of variables. In some embodiments of the present invention where the statistical model is a Bayesian network model, each node of a plurality of nodes of the Bayesian network model represents a Boolean occurrence variable of one of some of the plurality of semantic entities of the semantic model. In addition, in such embodiments, each edge of a plurality of edges of the Bayesian network model represents one of some of the plurality of semantic relationships of the semantic model. 
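One way to picture the derivation of the network structure from such a semantic model is the sketch below: the causal-style relationships (causes, manifests, influences) become directed edges of the Bayesian network, while location-of pairs stay in the semantic layer. The triple encoding and all entity names are hypothetical, chosen only to mirror the examples in the text.

```python
# Hypothetical semantic model: (relation, parent, child) triples.
semantic_relationships = [
    ("causes", "CurrentRegulatorMalfunction", "Fuse5Disconnected"),
    ("manifests", "Fuse5Disconnected", "MachineDoesNotStart"),
    ("manifests", "Pipe123Cracked", "PuddleUnderAppliance"),
    ("influences", "ApplianceAge20Years", "Pipe123Cracked"),
    ("location-of", "DoesNotStart", "Machine"),  # stays in the semantic layer
]

# Only the causal-style relationships become edges of the network.
NETWORK_RELATIONS = {"causes", "manifests", "influences"}

# Edges of the directed acyclic graph: parent -> child.
edges = [(parent, child)
         for relation, parent, child in semantic_relationships
         if relation in NETWORK_RELATIONS]

# Every endpoint of an edge becomes a Boolean occurrence-variable node.
nodes = sorted({name for edge in edges for name in edge})
```

Note that the location entity “Machine” never becomes a node; it only participates in the issue/location mapping described below.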
In such embodiments, each edge of the plurality of edges connecting two nodes of the plurality of nodes represents a conditional dependency between two Boolean occurrence variables of the two semantic entities represented by the two nodes, one node being a parent node and a second node being a child node such that a child Boolean occurrence variable represented by the child node is conditionally dependent on all parent Boolean occurrence variables represented by all parent nodes of the child node. The value of a node's Boolean occurrence variable is probabilistically independent of values of Boolean occurrence variables of other ancestor nodes besides the node's direct parents. A Bayesian network has one or more probabilities. The one or more probabilities of a Bayesian network include for each node a table specifying a probability of each value of the node for each set of possible values for the node's parents. Such a table is called a conditional probability table (CPT). Optionally, the present invention proposes representing each node's CPT by a local probability model. An unrestricted CPT, for a Boolean node having an amount of parents denoted by n, would have 2^n independent entries, one for each possible combination of parent values. Using a local probability model may allow specifying fewer parameters than when specifying an unrestricted CPT. Specifying fewer parameters may reduce an amount of storage required for storing a statistical model and additionally or alternatively an amount of storage required for storing training data for training the statistical model.
In embodiments where the statistical model's plurality of nodes and plurality of edges are defined as described above, the causes relationship and the manifests relationship have a causal nature, allowing associating with each such relationship a local probability model that satisfies a property of independence of causal influences, where an effect of each parent on a child may be specified individually and an overall CPT may be calculated from a plurality of individual influences. Non-limiting examples of a local probability model satisfying a property of independence of causal influences are a noisy-or model and a logistic conditional probability distribution. In addition, in some embodiments the present invention proposes defining one or more formal definitions tying the statistical model's structure and probabilities to semantics of the domain according to the semantic model, and using the one or more formal definitions to expedite extraction of some of the plurality of values from the unstructured historical data. Some examples of a formal definition are a regular expression, i.e. a sequence of characters that define a search pattern, optionally defined according to the semantic model describing the identified domain, a rule mapping an identified text to a semantic entity of the statistical model's plurality of semantic entities or to a semantic relationship of the semantic model's plurality of semantic relationships, and an annotation of free text of historical unstructured data mapping an identified text to a semantic entity of the semantic model's plurality of semantic entities or to a semantic relationship of the semantic model's plurality of semantic relationships. Optionally, rule based natural language processing methods are used to extract some of the plurality of values from the unstructured historical data according to one or more rules. 
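The noisy-or model named above can be sketched as follows: each active parent i independently fails to cause the child with probability 1 − p_i, so only n parameters (plus an optional leak term) need to be stored, yet the full 2^n-row CPT can still be expanded on demand. The parameter values and entity names are illustrative assumptions.

```python
from itertools import product

def noisy_or(parent_probs, active, leak=0.0):
    """P(child = true) under a noisy-or model: each active parent i
    independently fails to cause the child with probability 1 - p_i."""
    fail = 1.0 - leak
    for parent, p in parent_probs.items():
        if active[parent]:
            fail *= 1.0 - p
    return 1.0 - fail

def full_cpt(parent_probs, leak=0.0):
    """Expand the n noisy-or parameters into the full 2**n-row CPT."""
    parents = sorted(parent_probs)
    return {
        states: noisy_or(parent_probs, dict(zip(parents, states)), leak)
        for states in product([False, True], repeat=len(parents))
    }

# Two per-parent causal strengths expand into a four-row CPT; with both
# causes present, P = 1 - (1 - 0.9) * (1 - 0.4) = 0.94.
params = {"Pipe123Cracked": 0.9, "DoorSealWorn": 0.4}
cpt = full_cpt(params)
```

With ten parents this stores 10 parameters instead of 1024 CPT rows, which is the storage saving the text refers to.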
In addition or alternatively, machine learning techniques are optionally used to extract some of the plurality of values from the unstructured historical data according to some annotated free text of the unstructured historical data. Using machine learning techniques may facilitate capturing text that is similar but not identical to the annotated text. In addition, some machine learning techniques are capable of identifying relationships as well as entities, and using such machine learning techniques facilitates extracting one or more symptom values and/or problem values according to one or more combinations of an issue and a location connected by a location-of relationship mapped by the semantic model to one or more symptoms and problems. In addition or alternatively, one or more of the plurality of values are extracted from the unstructured historical data by matching free text of the unstructured historical data with one or more regular expressions. The plurality of semantic entities and plurality of semantic relationships may be an ontology, that is, a formal naming and definition of types, properties, and interrelationships of entities that exist in a domain of discourse. The following non-limiting disclosure focuses on possible embodiments where the present invention is used in an appliance-repair domain, for fault diagnosis of an electrical appliance, and the semantic model comprises a plurality of semantic entities and semantic relationships existing in the appliance-repair domain. However, the present invention is not limited to the appliance-repair domain and may be used in other domains such as medical diagnosis and technical system maintenance. In embodiments used in such another domain, the plurality of semantic entities and semantic relationships are defined according to the other domain.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways. The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. 
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Reference is now made to FIG. 1, showing a schematic block diagram of an exemplary system 100 for generating a statistical model, according to some embodiments of the present invention. In such embodiments, at least one hardware processor 101 is connected to at least one non-volatile digital storage 102. At least one storage 102 is optionally used to store a generated statistical model. Examples of a non-volatile digital storage are a hard disk and non-volatile random access memory (NVRAM). Optionally, at least one storage 102 is used to store structured historical information and in addition or alternatively unstructured historical free text information related to at least some of a plurality of historical events, for example a plurality of historical electrical appliance events, or a plurality of historical medical records. An example of a historical electrical appliance event is an appliance malfunction event, where the structured historical information comprises a component identifier of a malfunctioning component and the unstructured historical free text information comprises a description of some observed symptoms. Optionally, at least one hardware processor is connected to at least one digital communication network interface 103. Examples of a digital communication network interface are a wireless network interface or an Ethernet interface. Optionally, at least one hardware processor 101 sends the generated statistical model to another hardware processor using at least one network interface 103.
Optionally, the structured historical information and in addition or alternatively the unstructured historical free text information is digital information, stored in a digital repository such as a database. Optionally, at least one hardware processor 101 receives the structured historical information and in addition or alternatively the unstructured historical free text information regarding the plurality of historical electrical appliance events via at least one network interface 103. In order to generate a statistical model for fault diagnosis in an identified diagnosis domain, system 100 implements in some embodiments the following optional method. Reference is now made also to FIG. 2, showing a flowchart schematically representing an optional flow of operations 200 for generating a statistical model for fault diagnosis, according to some embodiments of the present invention. In such embodiments, at least one hardware processor 101 uses a statistical model derived from a semantic model. The semantic model optionally represents an ontology of the identified diagnosis domain and optionally has a plurality of semantic entities, each semantic entity relating to one or more properties of the identified diagnosis domain, e.g. a plurality of domain entities existing in the identified diagnosis domain. When the identified diagnosis domain is diagnosis of an electrical appliance, some possible properties are an electrical appliance's structure, for example an identifier of a component of the electrical appliance, an electrical appliance's functionality, for example “does not start” and “leaks”, and an electrical appliance's environment, for example “under the machine”. When the identified diagnosis domain is diagnosis of a medical condition, some of the properties may be related to a medical term of a medical ontology such as an organ or a measured vital sign.
Optionally, each of the plurality of semantic entities is a member of a group of identified semantic entities comprising: at least one observable phenomenon (a symptom), at least one dysfunction (a problem), at least one factor influencing a probability of a structural or functional relationship between two of the plurality of semantic entities (an influencing-factor), a symptom detail (an issue), and a location. Some examples of a symptom are a fault code value, “PuddleUnderAppliance”, and “DishesNotClean”. Some examples of a problem are a bad initialization status code value, “Pipe123Cracked”, and “Fuse5Disconnected”. Some examples of an influencing-factor are age of an appliance, a weather condition value and a production run identifier. Some examples of an issue are “DoesNotStart” and “Leaks”. Some examples of a location are a component identifier and “Appliance”. The semantic model optionally has a plurality of semantic relationships, each semantic relationship connecting two of the plurality of semantic entities and representing a parent-child relationship therebetween. Optionally, each of the plurality of semantic relationships is selected from a group of identified parent-child relationships comprising: a causes relationship, a manifests relationship, an influences relationship, and a location-of relationship. Optionally, each causes relationship of the group of identified parent-child relationships connects a parent problem of the group of identified semantic entities with a child problem of the group of identified semantic entities. In an example where the diagnosis domain is diagnosis of an electrical appliance, a causes relationship may connect a parent problem of “current regulator malfunction” with a child problem of “component 123 malfunction”.
Optionally, each manifests relationship of the group of identified parent-child relationships connects a parent problem of the group of identified semantic entities with a child symptom of the group of identified semantic entities. In an example where the diagnosis domain is diagnosis of an electrical appliance, a manifests relationship may connect a parent problem of “current regulator malfunction” with a child symptom of “component 123 is black”. Optionally, each influences relationship of the group of identified parent-child relationships connects a parent influencing-factor of the group of identified semantic entities with a child problem of the group of identified semantic entities or a child symptom of the group of identified semantic entities. In an example where the diagnosis domain is diagnosis of an electrical appliance, an influences relationship may connect a parent influencing-factor of “machine age=20 years” with a child symptom of “component 123 is cracked”. Optionally, each location-of relationship of the group of identified parent-child relationships connects a parent issue of the group of identified semantic entities with a child location of the group of identified semantic entities. In an example where the diagnosis domain is diagnosis of an electrical appliance, a location-of relationship may connect a parent issue of “does not start” with a child location of “machine”. Optionally, each symptom or problem of the group of identified semantic entities is mapped to a group comprising one or more pairs of semantic entities of the group of identified semantic entities, each pair consisting of an issue of the group of identified semantic entities and a location of the group of identified semantic entities connected by a location-of relationship of the group of identified parent-child relationships. Optionally, a symptom is identified by the one or more pairs the symptom is mapped to.
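The issue/location mapping just described might be realized by a lookup of the following shape, including an issue-only fallback for cases where no location is extracted from the text. All map entries and names are hypothetical illustrations.

```python
# Hypothetical mapping from (issue, location) pairs to canonical symptoms,
# mirroring the location-of based identification described above.
SYMPTOM_MAP = {
    ("DoesNotStart", "Machine"): "MachineDoesNotStart",
    ("Leaks", "Appliance"): "PuddleUnderAppliance",
}

def resolve_symptom(issue, location=None):
    """Return the canonical symptom for an extracted issue/location pair,
    falling back to issue-only identification when no location was found."""
    if location is not None and (issue, location) in SYMPTOM_MAP:
        return SYMPTOM_MAP[(issue, location)]
    # Issue-only fallback: match any mapping entry with the same issue.
    for (mapped_issue, _location), symptom in SYMPTOM_MAP.items():
        if mapped_issue == issue:
            return symptom
    return None
```

For example, free text yielding the issue “does not start” at location “machine” would resolve to the single symptom node “MachineDoesNotStart”.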
Optionally, a symptom is identified by an issue mapped to the symptom, without a location connected to the issue. The statistical model is optionally a Bayesian network. Optionally, the statistical model represents some of the plurality of semantic entities of the semantic model. Optionally, the statistical model represents some of the plurality of semantic relationships of the semantic model. Optionally, the statistical model comprises a plurality of nodes, each representing one of some of the plurality of semantic entities, connected by a plurality of edges, each representing one of some of the plurality of semantic relationships. Optionally, the some of the plurality of semantic entities are each one of a group comprising: a problem of the group of identified semantic entities, a symptom of the group of identified semantic entities, and an influencing-factor of the group of identified semantic entities. Optionally, the some of the plurality of semantic relationships are each one of a group consisting of: a causes relationship of the group of identified parent-child relationships, a manifests relationship of the group of identified parent-child relationships, and an influences relationship of the group of identified parent-child relationships. In 204, at least one hardware processor 101 optionally extracts a plurality of structured values from structured historical information organized in an identified structure and related to at least some of a plurality of historical events, for example historical electrical appliance events or historical medical events. When generating a statistical model for diagnosing a fault in an electrical appliance, the identified structure optionally comprises a plurality of component identifiers of one or more components of the electrical appliance. Optionally, each of the plurality of structured values is associated with at least one of the plurality of semantic entities or the plurality of semantic relationships.
In 207, at least one hardware processor 101 optionally extracts a plurality of unstructured values from unstructured historical free text information related to at least some of the plurality of historical events. Optionally, each of the plurality of unstructured values is associated with at least one of the plurality of semantic entities or the plurality of semantic relationships. In 210, at least one hardware processor optionally generates input data to train the statistical model from the plurality of structured values and the plurality of unstructured values. Optionally, some of the plurality of structured values associated with one or more symptoms of the plurality of semantic entities and, additionally or alternatively, some of the plurality of unstructured values associated with the one or more symptoms of the plurality of semantic entities are extracted according to one or more pairs consisting of an issue of the plurality of semantic entities and a location of the plurality of semantic entities mapped to the one or more symptoms. In addition, some of the plurality of structured values associated with one or more problems of the plurality of semantic entities, and additionally or alternatively some of the plurality of unstructured values associated with the one or more problems of the plurality of semantic entities, are extracted according to an issue of the plurality of semantic entities mapped to the one or more problems or one or more pairs consisting of an issue of the plurality of semantic entities and a location of the plurality of semantic entities mapped to the one or more problems. Reference is made now also to FIGS. 3, 4 and 5, showing flowcharts schematically representing optional flows of operations for extracting values and generating input data, according to some embodiments of the present invention.
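The input-data generation of 210 might be sketched as combining values extracted from the structured records with values extracted from free text into one Boolean training row per historical event. The flat-row encoding and variable names are assumptions for illustration.

```python
def generate_training_row(structured_values, unstructured_values, variables):
    """Combine values extracted from structured records and from free text
    into one Boolean training row over the network's variables."""
    observed = set(structured_values) | set(unstructured_values)
    return {var: (var in observed) for var in variables}

variables = ["Pipe123Cracked", "PuddleUnderAppliance", "MachineDoesNotStart"]
row = generate_training_row(
    structured_values={"Pipe123Cracked"},          # e.g. faulty-component ID
    unstructured_values={"PuddleUnderAppliance"},  # e.g. extracted from notes
    variables=variables,
)
```

Each such row serves as one training example for estimating the network's conditional probability tables.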
Reference is made now toFIG.3, showing a flowchart schematically representing an optional flow of operations500for extracting unstructured values using one or more regular expressions, according to some embodiments of the present invention. In such embodiments, in501at least one hardware processor101receives at least one regular expression defined according to the plurality of semantic entities and the plurality of semantic relationships. Optionally, the at least one regular expression matches at least one value associated with at least one of the plurality of semantic entities and the plurality of semantic relationships. When the diagnosis domain is electrical appliance diagnosis, the at least one regular expression may identify a fault code of an electrical appliance. Optionally, at least one hardware processor receives the at least one regular expression by reading the at least one regular expression from at least one storage102or via at least one network interface104. In510, at least one hardware processor101identifies one or more matching values in the unstructured historical free text information matching the at least one regular expression. Reference is made now toFIG.4, showing a flowchart schematically representing optional flows of operations600for extracting values and generating input data, according to some embodiments of the present invention. In such embodiments, at least one hardware processor101receives at least one rule mapping an identified text to a semantic entity of the plurality of semantic entities or a semantic relationship of the plurality of semantic relationships. For example, a rule may map identified text “the fuse is defective” to a semantic entity of “DefectiveFuse”. 
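The matching of steps 501 and 510 can be sketched in code. This is a minimal illustration only: the fault-code format (a letter "E" followed by digits) and the sample text are assumptions for the example, not part of the description.

```python
import re

# Hypothetical regular expression for step 510: match electrical-appliance
# fault codes of the form "E" followed by two or three digits. The code
# format is an illustrative assumption.
FAULT_CODE_PATTERN = re.compile(r"\bE\d{2,3}\b")

def extract_fault_codes(free_text: str) -> list[str]:
    """Identify values in unstructured free text matching the expression."""
    return FAULT_CODE_PATTERN.findall(free_text)

codes = extract_fault_codes("Customer reports error E42 on startup, later E105.")
# codes == ["E42", "E105"]
```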
In610, at least one hardware processor optionally applies rule-based natural language processing methods as known in the art to the unstructured historical free text information to extract one or more unstructured values from the unstructured historical free text information and map the one or more unstructured values to one or more semantic entities of the plurality of semantic entities or one or more semantic relationships of the plurality of semantic relationships, according to the at least one rule. Examples of rule-based natural language processing methods are International Business Machines (IBM) Watson Explorer Content Analytics and the Stanford Core NLP Suite. Optionally, the input data comprises the mapping between the one or more unstructured values and the one or more semantic entities of the plurality of semantic entities or one or more semantic relationships of the plurality of semantic relationships. Reference is made now toFIG.5, showing a flowchart schematically representing optional flows of operations700for extracting values and generating input data, according to some embodiments of the present invention. In such embodiments, at least one hardware processor101receives in701annotated text comprising some of the unstructured historical free text information annotated with one or more annotations each mapping an identified text to a semantic entity of the plurality of semantic entities or a semantic relationship of the plurality of semantic relationships. For example, an annotation may map identified text "the fuse is defective" to a semantic entity of "DefectiveFuse". In710, at least one hardware processor101optionally trains at least one machine learning software module using the annotated text. Examples of a machine learning software module are IBM Watson Natural Language Understanding and RASA NLU.
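The rule mapping of step 610 can be sketched as a lookup from identified texts to semantic entity names. The rule set below, including the second entity "BlownLamp", is an illustrative assumption; only the "the fuse is defective" → "DefectiveFuse" mapping comes from the example above.

```python
# Minimal sketch of the rule mapping in step 610: each rule maps an
# identified text fragment to a semantic entity name. The rule set is
# illustrative; "BlownLamp" is a hypothetical entity.
RULES = {
    "the fuse is defective": "DefectiveFuse",
    "the lamp does not light": "BlownLamp",
}

def map_text_to_entities(free_text: str) -> dict[str, str]:
    """Return the identified texts found in free_text mapped to entities."""
    lowered = free_text.lower()
    return {text: entity for text, entity in RULES.items() if text in lowered}

mapping = map_text_to_entities("Inspection showed the fuse is defective.")
# mapping == {"the fuse is defective": "DefectiveFuse"}
```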
In720, at least one hardware processor101optionally applies the at least one machine learning software module to the unstructured historical free text information to extract one or more unstructured values from the free text and map the one or more unstructured values to one or more semantic entities of the plurality of semantic entities or one or more semantic relationships of the plurality of semantic relationships. Optionally, the input data comprises the mapping between the one or more unstructured values and the one or more semantic entities of the plurality of semantic entities or one or more semantic relationships of the plurality of semantic relationships. Reference is made again toFIG.2. At least one hardware processor101in214optionally trains the statistical model by inputting the generated input data to the statistical model and in220optionally outputs the statistical model. Optionally, the trained statistical model output by at least one hardware processor101is used in a fault diagnosis system in order to process digital data comprising binary and textual representations of a plurality of observations for the purpose of diagnosing a fault in a diagnosable entity of the identified diagnosis domain. For example, a statistical model generated for an electrical appliance diagnosis domain may be used in a system for diagnosing a fault in an electrical appliance, and the plurality of observations may be related to the electrical appliance. In another example, a statistical model generated for a medical diagnosis domain may be used in a system for diagnosing a medical condition of a patient. Optionally, at least one hardware processor101outputs the statistical model by storing the statistical model on at least one storage102or by sending the statistical model to at least one other hardware processor via at least one network interface103. When the statistical model is a Bayesian network, methods as known in the art may be used to train the statistical model.
In some embodiments of the present invention, for example some embodiments using a Bayesian network, a local probability model may be assigned to some of the plurality of nodes. Reference is now made also toFIGS.6A and6B, showing flowcharts schematically representing an optional flow of operations for defining a local probability model. Reference is made toFIG.6A, showing a flowchart schematically representing an optional flow of operations300for defining a local probability model for a node representing a symptom, according to some embodiments of the present invention. In such embodiments, each node of the plurality of nodes has a Boolean occurrence value indicating an occurrence of the node, and at least one hardware processor101assigns each node representing a symptom of the plurality of identified semantic entities a local probability model defined by in301optionally identifying a plurality of manifesting parent nodes, such that each is a parent node connected by a manifesting edge representing a manifests relationship of the group of identified parent-child relationships, to the symptom node, in303optionally identifying a plurality of influencing-factor parent nodes, such that each is a parent node connected by an influencing-factor edge representing an influencing-factor relationship of the group of identified parent-child relationships, to the symptom node, and in305optionally identifying a plurality of influencing combinations of a plurality of Boolean occurrence values of the plurality of influencing-factor parent nodes. Next, in307, for each of the plurality of influencing combinations, at least one hardware processor101optionally associates the symptom node with a noisy-or distribution. The associated noisy-or distribution may be defined according to the following optional method. 
Reference is made now also toFIG.6B, showing a flowchart schematically representing an optional flow of operations330for defining a noisy-or distribution, according to some embodiments of the present invention. In331, at least one hardware processor101optionally associates with each manifesting parent node of the plurality of manifesting parent nodes a probability that a true Boolean occurrence value of the manifesting parent node does not cause a true Boolean occurrence value of the symptom node, denoted by λw, and in333optionally associates with the symptom node a probability that the symptom node has a true Boolean occurrence value when none of the plurality of manifesting parent nodes has a true Boolean occurrence value, denoted by λ0. Next, at least one hardware processor101optionally associates with the symptom node a noisy-or distribution computed by: in335optionally computing a plurality of node terms by, for each of the plurality of manifesting parent nodes having a true Boolean occurrence value, subtracting the manifesting parent node's λw from 1, in337optionally multiplying the plurality of node terms to produce a parent product, in339optionally computing an independent term by subtracting λ0 from 1, in341optionally multiplying the parent product by the independent term to produce a node product, and in343optionally subtracting the node product from 1. Optionally, training the statistical model in214comprises deriving values for a plurality of λw and λ0 values from the input data using methods as known in the art. Reference is now made also toFIGS.7A and7B, showing flowcharts schematically representing another optional flow of operations for defining a local probability model. Reference is made toFIG.7A, showing a flowchart schematically representing an optional flow of operations400for defining a local probability model for a node representing a problem, according to some embodiments of the present invention.
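The noisy-or computation of steps 335 through 343 can be transcribed literally into code. The node names and probability values used below are illustrative assumptions.

```python
def noisy_or(lambda_w: dict[str, float], lambda_0: float,
             occurrence: dict[str, bool]) -> float:
    """Noisy-or distribution per steps 335-343.

    lambda_w maps each manifesting parent node to its lambda_w value,
    lambda_0 is the symptom node's lambda_0 value, and occurrence gives
    each parent node's Boolean occurrence value.
    """
    parent_product = 1.0
    for node, lam in lambda_w.items():
        if occurrence[node]:               # step 335: parents with true value
            parent_product *= (1.0 - lam)  # node term: subtract lambda_w from 1
    independent_term = 1.0 - lambda_0      # step 339: subtract lambda_0 from 1
    node_product = parent_product * independent_term  # step 341
    return 1.0 - node_product              # step 343
```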
In such embodiments, each node of the plurality of nodes has a Boolean occurrence value indicating an occurrence of the node and a numerical value of 1 if the node's Boolean occurrence value is true, otherwise 0, and at least one hardware processor101assigns each node representing a problem of the plurality of identified semantic entities a local probability model defined by in401optionally identifying a plurality of manifesting parent nodes, such that each is a parent node connected by a manifesting edge representing a manifests relationship of the plurality of identified parent-child relationships to the problem node, in403optionally identifying a plurality of influencing-factor parent nodes, such that each is a parent node connected by an influencing-factor edge representing an influencing-factor relationship of the plurality of identified parent-child relationships, to the problem node, and in405optionally identifying a plurality of influencing combinations of a plurality of Boolean occurrence values of the plurality of influencing-factor parent nodes. Next, in407, for each of the plurality of influencing combinations at least one hardware processor101optionally associates the problem node with a logistic conditional probability distribution. Reference is made now also toFIG.7B, showing a flowchart schematically representing an optional flow of operations430for defining a logistic conditional probability distribution, according to some embodiments of the present invention. In431, at least one hardware processor101optionally associates with each causing parent node of the plurality of causing parent nodes a node weight modifying the local probability model, denoted by wk, and in433optionally associates with the problem node an independent weight modifying the local probability model, denoted by w0.
Next, at least one hardware processor101optionally associates with the problem node a logistic conditional probability distribution computed by: in435optionally computing a plurality of node terms by, for each of the plurality of causing parent nodes, multiplying the causing parent node's wk by the causing parent node's numerical value, in437optionally adding the plurality of node terms to produce a parent sum, in439optionally adding the parent sum to the independent weight to produce a node sum, and in441optionally computing a sigmoid function of the node sum. Optionally, the sigmoid function is a logistic function defined by the formula: S(x) = e^x/(e^x + 1), where e denotes the base of the natural logarithm. Optionally, training the statistical model in214comprises deriving values for a plurality of wk and w0 values from the input data using methods as known in the art. In some embodiments of the present invention, a statistical model generated by system100is used by a fault-diagnosis system. Reference is made now also toFIG.8, showing a schematic block diagram of an exemplary diagnosis system1000, according to some embodiments of the present invention. In such embodiments, at least one hardware processor1001is connected to at least one non-volatile digital storage1002. At least one storage1002is optionally used to store a statistical model received from system100. Examples of a non-volatile digital storage are a hard disk and non-volatile random access memory (NVRAM). Optionally, at least one hardware processor1001is connected to at least one digital communication network interface1003. Examples of a digital communication network interface are a wireless network interface or an Ethernet interface. Optionally, at least one hardware processor1001sends result digital data representing one or more possible problems to another hardware processor using at least one network interface1003.
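The logistic computation of steps 435 through 441 can be transcribed as follows. The weights and node names are illustrative assumptions.

```python
import math

def logistic_cpd(w_k: dict[str, float], w_0: float,
                 value: dict[str, int]) -> float:
    """Logistic conditional probability distribution per steps 435-441.

    w_k maps each causing parent node to its node weight, w_0 is the
    independent weight, and value gives each parent node's numerical value
    (1 if its Boolean occurrence value is true, otherwise 0).
    """
    # steps 435-437: node terms wk * value, summed into a parent sum
    parent_sum = sum(w_k[node] * value[node] for node in w_k)
    node_sum = parent_sum + w_0                             # step 439
    return math.exp(node_sum) / (math.exp(node_sum) + 1.0)  # step 441: S(x)
```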
Optionally, at least one hardware processor1001receives a statistical model from system100via at least one network interface1003. At least one hardware processor1001is optionally connected to at least one display device1004for the purpose of displaying one or more messages comprising result digital data representing one or more possible problems. An example of a display device is a computer screen. Optionally, at least one hardware processor1001is connected to at least one input device1005for the purpose of receiving digital data comprising binary and textual representations of a plurality of observations and/or resolution digital data comprising binary and textual representations of one or more resolutions of a fault. An example of an input device is a keyboard. Optionally, at least one hardware processor receives the digital data, and additionally or alternatively, the resolution digital data via at least one network interface1003. The digital data may be received by at least one hardware processor1001from a database of a customer support system and include binary and textual representations of a plurality of observations made by a person and reported by the person to the customer support system. For example, the plurality of observations may be related to an electrical appliance. The resolution data is optionally received by at least one hardware processor1001from the database of the customer support system. To diagnose a fault, in some embodiments of the present invention system1000implements the following optional method. Reference is made now also toFIG.9, showing a flowchart schematically representing an optional flow of operations900for fault diagnosis, according to some embodiments of the present invention. In such embodiments, at least one hardware processor receives in901a statistical model generated by system100implementing method200. At least one hardware processor1001optionally receives the statistical model via at least one network interface1003.
In904, at least one hardware processor1001optionally receives a digital data comprising binary and textual representations of a plurality of observations. At least one hardware processor1001optionally receives the digital data via at least one input device1005. Optionally, the digital data is stored in a database, for example in a database of a customer support system. Optionally, at least one hardware processor1001receives the digital data comprising the binary and textual representations of the plurality of observations via at least one network interface1003. Optionally, the binary and textual representations of the plurality of observations comprise structured observation information organized in the identified structure, and alternatively or in addition unstructured free text observation information. In embodiments where system1000is used for diagnosing a fault in an electrical appliance, the plurality of observations is optionally related to an electrical appliance event. Examples of structured observation information are electrical appliance component identification values and error code values. Optionally, in908at least one hardware processor1001applies the statistical model to the digital data to identify one or more possible problems. Reference is now made also toFIG.10, showing a flowchart schematically representing an optional flow of operations950for applying a statistical model, according to some embodiments of the present invention. 
At least one hardware processor1001in951optionally extracts a plurality of structured symptom values, each associated with at least one of the plurality of semantic entities or the plurality of semantic relationships, from the structured observation information, in954optionally extracts a plurality of unstructured symptom values, each associated with at least one of the plurality of semantic entities or the plurality of semantic relationships, from the unstructured free text observation information, and in958optionally generates symptom input data from the plurality of structured symptom values and the plurality of unstructured symptom values. Optionally, at least one hardware processor implements one or more methods similar to method500,600, and/or700in order to extract the plurality of structured symptom values in951, to extract the plurality of unstructured symptom values in954and to generate the symptom input data in958. In960at least one hardware processor1001optionally computes a plurality of probabilities of a plurality of suggested problems from the plurality of possible node values, by inputting the symptom input data to the statistical model and in964optionally identifies some of the plurality of suggested problems as the one or more possible problems, each possible problem optionally associated with a confidence level or probability. Optionally, at least one hardware processor1001identifies an identified amount of suggested problems of the plurality of suggested problems having highest probabilities as the one or more possible problems, for example between 2 and 10 suggested problems having the highest probabilities, such as 4 suggested problems. Reference is made again toFIG.9. In912, at least one hardware processor1001optionally outputs result digital data representing the one or more possible problems, optionally to a user of the diagnosis system, for example a technician of an electrical appliance.
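The selection in steps 960 and 964 can be sketched as taking an identified amount of the highest-probability suggested problems. The problem names and probabilities below are illustrative assumptions.

```python
# Sketch of steps 960-964: given probabilities computed for the plurality
# of suggested problems, identify an identified amount (here 4, within the
# 2-10 range mentioned) having the highest probabilities.
def identify_possible_problems(suggested: dict[str, float],
                               amount: int = 4) -> list[tuple[str, float]]:
    ranked = sorted(suggested.items(), key=lambda item: item[1], reverse=True)
    return ranked[:amount]  # each possible problem with its probability

problems = identify_possible_problems(
    {"DefectiveFuse": 0.61, "BlownLamp": 0.22, "LooseWire": 0.09,
     "BadSwitch": 0.05, "Other": 0.03})
# problems == [("DefectiveFuse", 0.61), ("BlownLamp", 0.22),
#              ("LooseWire", 0.09), ("BadSwitch", 0.05)]
```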
Optionally, at least one hardware processor1001outputs the result digital data representing the one or more possible problems by displaying one or more messages on at least one display device1004. Optionally, at least one hardware processor1001outputs result digital data representing the one or more possible problems by sending at least one message on at least one network interface1003. Optionally, the result digital data representing the one or more problems comprises for each of the one or more possible problems a problem description and a rank value, for example a probability value. Optionally, resolution information of an appliance event related to the observations may be used with the observations to further train the statistical model. In some embodiments, in914at least one hardware processor1001receives resolution digital data comprising binary and textual representations of one or more resolutions of a fault, for example a fault of an electrical appliance. The binary and textual representations of the one or more resolutions optionally comprise structured resolution information organized in the identified structure and alternatively or in addition unstructured resolution free text information. At least one hardware processor1001optionally receives the resolution digital data via at least one network interface1003or via at least one input device1005. At least one hardware processor1001in918optionally extracts a plurality of structured resolution values from the structured resolution information, in920optionally extracts a plurality of unstructured resolution values from the unstructured free text resolution information, and in922optionally generates event input data from the plurality of structured resolution values and the plurality of unstructured resolution values.
Optionally, at least one hardware processor implements one or more methods similar to method500,600, and/or700in order to extract the plurality of structured resolution values in918, to extract the plurality of unstructured resolution values in920and to generate the event input data in922. Optionally, in922at least one hardware processor1001generates the event input data from the plurality of structured symptom values, the plurality of unstructured symptom values, the plurality of structured resolution values, and the plurality of unstructured resolution values. Next, in924, at least one hardware processor1001optionally trains the statistical model by inputting the event input data into the statistical model. Optionally, at least one hardware processor1001trains the statistical model in a plurality of iterations, using a plurality of one or more observations and a plurality of one or more resolutions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. It is expected that during the life of a patent maturing from this application many relevant "possible node values" and parent-child relationships will be developed and the scope of the terms "possible node value" and "parent-child relationship" is intended to include all such new technologies a priori. As used herein the term "about" refers to ±10%.
The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. This term encompasses the terms “consisting of” and “consisting essentially of”. The phrase “consisting essentially of” means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method. As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof. The word “exemplary” is used herein to mean “serving as an example, instance or illustration”. Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word “optionally” is used herein to mean “is provided in some embodiments and not provided in other embodiments”. Any particular embodiment of the invention may include a plurality of “optional” features unless such features conflict. Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. 
For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range. Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween. It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.
To the extent that section headings are used, they should not be construed as necessarily limiting.
11861520

Common reference numerals are used throughout the figures to indicate similar features.

DETAILED DESCRIPTION

Embodiments of the invention are described below by way of example only. These examples represent the suitable modes of putting the invention into practice that are currently known to the Applicant, although they are not the only ways in which this could be achieved. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples. The present disclosure offers a crop monitoring solution that incorporates the use of both historical agricultural data and seasonal image data. The seasonal image data may be obtained from multi-annual satellite imagery and fused into a combined dataset that is used in part to train one or more crop models within a Bayesian framework in providing crop monitoring. The one or more crop models may be machine learning models or may be configured to apply one or more machine learning methods or techniques described herein. The application of the Bayesian framework accounts for seasonality (i.e. pre-season and in-season) in the image data by iteratively applying the results of the crop models (in the pre-season or previous season) as a "prior" to one or more crop states (in the season or following seasons). Each crop state is thereby analysed statistically and aggregately in the Bayesian framework for predicting, in a more accurate and reliable manner, the crop states corresponding to the entirety of the crop cycle, which includes predictions such as planting dates, acreage, crop classification, harvest and yield. The seasonal image data used by the Bayesian framework could be obtained for one or more agricultural fields.
Each agricultural field, referring to a unit of area in which at least one crop is situated, provides the soil for growing such crop or other high-value plant species. This unit of area may be highly managed and often provisioned with artificial nutrients (i.e. fertilized) to the extent exhibited by the seasonal image data. At least one crop species may grow in an agricultural field at any particular time such that, in a given agricultural field, a multitude of crop species may grow in an agricultural season. The seasonal image data herein refer to data with the characteristic of a time series in which the data experiences regular and predictable changes that recur every calendar year. Seasonal image data comprises one or more images or data representations of said one or more images. The raw seasonal image data obtained externally may be processed or pre-processed. The processed seasonal image data may be used as training data for training one or more crop models of the Bayesian framework. The seasonal image data can be obtained from at least one source, i.e. satellite imagery. For example, seasonal image data may be data obtained from a synthetic aperture radar (SAR) satellite and/or an optical imagery satellite. Examples of historical agricultural data and seasonal image data are shown inFIGS.7to10. The seasonal image data for an agricultural field retains certain information on or about the particular crop or species of the crop, which may be used to infer crop states. Historical agricultural data with respect to the same will retain the same or similar information, which corresponds to the seasonal image data in some manner, whether it is for a planting date, classification, harvest, yield, or any other crop state described herein.
A crop state, or equivalently, a particular state of a crop, refers to the condition of the crop characterized by at least one objective criterion associated with the crop upon an event or activities performed for the crop in one or more agricultural seasons when the crop is present on the agricultural field being considered. A crop state may be a crop type, time passed after a crop planting date, a crop yield, a crop acreage, a crop emergence date, a crop harvest, or damage to the crop. A crop type refers to the classification of the crop or plant species. For example, crop classification maps or mapping may result from or be derivative of the data source for agricultural monitoring and acreage reporting. A crop planting date refers to the planting date or a limited range of dates that have been forecasted and/or based on a combination of crop or plant species in a certain field. The date or range of dates is presented with respect to the likelihood that planting will occur on or before such date(s) in the following or predicted agricultural season. For example, the crop planting date forecast may be a 66% probability of planting within 7 days of April 15th for corn in a certain field. A crop yield refers to an estimation of the amount of agricultural production harvested per unit of land area in the monitored agricultural field based on one or more crop states. The estimated yield may be measured in bushels, tons, or pounds per acre, and such measurement may be compared or validated after harvesting has taken place. A crop emergence date refers to the date or a range of dates that has been forecasted for a combination of a crop or plant species and a field, where the crop or plant species first emerges, which can be a critical input to models of crop development and biomass accumulation. The crop emergence date may be the date when the crop established leaves during the vegetative stage.
For example, the crop emergence date may correspond to a 75% probability that the crop emerged between May 23rd and May 26th. A crop harvest date refers to a determination of the date on which the crop is to be harvested in a certain field, where significant biomass for the crop has accumulated concerning the crop type. The crop harvest determination may help validate the other crop states, such as crop planting date, crop emergence date, and crop yield. The damage to the crop refers to harm caused by events that are adverse to the continued growth of the crop. For example, the events may comprise flood, drought, high wind, other environmental factors causing crop stress, or inherent factors such as disease to which the crop may be afflicted. The damage may be presented based on whether such adverse events are likely to occur in the following season or the agricultural season of interest. The agricultural season may be defined for each crop type and extend to a period in which the crop is grown or has been growing. For example, the agricultural season may extend from a first period in which the crop, depending on whether a cover crop exists, is planted, to a second period when the crop has been harvested and is no longer present in the field or replaced by a cover crop. For an agricultural season, whether the crop is present or absent from the agricultural field may be determined iteratively using the Bayesian framework described herein. The Bayesian framework is configured to include the seasonality while capturing attributes in the data that would otherwise be missed. The Bayesian framework refers to one or more statistical-based models compatible with seasonal data or data with time-series characteristics. The framework is used to discover or predict causations, with counterfactual prediction, with respect to the observable data, such as the seasonal image data herein described.
The models may include techniques for time series decomposition, where different state variables, i.e. trend, seasonality, and regression, may be included. The models also include predictors that are selected for regression analysis, where regression results may be combined in turn. Given the exemplary Bayes' theorem, where

P(A|B) = P(A∩B)/P(B),

the implementation of the Bayesian framework may be depicted as follows: the probability of the predictions of each of the models changes (increases or decreases) proportionally with respect to the results of the other models. For example, assume that event A describes the situation that [John Smith's field was planted on May 15th] and that event B describes the situation where [the crop on John Smith's field has emerged on June 7th]; it is understood from the literature that the time it takes this crop to emerge follows a normal distribution with a mean of 21 days and a standard deviation of 7 days. Since it is easier to analyse SAR imagery for emergence than for planting date, the exemplary Bayes' theorem can be used to recalculate the probability that event A occurred, given that the occurrence of event B is "known" (or at least believed to be known); this recalculation (the "prior" in this case) can be computed with respect to the pertinent crop model. Herein, the one or more probabilities computed by the Bayesian framework refer to, or could be described as, the likelihood of an event occurring given a condition reflected by the crop state, where the likelihood may be presented as a percentage. Said one or more probabilities may comprise, or be inferred from, one or more conditional probabilities, marginal probabilities, joint probabilities, etc., used with or as an aspect of the Bayesian framework described herein. The Bayesian framework derives from the seasonal data (or historical data) a prior probability distribution, referred to herein as the prior or prior probability.
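As an illustrative sketch of the above Bayes' theorem example (not part of the disclosed implementation), the posterior probability of each candidate planting date given an observed emergence date can be computed with a normally distributed emergence lag (mean 21 days, standard deviation 7 days); the prior values below are assumptions chosen for illustration only:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, std):
    # Density of a normal distribution, used as the emergence-lag likelihood
    return exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * sqrt(2 * pi))

def planting_posterior(prior, emergence_day, lag_mean=21.0, lag_std=7.0):
    """Posterior P(planting day | emergence day) via Bayes' theorem.

    prior maps a candidate planting day-of-year to its prior probability;
    the likelihood P(emergence | planting) is the normal density of the
    observed emergence lag (literature: mean 21 days, std 7 days).
    """
    unnorm = {d: p * normal_pdf(emergence_day - d, lag_mean, lag_std)
              for d, p in prior.items()}
    total = sum(unnorm.values())
    return {d: v / total for d, v in unnorm.items()}

# Assumed prior over planting days-of-year (May 1st, 8th, 15th);
# emergence observed on day 158 (June 7th)
prior = {121: 0.2, 128: 0.3, 135: 0.5}
posterior = planting_posterior(prior, 158)
```

Because the 23-day lag between a May 15th planting and the June 7th emergence lies closest to the 21-day mean, the posterior mass shifts toward the May 15th hypothesis.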
The Bayesian framework uses the prior probability distribution and a likelihood function, or the joint probability of the seasonal image as a function of the parameters of a crop model, to produce a posterior probability distribution and thereafter provide said one or more probabilities of a crop state. For each crop state, or each herein described crop model, the Bayesian framework may separately designate a density of a random variable whose probability distribution is continuous with respect to the model input. The following figures refer to any of the above concepts in providing any of the herein described agricultural monitoring system, apparatus and method for delivering crop-related forecasting based on one or more crop states in a more accurate and reliable manner, while overcoming the disadvantages of existing solutions, such as missed or ignored attributes during data processing and the at least partial disregard of seasonality in data when deploying sequential machine learning methods. FIG. 1 is a flow diagram illustrating an example process 100 of forecasting crop states based on historical and/or seasonal image data relating to at least one aspect of the crop monitoring system described herein. Process 100 applies a Bayesian framework to predict the conditional likelihood of a crop state in the season taking place, given the data from the previous agricultural season(s), in an iterative manner. A crop state may include, but is not limited to, a crop type, a crop planting date, a crop yield, a crop acreage, a crop emergence date, a crop harvest date, and damage to the crop. Each state may be correlated with another state on which the forecast is based. In step 102, the seasonal image data associated with at least one agricultural field is obtained or received from at least one source. Such source or sources may comprise one or more satellite sources.
For example, the satellite may be a SAR satellite and/or an optical imagery satellite, where multi-annual satellite imagery can be readily obtained or obtained through indirect means. Further, historical agricultural data, such as annotated historical common land unit data, may be used in combination with the seasonal image data during crop model training. The images obtained from the satellite data may be fused or aggregated to provide a combined data set, herein coined as seasonal image data. The seasonal image data may also be used in connection with the historical agricultural data. In step 104, the seasonal image data is processed using a Bayesian framework. The Bayesian framework comprises one or more (ML or statistical) models described herein, where the models are configured to predict, based on the seasonal image data, one or more probabilities indicative of the crop states. The models may be selected from, for example, a crop planting date prediction model, a crop yield prediction model, a crop acreage model, a cover crop model, a crop emergence date model, a crop harvest model, and a crop damage model. The models of the Bayesian framework may be trained separately or otherwise dependently using the seasonal image data annotated with respect to at least one crop from said at least one agricultural field. More specifically, each crop model is adapted to learn from a subset of the seasonal image data, wherein the subset comprises images outputted from a data source that is different from the data source used in another crop model when more than one crop model is being used to determine said at least one crop state. Each crop model may further comprise a base model conditioned on at least two crop states in a previous agricultural season. The base model is representative of a prior probability distribution used for predicting the state of the crop in the subsequent agricultural season or in the next iteration of recalibrating the same or additional crop models.
More specifically, the Bayesian framework may be configured to determine a crop state in a previous agricultural season using seasonal image data of the previous agricultural season, and to recalibrate said one or more probabilities based on this configuration, adapting the Bayesian framework to the outputs of said one or more crop models in accordance with step 106. Further, the received seasonal image data may be processed with respect to each pixel (or related pixels) of an image in the data, corresponding to a crop planted in said at least one agricultural field. Furthermore, at least one base model is trained using annotated historical common land unit data in order to predict said at least two crop states in the agricultural season that follows a previous agricultural season. Using the seasonal image data obtained, said at least two crop states in the agricultural season are calculated, at least in part, where the calculation may be performed initially according to the base model and thereafter configured to generate the forecast of said at least one crop state based on further seasonal image data. In step 106, at least one crop model of the Bayesian framework is updated based on said one or more probabilities, wherein said one or more probabilities are adapted to be the outputs of said one or more crop models. More specifically, in relation to step 104 and/or step 106, the Bayesian framework is configured to: classify, based on a crop type, at least one crop from at least one subset of the seasonal image data; determine said one or more probabilities for each classified crop; and update the Bayesian framework based on the classification in relation to said one or more probabilities. In step 108, a forecast of said at least one crop state is outputted based on said one or more probabilities as described herein.
The Bayesian framework and its underlying crop models are used to predict at least one crop state in a following agricultural season with respect to said one or more probabilities. The prediction is based on the seasonal image data from at least one agricultural season, where events captured by the seasonal image data may comprise that the crop will be planted on or before the crop planting date. Further, each crop model of the Bayesian framework may be configured to generate, at least in part, the forecast of said at least one crop state in accordance with said one or more probabilities. A particular crop state prediction may be generated as output, where the crop state prediction may be associated with said at least one agricultural field in relation to the seasonal image data from at least one agricultural season. More specifically, the Bayesian framework and its underlying crop models are applicable for: predicting a crop yield result using at least one crop yield prediction model, wherein the crop yield prediction model is configured to characterize growth of at least one crop in order to provide the crop yield result; and/or predicting an acreage estimate of a crop in said at least one agricultural field, wherein the acreage estimate is generated by a crop acreage model configured to characterize growth of the crop in said at least one agricultural field, and calculating, based on the growth, the acreage estimate of the crop; and/or predicting a cover crop type of a crop using a cover crop model configured to characterize growth of the crop, and determining, based on the growth, the cover crop type from one or more possible types of cover crops; and/or predicting a date of crop emergence using a crop emergence date model, wherein the date of crop emergence corresponds to a probability indicative of a potential emergence event that will occur, or will not occur, on or before said date; and/or predicting a crop harvest using a crop harvest model configured to determine when
a harvest event needs to take place in a following agricultural season based on a probability indicative of a crop state; and/or predicting damage to a crop using a crop damage model configured to identify, based on a crop state, whether damage to the crop has taken place or will take place. The process of crop monitoring may also include, with respect to the Bayesian framework, a step of validating each crop model based on the output of at least one other crop model, and updating at least one model in the Bayesian framework based on the validation. The other crop model may be constructed using actual data collected during the season, or based on another correlated crop model in the framework. For example, the validation may be performed because seasonal image data (specifically SAR satellite data) contains both amplitude and phase. The amplitude and phase may be used to compute the following exemplary coherence coefficient:

γ = |ΣΣ I₁I₂*| / √(ΣΣ|I₁|² · ΣΣ|I₂|²),

where I₁, I₂ represent two (complex) SAR images taken from the same angle at different times, an asterisk represents the complex conjugate, and the double sums run over the pixels of an estimation window. By definition the coherence coefficient is normalized between 0 and 1. The coherence represents the change between the images per pixel, i.e. significant changes are represented by black pixels (values close to 0) while unchanged areas are represented by white pixels (values close to 1). Further, various environmental factors affect the amplitude and phase. These factors include weather/climate events such as rain, wind, and thermal events, which cause damage to the crop. These factors may be modelled by one or more crop models described herein and similarly validated using one or more ML methods or techniques described herein.
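A minimal numerical sketch of the coherence coefficient above, computed over a whole window of two co-registered complex SAR images (a per-pixel coherence map would apply the same sums over a sliding local window); the images below are synthetic test data, not real SAR acquisitions:

```python
import numpy as np

def coherence(i1: np.ndarray, i2: np.ndarray) -> float:
    """Coherence of two co-registered complex SAR image windows.

    gamma = |sum(I1 * conj(I2))| / sqrt(sum(|I1|^2) * sum(|I2|^2)),
    normalized between 0 and 1 by the Cauchy-Schwarz inequality.
    """
    num = np.abs(np.sum(i1 * np.conj(i2)))
    den = np.sqrt(np.sum(np.abs(i1) ** 2) * np.sum(np.abs(i2) ** 2))
    return float(num / den)

rng = np.random.default_rng(0)
scene = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
other = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
unchanged = coherence(scene, scene)  # identical acquisitions -> exactly 1
changed = coherence(scene, other)    # decorrelated scene -> much lower
```

An unchanged area yields a value close to 1, while a decorrelated (changed) area yields a value close to 0, matching the black/white interpretation described above.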
FIG. 2 is a flow diagram illustrating an example process 200 of predicting the probabilities of a crop state for an agricultural season conditioned on states of the crop in the seasons prior to that agricultural season, according to the one or more described aspects of the present disclosure. In this process 200, more specifically, the Bayesian framework is updated iteratively using further seasonal image data. The update progresses in relation to the weighted confidence information associated with the underlying crop models. The update may be constrained by the weighted confidence predictions of each crop model. The herein described weighted confidence information refers to data representative of model parameters or weights corresponding to relevant model features of a particular crop model in the framework. The weighted confidence predictions are the resultant output, which may be presented in the form of said one or more probabilities described herein. In step 202, the Bayesian framework, or a module associated with the framework, is configured to receive one or more set(s) of weighted confidence predictions, where the weighted confidence predictions are associated with at least one probability generated from each crop model. In step 204, whether the received one or more set(s) of weighted confidence predictions fall within a predetermined range is determined. The predetermined range may be one or more thresholds which the model applies in order to assess the weighted confidence predictions as part of the forecast. In one example, the predetermined range may represent an objective criterion associated with the crop upon an event or activities performed with respect to the crop in one or more agricultural seasons. In another example, the predetermined range may correspond to one or more sets of rules or filters. In yet another example, the predetermined range may correspond to a set of dynamic rules amongst at least two crop models with respect to the Bayesian framework.
In further examples, if the received information is not within the predetermined range, a separate set of weighted confidence information, different from the received one or more set(s) of weighted confidence predictions, may be determined, such that said one or more probabilities are predicted based on this separate set of weighted confidence information in addition to the received one or more set(s) of weighted confidence information. Furthermore, each crop model of the Bayesian framework is configured to receive the weighted confidence information from at least one other crop model. Said at least one other crop model may be from a different agricultural field captured in the seasonal image data. In step 206, said one or more probabilities are predicted based on the determination in step 204. Said one or more probabilities form part of the crop forecasting using the Bayesian framework. Additionally or optionally, a dynamic rule set amongst at least two crop models of the Bayesian framework may be generated based on said at least one crop state and the weighted confidence information associated with said at least two crop models. The dynamic rule set may further constrain the Bayesian framework based on the weighted confidence information in order to optimize the predictions. Further, said at least two crop models can be combined to form an ensemble model based on an order provided by the dynamic rule set, wherein the order is determined based on the output of each of said at least two crop models. The ensemble model may be further updated based on the seasonal image data until the ensemble model is able to predict a crop state in the following agricultural season within at least one confidence interval in relation to the dynamic rule set. As part of the Bayesian framework, the ensemble model provides a forecast of said at least two crop states. Finally, the dynamic rule set may be updated based on the forecast.
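Steps 204 and 206 can be sketched as follows; the range bounds and the element-wise averaging used as the fallback combination rule are illustrative assumptions, not values taken from the disclosure:

```python
def within_range(predictions, low=0.05, high=0.95):
    # Step 204: do all weighted confidence predictions fall in the range?
    return all(low <= p <= high for p in predictions)

def select_confidence_set(received, separate, low=0.05, high=0.95):
    """If the received set is out of range, determine a separate set of
    weighted confidence information and predict from both sets (here
    combined by a simple element-wise average, an assumed rule)."""
    if within_range(received, low, high):
        return list(received)
    return [(a + b) / 2 for a, b in zip(received, separate)]
```

An in-range set passes through untouched; an out-of-range set is tempered by the separate set before the probabilities are predicted.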
FIG. 3 is a flow diagram illustrating an example process 300 of processing the seasonal image data to be used with the Bayesian framework. The seasonal data may be obtained from multiple sources and fused to form a combined dataset. In obtaining the combined dataset, the data is initially pre-processed according to, for example, the following steps. In step 302, the seasonal image data is received from at least one satellite, i.e. a SAR satellite and/or an optical imagery satellite. The seasonal image data may be imagery of the agricultural field. In step 304, the received seasonal image data is pre-processed based on one or more parameters, where the data may be converted, normalized, augmented, segmented, and standardized based on said one or more parameters. The parameters may differ based on how the data will be used, whether it is purposed for validation, for training, or during inference. Similarly, the parameters may be obtained from one or more crop models to which the data may serve as input. In step 306, as an option, the pre-processed dataset may be further selected based on an input from a user or a generated system input, where the data is filtered based on the input. For example, the input may comprise instructions to remove a specific section of the data pertaining to a unit of area. In step 308, the selected image data from the previous steps may be vectorized and further processed using at least one of the ML model(s) described herein. The models may comprise a neural network of at least one layer. The neural network may be pre-trained using similar data in order to make a threshold assessment 310 of whether imaging information is in the final selected image data. Provided that the threshold assessment yields a positive result, the final selected image data is further processed by the Bayesian framework 312. Otherwise, the process will acquire additional seasonal image data, and steps 302 to 308 are iterated until assessment 310 is positive.
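The pre-processing steps 302 to 310 can be sketched as follows; the min-max normalization, the slice-based region filter, and the variance test standing in for the pre-trained network's threshold assessment are all illustrative assumptions:

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    # Step 304 sketch: convert to float and min-max normalize to [0, 1]
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def filter_region(image, row_slice, col_slice):
    # Step 306 sketch: keep only the user-selected unit of area
    return image[row_slice, col_slice]

def vectorize(image: np.ndarray) -> np.ndarray:
    # Step 308 sketch: flatten the selected image for the ML model input
    return image.reshape(-1)

def has_imaging_information(vec, threshold=0.01):
    """Step 310 sketch: a stand-in threshold assessment (variance test) of
    whether imaging information survives in the final selected data; the
    disclosure uses a pre-trained neural network for this assessment."""
    return float(np.var(vec)) > threshold
```

If `has_imaging_information` returns False, the pipeline would loop back and acquire additional seasonal image data, matching the iteration of steps 302 to 308.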
FIGS. 4A-4B are schematic diagrams illustrating an example of a system 400 for monitoring crop growth in an agricultural field. The system may comprise one or more modules adapted to forecast one or more crop states based on seasonal image data obtained from at least one source. The modules are configured to carry out any one or more of the steps described in the present disclosure. The modules of the system may be configured to implement the method steps described according to FIGS. 1 to 3 as one or more aspects of the invention. For example, said one or more modules may receive the seasonal image data comprising one or more images associated with at least one crop type, where the seasonal image data, when obtained from two or more sources, can be combined using one or more unifying algorithms 420 configured to fuse said one or more images for processing by a Bayesian framework. In both the pre-season 410 and during the season 430, the unifying algorithm(s) 420 may receive input SAR 414, 434 and optical imagery 416, 436. In the pre-season 410, the output crop state prediction(s) may relate to the cover crop 424a, flood likelihood 424b, and vegetation state 424c. During the season, the output crop state prediction(s) may also relate to vegetation 444a, crop acreage 444b, crop type 444c, emergence date 444d, and planting date 444e. As shown, it is understood that the pre-season seasonal image data cannot be used directly to forecast crop type and planting date during the season. Instead, the corresponding seasonal image data from satellite imagery may only be used to infer a prior probability distribution (or prior) for the forecast model 518b, based on historical (agricultural) data 412, such as data of cover crops or floods occurring before the planting season (or in the previous seasons). The historical data 412 is provided to train one or more crop models, for example, the crop forecast model 518b.
The crop model(s) therefore comprise at least one base model associated with said at least one type of species or crop. The base model, for example, may be associated with either the planting date model 518a or the crop forecast model 518b, or correspond to both models in a correlated manner with respect to the historical data 412. In this case, no flooding or cover crops were detected in the pre-season when the seasonal image data is considered, and these observations are passed as the "prior" to the crop forecast and planting date forecast models, updating the Bayesian framework. The exemplary crop models shown for the pre-season respectively provide a prediction of two crop states: a planting date and a crop forecast/type. The trained crop model(s) of the Bayesian framework is configured to predict, based on the further seasonal image data received during the season, one or more probabilities indicative of at least one crop state in the season. For the current season, or the following seasons, the trained crop model(s) can be or will be updated based on said one or more probabilities provided by the Bayesian framework with respect to the crop states of the previous season. The output of the Bayesian framework may be a forecast of said at least one crop state based on said one or more probabilities indicative of the crop states during the season. Further, the crop model(s) prediction during the season may be validated by a separate crop validation model 438b. The crop validation model 438b may be constructed using actual data collected during the planting season. The validation may be performed statistically using the amplitude and phase in the data.
The validation results, validated crop type 442a and validated planting date 442b, may be used to update the parameters of the crop model(s). FIGS. 5A-5C are schematic diagrams illustrating an example of another system 500 for monitoring crop growth in an agricultural field, where said other system 500 extends additionally beyond said system 400 according to FIGS. 4A-4B. Unlike said system 400 of FIGS. 4A-4B, said other system 500 of FIGS. 5A-5C illustrates the present season in two stages. The same exemplary data sources used by said system 400 could be used with said other system 500, comprising historical agricultural data 512 in conjunction with seasonal image data from SAR imagery 514 and optical imagery 516. Said other system 500 may similarly be configured to implement any one or more steps described herein. The first stage following the pre-season 510 is the early season 530. In relation to the early season 530, it is understood that the seasonal image data from satellite imagery, unified (applying the satellite unifying algorithm 520) for the pre-season 510, cannot be utilized to forecast the crop type 562a in the late season 550 directly. Once the seasonal image data are used to infer the planting date in the pre-season, however, the same data may serve as the "prior" for the Bayesian framework in order to calculate the crop type in the late season 550 as well as other crop states in the early and late seasons 530, 550, i.e. vegetation 544a, crop acreage 544b, planting date 544c, and emergence date 562c may also be predicted based on the "prior". For example, in the early season 530, the planting date predictions 554c may serve as the "prior" for the Bayesian framework to validate one or more crop states, i.e. the planting date 552a forecast 552b from the pre-season.
In the early season, using new SAR imagery data 534 and optical imagery data 536, the planting date 544c predicted in terms of probabilities can be 1) a 75% probability that planting occurred between May 5th and May 8th; 2) a 12% probability that planting occurred between May 2nd and May 5th; and 3) a 13% probability that planting occurred on another date or not at all. In retrospect, the prediction in the pre-season for planting can be 1) a 66% probability of planting within 7 days of April 15th for corn; 2) a 78% probability of planting within 7 days of May 1st for soy; 3) a 75% probability that soy will be planted next season; 4) a 24% probability that corn will be planted next season; and 5) a 1% probability of another crop or no crop next season. Since these predictions do not correlate, the model in the pre-season may be invalidated and subsequently updated using the results obtained based on the seasonal image data. The crop forecast 542 calculated from the updated and validated model 538 could be 1) a 66% probability that soy was planted; 2) a 19% probability that corn was planted; and 3) a 15% probability that another crop or no crop was planted. This validation process, inherent to the Bayesian framework, effectively increases the accuracy and reliability of the crop state predictions. The updated crop forecast 542 may in turn be used as the "prior" in the calculation of the late season 550. In the late season 550, it is understood that new SAR 544 and optic data 554 can be used to infer, for example, the emergence date 562c and crop type 562a with high confidence levels compared to other crop state inferences. Because of the high confidence levels, these inferences may be used to validate or invalidate earlier forecasts (crop state predictions from the pre-season and early season), such as validating the planting date 564 to produce the validated planting date 566.
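The prior-to-posterior recalculation of the crop forecast described above can be sketched as follows; the pre-season prior matches the example (75% soy, 24% corn, 1% other), while the per-crop likelihoods of the in-season observations are assumed values chosen for illustration only:

```python
def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Posterior crop-type forecast: the pre-season prior times the
    likelihood of the in-season observations under each crop hypothesis,
    renormalized so the posterior probabilities sum to 1."""
    unnorm = {crop: prior[crop] * likelihood[crop] for crop in prior}
    total = sum(unnorm.values())
    return {crop: v / total for crop, v in unnorm.items()}

# Pre-season prior from the example; likelihoods are assumed values
prior = {"soy": 0.75, "corn": 0.24, "other": 0.01}
likelihood = {"soy": 0.5, "corn": 0.45, "other": 0.9}
posterior = bayes_update(prior, likelihood)
```

Each in-season update of this kind can in turn serve as the "prior" for the next stage, matching the pre-season, early-season, and late-season chain described above.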
For example, the predictions from the early season may be 1) a 66% probability that soy was planted; 2) a 19% probability that corn was planted; and 3) a 15% probability that another crop or no crop was planted. These predictions may serve as the "prior" to the Bayesian framework. Based on the new SAR 544 and optic data 554, the crop model(s) underlying the framework may predict with high confidence (over 90 percent) that the type of the crop is soy in a 50-acre crop acreage, and that the emergence date is between May 20th and May 23rd. These predictions may be iteratively used to validate the crop model(s). FIG. 6 is a schematic diagram illustrating an example process 600 for modelling a plurality of agricultural fields using a Bayesian framework configured to determine one or more probabilities of a crop state given a predetermined probability distribution associated with the crop state estimated from one or more previous agricultural seasons. As shown, the Bayesian framework comprises crop model(s) CM11, CM12, CM21, and CM22 configured to forecast a plurality of crop states (not shown in the figure), where each crop state is predicted using a crop model trained using seasonal image data from a previous agricultural season. Accordingly, the Bayesian framework is iteratively updated using further seasonal image data obtained from one or more agricultural seasons following the previous agricultural season. In the figure, the crop model(s) may be configured to model the respective agricultural fields F11 602, F12 604, F21 606, and F22 608. Each crop model may also take into consideration the adjacent fields. For example, CM11 may also model the crop state in other agricultural fields F12 604 and F21 606 in addition to F11 such that, when multiple crops are planted, at least one other crop model can be from a different agricultural field.
The crop model(s) CM11, CM12, CM21 and CM22 may comprise one or more set(s) of weighted confidence information associated with each base model trained on historical data, where the crop model is associated with the same or a different agricultural field. The process 600 shown in FIG. 6 may be incorporated with, or serve as part of, a further or optional step according to one or more of the other processes 100, 200, 300 described herein. Accordingly, process 600 may also be part of, or used in or with, the crop monitoring system(s) described herein. FIG. 7 is a pictorial diagram illustrating an example of the seasonal image data obtained from at least one source, where the seasonal image serves as input to the agricultural monitoring system, apparatus and method(s) described herein. As shown, the seasonal image data are collected from satellite sources. The seasonal image data may be further processed to generate spectra of different crop types; more specifically, the seasonal image data may be processed with respect to each pixel of an image corresponding to the different crop types, producing a spectrogram of a single-pixel representation of an image corresponding to a crop planted in said at least one agricultural field. The resultant spectrogram may be used to train the crop model(s) and predict, based on the training, one or more crop states associated with a crop type. FIGS. 8A to 8C are pictorial diagrams illustrating an example of training a machine learning model configured to process the seasonal image data. The machine learning model may be used to pre-process seasonal image data and/or historical agricultural data in order to characterize features (capturing important attributes of the data that tend to be missed or ignored) within the data, such that the ML model may be deployed as the crop forecast model in the crop monitoring system according to FIGS. 4 and 5.
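The per-pixel spectrogram extraction described with reference to FIG. 7 can be sketched as follows; the (time, bands, height, width) layout of the seasonal image stack is an assumed convention, not one specified by the disclosure:

```python
import numpy as np

def pixel_spectrogram(stack: np.ndarray, row: int, col: int) -> np.ndarray:
    """Extract the spectrogram of a single pixel from a seasonal image stack.

    stack has shape (time, bands, height, width); the result has shape
    (bands, time): one spectral trajectory per sensor channel for the
    chosen pixel, usable as a training sample for a crop model.
    """
    return stack[:, :, row, col].T

# 6 acquisition dates, 4 sensor bands, a 32x32-pixel field
stack = np.zeros((6, 4, 32, 32))
spec = pixel_spectrogram(stack, 10, 20)
```

Each such per-pixel spectrogram is the single-pixel representation that the crop model(s) are trained on.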
Further, the ML model may be used in place of, or to replace, one or more crop models and may be used according to any of the systems and/or methods described herein. In one example, the ML model may comprise a convolutional neural network (CNN) to process image data, spanning all sensor channels and across the time axis. The CNN may be arranged to process segments of the data, i.e. data segmented into sections 1 to N. In this way, the network serves as a roaming detector along the time axis of the data. Further, the CNN may comprise causal convolutional layer(s). The CNN may be used in conjunction with a recurrent neural network (RNN), a Long Short-Term Memory (LSTM), and/or a Gated Recurrent Unit (GRU) to provide the requisite output. The output may be used to update the Bayesian framework for further iteration(s) of the forecast. As part of the Bayesian framework, a 1-D convolutional neural network (1DCNN) and a 2-D convolutional neural network (2DCNN) are shown in FIG. 8B, where either network may be used to process the image data of an agricultural field. It is understood that raw image data may be obtained from one or more satellite sources, and the data may be pre-processed, i.e. segmented and/or unified (or aggregated). Either the 1DCNN or the 2DCNN may serve as a roaming detector traversing the pre-processed image data, as shown, with respect to the time axis (of the crop seasons) and the sensor channels. The processing may be done in a pixel-wise manner that enables more accurate and reliable performance of, for example, crop classification via a crop model in the Bayesian framework to identify the crop and/or the state of at least one crop (i.e. crop planting date) based on each pixel from the segmented/unified images of the crop in the agricultural field. The ML model(s) may also comprise one or more machine learning methods or techniques.
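A minimal sketch of the "roaming detector" idea described above: a single 1-D convolutional filter spanning all sensor channels slides along the time axis of a per-pixel spectrogram, with a ReLU keeping only the active features. A real model would learn many such filters within the CNN/RNN combination mentioned; this is an illustration of the traversal only:

```python
import numpy as np

def roam_1d(spectrogram: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 1-D convolution of a (channels, time) spectrogram with a
    (channels, width) kernel that spans all sensor channels and slides
    along the time axis; a ReLU maps negative activations to zero."""
    _, t = spectrogram.shape
    _, width = kernel.shape
    out = np.empty(t - width + 1)
    for i in range(t - width + 1):
        # inner product of the kernel with the current time window
        out[i] = np.sum(spectrogram[:, i:i + width] * kernel)
    return np.maximum(out, 0.0)
```

The output has one activation per valid time position, which is what lets the detector "roam" over the crop seasons.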
Examples of these ML methods or techniques may include or be based on, by way of example only and not limited to, one or more of: any ML technique or algorithm/method that can be used to generate a trained model based on labelled and/or unlabelled training datasets; one or more supervised ML techniques; semi-supervised ML techniques; unsupervised ML techniques; linear and/or non-linear ML techniques; ML techniques associated with classification; ML techniques associated with regression and the like and/or combinations thereof. Some examples of ML techniques/model structures may include or be based on, by way of example only and not limited to, one or more of: active learning, multitask learning, transfer learning, neural message passing, one-shot learning, dimensionality reduction, decision tree learning, association rule learning, similarity learning, data mining algorithms/methods, artificial neural networks (NNs, i.e. a Convolutional Neural Network and/or Recurrent Neural Networks as shown in FIG. 8B), autoencoder/decoder structures, deep NNs, deep learning, deep learning ANNs, inductive logic programming, support vector machines (SVMs), sparse dictionary learning, clustering, Bayesian networks, types of reinforcement learning, representation learning, similarity and metric learning, genetic algorithms, rule-based machine learning, learning classifier systems, and/or one or more combinations thereof and the like. During processing, or when being processed by the above-described ML method(s) or technique(s), the seasonal image data and/or historical agricultural data used according to any of the preceding steps may be pre-processed and vectorized for further processing using the same or different ML method(s) or technique(s).
This pre-processing and vectorization provides added flexibility in switching between various machine learning methods or techniques. For example, a resultant spectrogram may be obtained during pre-processing using unifying algorithms. The resultant spectrogram serves as input to the neural network for providing a forecast prediction. The network may be trained previously using data arranged in a similar manner from a previous season. Based on the resultant spectrogram, the network is enabled to make a threshold determination of whether the crop state information, or any of the information of interest, is captured in the selected image data. More specifically, the CNN puts the spectrogram input through a set of convolutional filters, carrying forward only the active features by mapping negative values to zero and maintaining only the positive values in the spectrogram input. The CNN downsamples the remaining spectrogram data by simplifying the output of the network, reducing the number of parameters. The output of the CNN may be in the form of one or more vectors corresponding to the prediction made, or in the desired format for further processing by a second ML model, or in this case, an RNN. The output from the RNN may be further processed in relation to the Bayesian framework. FIG. 9 is a pictorial diagram illustrating an example of pixel-wise processing of the seasonal image obtained from a satellite source of an agricultural field, where the seasonal image is presented in different formats for further processing. The seasonal image may be segmented when processed by, for example, a crop model in the Bayesian framework to correctly identify the crop and/or the state of at least one crop (i.e. crop planting date) based on each pixel from the segmented images of the crop, to the extent that the format is maintained. The seasonal images may also be combined using one or more unifying algorithms configured to fuse said one or more images for processing by the Bayesian framework.
FIG. 10 is a pictorial diagram illustrating an example of segmented seasonal image data for a particular crop state. A gradient representative of crop damage is shown where the segmented image captures information on crop damage. Based on other data and models described herein, this information may be used to train a crop damage model to predict the level or degree of damage sustained by the crop. The damage may be caused by various environmental factors such as high wind, floods, and drought. The damage may also be caused by problems inherent to the crop type, such as disease, insect damage, inadequate nutrition, or compaction. The degree of damage is captured from further analysis of the seasonal image data. FIG. 11 is a block diagram illustrating an example computing apparatus/system 1100 that may be used to implement one or more aspects of the invention and any modifications thereof, and/or as described herein with reference to FIGS. 1 to 6. The computing apparatus/system 1100 includes one or more processor unit(s) 1102, an input/output unit 1104, a communications unit/interface 1106, and a memory unit 1108, in which the one or more processor unit(s) 1102 are connected to the input/output unit 1104, the communications unit/interface 1106, and the memory unit 1108. In some embodiments, the computing apparatus/system 1100 may be a server, or one or more servers networked together. In some embodiments, the computing apparatus/system 1100 may be a computer or supercomputer/processing facility or hardware/software suitable for processing or performing one or more aspects of the system(s), apparatus, method(s), and/or process(es), combinations thereof, modifications thereof, and/or as described with reference to any of the figures.
The communications interface 1106 may connect the computing apparatus/system 1100, via a communication network, with one or more services, devices, server system(s), cloud-based platforms, and/or systems for implementing subject-matter databases and/or knowledge graphs for implementing the invention as described herein. The memory unit 1108 may store one or more program instructions, code or components such as, by way of example only but not limited to, an operating system and/or code/component(s) associated with the process(es)/method(s) as described with reference to FIGS. 1 to 3, additional data, applications, application firmware/software and/or further program instructions, code and/or components associated with implementing the functionality and/or one or more function(s) or functionality associated with one or more of the method(s) and/or process(es) of the device, service and/or server(s) hosting the process(es)/method(s)/system(s), apparatus, mechanisms and/or system(s)/platforms/architectures for implementing the invention as described herein. The embodiments, examples, and aspects of the invention as described above, such as the process(es), method(s), and system(s), may be implemented on and/or comprise one or more cloud platforms, one or more server(s) or computing system(s) or device(s). A server may comprise a single server or a network of servers, and the cloud platform may include a plurality of servers or a network of servers. In some examples, the functionality of the server and/or cloud platform may be provided by a network of servers distributed across a geographical area, such as a worldwide distributed network of servers, and a user may be connected to an appropriate one of the network of servers based upon a user location and the like. Further, it is understood that at least the following aspects of the invention may be combined with aspects of any of the other examples, or with any examples of the optional features described herein.
In one aspect is a computer-implemented method of forecasting and/or analyzing crop states based on at least one data source, the method comprising: receiving seasonal image data from at least one source, wherein the seasonal image data is associated with at least one agricultural field; processing the seasonal image data using a Bayesian framework, wherein the Bayesian framework comprises one or more crop models configured to predict, based on the seasonal image data, one or more probabilities indicative of at least one crop state; updating at least one crop model of the Bayesian framework based on said one or more probabilities; and outputting a forecast of said at least one crop state based on said one or more probabilities. In another aspect is a system for monitoring crop growth in an agricultural field, the system comprising: one or more modules adapted to forecast one or more crop states based on seasonal image data obtained from at least one source, wherein said one or more modules are configured to: receive the seasonal image data, wherein the seasonal image data comprises one or more images associated with at least one crop type, and wherein the seasonal image data, when obtained from two or more sources, can be combined using one or more unifying algorithms configured to fuse said one or more images for processing by a Bayesian framework, wherein the Bayesian framework comprises one or more crop models configured to predict, based on the received seasonal image data, one or more probabilities indicative of at least one crop state; update at least one crop model of the Bayesian framework based on said one or more probabilities, wherein said at least one crop model comprises a base model associated with said at least one crop type; and output a forecast of said at least one crop state based on said one or more probabilities.
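The Bayesian update at the heart of the framework described above can be sketched as follows. This is a minimal illustration assuming discrete crop states; the prior and likelihood values are hypothetical stand-ins for a previous-season model and the image-derived evidence, not the claimed crop models.

```python
def bayes_update(prior, likelihood):
    # Posterior P(state | evidence) is proportional to
    # P(evidence | state) * P(state), normalized over all states.
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Hypothetical prior over crop states estimated from the previous season.
prior = {"corn": 0.5, "soy": 0.3, "wheat": 0.2}
# Hypothetical per-state likelihoods of the observed seasonal image data.
likelihood = {"corn": 0.8, "soy": 0.3, "wheat": 0.1}

posterior = bayes_update(prior, likelihood)
forecast = max(posterior, key=posterior.get)  # most probable crop state
```

The posterior would then serve both as the output forecast and as the updated prior when further seasonal image data arrives.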
In yet another aspect is a Bayesian framework for determining one or more probabilities of a crop state given a predetermined probability distribution associated with the crop state estimated from one or more previous agricultural seasons, the framework comprising: one or more crop models configured to forecast a plurality of crop states, wherein each crop state is predicted using a crop model trained using seasonal image data from a previous agricultural season; and wherein the Bayesian framework is iteratively updated with respect to further seasonal image data obtained from one or more agricultural seasons following the previous agricultural season. In yet another aspect is a computer-readable medium comprising data or instruction code which, when executed on a processor, causes the processor to perform a method according to any one of the above aspects. In yet another aspect is an apparatus comprising a processor unit, a memory unit, and a communications interface, the processor unit connected to the memory unit and the communications interface, wherein the apparatus is adapted to perform a process according to any one of the above aspects. As an option, said one or more crop models comprise at least one base model conditioned on at least two crop states in a previous agricultural season. As an option, said at least one base model is trained using annotated historical common land unit data, in order to predict said at least two crop states in an agricultural season following the previous agricultural season. As an option, said at least two crop states in the agricultural season are calculated at least in part from said seasonal image data. As an option, said at least one base model is configured to generate the forecast of said at least one crop state based on said seasonal image data. As an option, each crop state is a crop type, a crop planting date, a crop yield, a crop acreage, a crop emergence date, a crop harvest date, or a damage to crop.
As an option, the seasonal image data is processed with respect to each pixel of an image corresponding to a crop planted in said at least one agricultural field. As an option, further comprising: configuring the Bayesian framework to model a crop state in a previous agricultural season using seasonal image data of the previous agricultural season; and recalibrating said one or more probabilities based on the configured Bayesian framework, wherein said one or more probabilities are adapted to outputs of said one or more crop models. As an option, the Bayesian framework is configured to: classify, based on a crop type, at least one crop from at least one subset of the seasonal image data; determine said one or more probabilities for each classified crop; and update the Bayesian framework based on the classification in relation to said one or more probabilities. As an option, said one or more crop models comprise a crop planting date prediction model, a crop yield prediction model, a crop acreage model, a cover crop model, a crop emergence date model, a crop harvest model, and a crop damage model. As an option, each crop model is configured to generate, based on said one or more probabilities, a crop state prediction associated with said at least one agricultural field in relation to the seasonal image data from at least one agricultural season. As an option, each crop model is trained using the seasonal image data annotated with respect to at least one crop from said at least one agricultural field. As an option, each crop model is trained using a labelled training dataset comprising historical planting information annotated with respect to at least one crop from said at least one agricultural field.
As an option, each crop model is adapted to learn from a subset of seasonal image data, wherein the subset comprises images outputted from a data source that is different to the data source used in another crop model when more than one crop model is being used to determine said at least one crop state. As an option, further comprising: predicting at least one crop state in a following agricultural season with respect to said one or more probabilities based on said seasonal image data from at least one agricultural season. As an option, further comprising: predicting a crop planting date using a crop planting date prediction model, wherein the crop planting date corresponds to a probability indicative of a planting event of the crop. As an option, the planting event comprises that the crop will be planted on or before the crop planting date. As an option, further comprising: predicting a crop yield result using at least one crop yield prediction model, wherein the crop yield prediction model is configured to characterize growth of at least one crop in order to provide the crop yield result. As an option, further comprising: predicting an acreage estimate of a crop in said at least one agricultural field, wherein the acreage estimate is generated by a crop acreage model configured to characterize growth of the crop in said at least one agricultural field, and calculating, based on the growth, the acreage estimate of the crop. As an option, further comprising: predicting a cover crop type of a crop using a cover crop model configured to characterize growth of the crop; and determining, based on the growth, the cover crop type from one or more possible types of cover crops. As an option, further comprising: predicting a date of crop emergence using a crop emergence date model, wherein the date of crop emergence corresponds to a probability indicative of a potential emergence event that will occur, or will not occur, on or before said date.
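A probability indicative of a planting (or emergence) event occurring "on or before" a date can be read as a cumulative probability over candidate dates. The sketch below, with hypothetical per-day probabilities, returns the earliest date at which that cumulative probability reaches a chosen confidence level; it is an illustrative reading, not the claimed prediction model.

```python
from datetime import date, timedelta

def date_on_or_before(day_probs, start, confidence):
    # Return the earliest date d with P(event on or before d) >= confidence,
    # together with the cumulative probability at that date.
    # day_probs: per-day event probabilities starting at `start`
    # (hypothetical values; a real model would emit these).
    cumulative = 0.0
    for offset, p in enumerate(day_probs):
        cumulative += p
        if cumulative >= confidence:
            return start + timedelta(days=offset), cumulative
    return None, cumulative

day_probs = [0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05]  # hypothetical
planting_date, prob = date_on_or_before(day_probs, date(2024, 4, 20), 0.6)
```

The same shape of calculation would apply to an emergence date model, with the per-day probabilities produced from the seasonal image data.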
As an option, further comprising: predicting a crop harvest using a crop harvest model configured to determine when a harvest event needs to take place in a following agricultural season based on a probability indicative of a crop state. As an option, further comprising: predicting a damage to crop using a crop damage model configured to identify, based on a crop state, whether the damage to the crop has taken place or will take place. As an option, said at least one source comprises one or more satellite sources. As an option, updating the Bayesian framework further comprises: receiving the one or more set(s) of weighted confidence predictions associated with each crop model; determining whether the received one or more set(s) of weighted confidence predictions is within a predetermined range; and predicting said one or more probabilities based on the determination. As an option, further comprising: if the received information is not within a predetermined range, determining a separate set of weighted confidence information different from the received one or more set(s) of weighted confidence predictions; and predicting said one or more probabilities based on the received one or more set(s) of the weighted confidence information and the separate set of weighted confidence information. As an option, each crop model is configured to receive weighted confidence information from at least one other crop model. As an option, said at least one other crop model is from a different agricultural field. As an option, further comprising: validating each crop model based on the output of at least one other crop model; and updating at least one model in the Bayesian framework based on the validation.
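The range check on weighted confidence predictions, with a separate fallback set derived when values fall outside the predetermined range, might look like the following sketch. The clipping used to derive the fallback set, the range limits, and the model names are assumptions for illustration only.

```python
def check_and_predict(weighted_confidences, low=0.2, high=0.9):
    # If every model's weighted confidence lies within [low, high],
    # use the received set as-is; otherwise derive a separate set
    # (here by clipping into range -- an assumption) and combine
    # the two sets by averaging to form the probability estimate.
    in_range = all(low <= c <= high for c in weighted_confidences.values())
    if in_range:
        return dict(weighted_confidences)
    fallback = {m: min(max(c, low), high) for m, c in weighted_confidences.items()}
    return {m: (weighted_confidences[m] + fallback[m]) / 2
            for m in weighted_confidences}

# Hypothetical per-model weighted confidence predictions.
preds = {"planting_date_model": 0.95, "yield_model": 0.6}
adjusted = check_and_predict(preds)
```

An out-of-range value (0.95) is pulled toward the range while in-range values pass through unchanged, which is one way a framework could dampen an over-confident model before updating.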
As an option, further comprising: generating a dynamic rule set amongst at least two crop models based on said at least one crop state and weighted confidence information associated with said at least two crop models; combining said at least two crop models to form an ensemble model based on an order provided by the dynamic rule set, wherein the order is determined based on the output of each of said at least two crop models; updating the ensemble model based on the seasonal image data until the ensemble model is able to predict a crop state in the following agricultural season within at least one confidence interval in relation to the dynamic rule set; providing the ensemble model as part of the Bayesian framework for generating the forecast; and updating the dynamic rule set based on the forecast. The above description discusses embodiments of the invention with reference to a single user for clarity. It will be understood that in practice the system may be shared by a plurality of users, and possibly by a very large number of users simultaneously. The embodiments described above may be configured to be semi-automatic and/or fully automatic. In some examples a user or operator of the querying system(s)/process(es)/method(s) may manually instruct some steps of the process(es)/method(s) to be carried out. The described embodiments of the invention, such as a system, process(es), method(s) and the like according to the invention and/or as herein described, may be implemented as any form of computing and/or electronic device. Such a device may comprise one or more processors which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to gather and record routing information.
In some examples, for example where a system on a chip architecture is used, the processors may include one or more fixed function blocks (also referred to as accelerators) which implement a part of the process/method in hardware (rather than software or firmware). Platform software comprising an operating system or any other suitable platform software may be provided at the computing-based device to enable application software to be executed on the device. Various functions described herein can be implemented in hardware, software, or any combination thereof. If implemented in software, the functions can be stored on or transmitted over as one or more instructions or code on a computer-readable medium or non-transitory computer-readable medium. Computer-readable media may include, for example, computer-readable storage media. Computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. A computer-readable storage media can be any available storage media that may be accessed by a computer. By way of example, and not limitation, such computer-readable storage media may comprise RAM, ROM, EEPROM, flash memory or other memory devices, CD-ROM or other optical disc storage, magnetic disc storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disc and disk, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD). Further, a propagated signal is not included within the scope of computer-readable storage media. 
Computer-readable media also include communication media, including any medium that facilitates transfer of a computer program from one place to another. A connection or coupling, for instance, can be a communication medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then those cables and wireless technologies are included in the definition of communication medium. Combinations of the above should also be included within the scope of computer-readable media. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, hardware logic components that can be used may include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. Although illustrated as a single system, it is to be understood that the computing device may be a distributed system. Thus, for instance, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device. Although illustrated as a local device, it will be appreciated that the computing device may be located remotely and accessed via a network or other communication link (for example using a communication interface). The term ‘computer’ is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes PCs, servers, IoT devices, mobile telephones, personal digital assistants and many other devices.
Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques known to those skilled in the art, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like. It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. Variants should be considered to be included within the scope of the invention. Any reference to ‘an’ item refers to one or more of those items. The term ‘comprising’ is used herein to mean including the method steps or elements identified, but that such steps or elements do not comprise an exclusive list and a method or apparatus may contain additional steps or elements. As used herein, the terms “component” and “system” are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor. The computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices.
Further, as used herein, the terms “exemplary”, “example” and “embodiment” are intended to mean “serving as an illustration or example of something”. Further, to the extent that the term “includes” is used in either the detailed description or the claims, such a term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim. The figures illustrate exemplary methods. While the methods are shown and described as being a series of acts that are performed in a particular sequence, it is to be understood and appreciated that the methods are not limited by the order of the sequence. For example, some acts can occur in a different order than what is described herein. In addition, an act can occur concurrently with another act. Further, in some instances, not all acts may be required to implement a method described herein. Moreover, the acts described herein may comprise computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media. The computer-executable instructions can include routines, subroutines, programs, threads of execution, and/or the like. Still further, results of acts of the methods can be stored in a computer-readable medium, displayed on a display device, and/or the like. The order of the steps of the methods described herein is exemplary, but the steps may be carried out in any suitable order, or simultaneously where appropriate. Additionally, steps may be added or substituted in, or individual steps may be deleted from any of the methods without departing from the scope of the invention. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
It will be understood that the above description of embodiments of the invention is given by way of example only and that various modifications may be made by those skilled in the art. What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable modification and alteration of the above devices or methods for purposes of describing the aforementioned aspects, but one of ordinary skill in the art can recognize that many further modifications and permutations of various aspects are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations.
11861521 | DETAILED DESCRIPTION The systems and methods described herein address a technical problem tied to conversational systems and arising in the realm of Interactive Voice Response (IVR) systems, namely improving the identification and verification of users. The described systems and methods provide knowledge-based identification and/or verification of users from speech. In other words, the systems and methods identify and/or verify the user based on “what the user knows”, where the user provides the information indicating “what they know” as speech. Such speech-based knowledge-based authentication can be used to automatically identify and/or verify users through spoken language. Other automated approaches to identification and verification may use non-verbal input methods or different authentication factors, such as biometric authentication factors. The use of non-verbal input methods may be unsuitable for some applications, for example where audio is the only available input mechanism for the user. Furthermore, even where non-verbal input methods are available, the use of such methods within an IVR system may inconvenience the user, e.g. such non-verbal input methods may require tactile interaction with a device so cannot be used hands-free. Knowledge-based authentication factors may be used by the system in order to provide a service, in addition to performing user identification and verification. For example, where the IVR system is a booking system, the user's full name, address, postcode and telephone number may be stored in order to make bookings. This information can then also be used to perform identification and verification. Knowledge-based authentication factors can also be used in addition to such other identification and verification approaches in some examples. For example, knowledge-based authentication factors may be used in combination with biometric authentication factors to provide a greater level of security. 
In a knowledge-based authentication approach which uses speech input, non-exact matches between information extracted from the speech input and the stored knowledge-based authentication factors can occur. In particular, the automatic speech recognition performed on the input audio may sometimes return a word which is similar to but not the same as the word spoken by the user. Thus, it is desirable that knowledge-based identification and verification systems using speech can account for non-exact matches, while maintaining a desired level of security. The proposed systems and methods can utilize non-exact matches for identification and verification while maintaining a level of security. The proposed systems and methods use fuzzy logic to calculate a total score usable for identification and verification, based on scores across multiple knowledge-based authentication factors. The total score may be used to quantify an overall closeness of a match between the inputs provided by a user and the knowledge-based authentication factors. Fuzzy logic is used to support the use of non-exact matches and therefore increase recall, e.g. reduce false negatives. Fewer operations may be performed in identifying and verifying a user, as fewer reattempts at the identification and verification may be needed. In some examples, whether a user is identified and/or verified is determined based on whether the score is above or below a threshold. Therefore, a requisite level of security may be achieved by setting this threshold to an appropriate value. For example, the requisite level of security for a telephone help IVR system may be lower than that for an e-banking IVR system, so the threshold used for a telephone help system may be lower than that used for an e-banking IVR system. FIG. 1 shows a schematic of a system 100 for identifying or verifying a user in accordance with an embodiment.
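The fuzzy total score across knowledge-based factors, compared against a security threshold, can be sketched as follows. The per-field similarity measure (here Python's `difflib.SequenceMatcher`), the field weights, and the threshold value are illustrative assumptions, not the patented scoring method.

```python
from difflib import SequenceMatcher

def field_score(spoken, reference):
    # Fuzzy similarity in [0, 1]; 1.0 is an exact match.
    return SequenceMatcher(None, spoken.lower(), reference.lower()).ratio()

def total_score(spoken_values, reference_values, weights):
    # Weighted combination across knowledge-based authentication factors.
    return sum(
        weights[f] * field_score(spoken_values[f], reference_values[f])
        for f in weights
    ) / sum(weights.values())

# Hypothetical stored reference values and ASR-extracted spoken values.
reference = {"first_name": "John", "last_name": "Smith", "postcode": "CB1 2AB"}
spoken = {"first_name": "Jon", "last_name": "Smith", "postcode": "CB1 2AB"}
weights = {"first_name": 1.0, "last_name": 1.0, "postcode": 2.0}  # assumed

score = total_score(spoken, reference, weights)
verified = score >= 0.85  # threshold set per the required security level
```

The ASR near-miss ("Jon" for "John") lowers the first-name score below 1.0, but the weighted total can still clear the threshold, illustrating how non-exact matches are tolerated without abandoning a security bar.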
The system 100 comprises a spoken dialogue system that requests information relevant for identification or verification from the user over several dialogue turns. The system 100 includes an automatic speech recognition module 110, a text-to-speech module 120, a user database 130, a natural language understanding module 140, a phonetic processing module 150, a dialogue module 160, and an identification and verification module 170. The system is used to conduct a dialogue with a user. The automatic speech recognition (ASR) module 110 receives speech data from a speech input provided by a user and generates a text signal based on the speech data. The speech data may comprise an audio file or audio stream. In this example, the text signal comprises an orthographic text signal. For example, the text signal may be orthographic text representing text in a natural language, e.g. English in Latin characters, Russian in Cyrillic characters, or Mandarin Chinese in Chinese characters. Characters of orthographic text representing text in a natural language may be referred to as graphemes. The speech recognition module 110 transcribes the user's utterance into a list of N possible orthographic texts, referred to as an N-best list, where N is an integer greater than zero. In some examples, the ASR module 110 additionally or alternatively outputs a phonetic text signal. For example, the text signal may represent the pronunciation of the one or more audio inputs using a phonetic alphabet, e.g. the International Phonetic Alphabet (IPA) or Speech Assessment Methods Phonetic Alphabet (SAMPA). Characters of phonetic text may be referred to as phonemes. The ASR module 110 may perform a first step of generating a phonetic text signal from the audio signal, and a second step of generating an orthographic text signal from the phonetic text signal.
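An N-best list pairing orthographic texts, phonetic texts, and their probabilities can be represented as in the sketch below; the hypotheses, SAMPA-style phonetic strings, and probability values are hypothetical, chosen only to show the shape of the data.

```python
from dataclasses import dataclass

@dataclass
class AsrHypothesis:
    orthographic: str   # natural-language transcription (graphemes)
    phonetic: str       # pronunciation, e.g. SAMPA-style (phonemes)
    probability: float  # posterior assigned by the ASR model

# A hypothetical 3-best list for one user utterance, most probable first.
n_best = sorted(
    [
        AsrHypothesis("John Smith", "dZQn smIT", 0.62),
        AsrHypothesis("Jon Smith", "dZQn smIT", 0.25),
        AsrHypothesis("Joan Smith", "dZ@Un smIT", 0.08),
    ],
    key=lambda h: h.probability,
    reverse=True,
)
top = n_best[0]
```

With N=1 only `top` would be kept; with N greater than 1 the downstream modules can consider the alternative hypotheses in probability order.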
In some examples, both the phonetic text signal and the orthographic text signal are outputted, in the form of a list of N possible orthographic texts and the N corresponding phonetic texts. The automatic speech recognition module 110 may perform speech recognition using any suitable method. For example, a trained speech recognition algorithm based on a neural network or Hidden Markov Model may be used. ASR models may assign posterior probabilities to words and/or characters in an utterance given the input audio signal. The ASR output takes the form of an N-best list, which approximates the full posterior distributions over the ASR outputs by returning the top N most probable outputs with their respective probabilities. In some examples, only the top scoring ASR output is used, in other words N is equal to 1. In other examples, N is greater than one, such that multiple ASR outputs are outputted. The ASR module 110 outputs an N-best list for each user input utterance. The speech data comprises a user input. The inputs by the user may be received using an audio input device, e.g. a microphone. The inputs may be received by an audio input device connected to or forming part of the computing device(s) implementing the ASR module 110, e.g. where the ASR module 110 is implemented on a client computing device. Alternatively, the ASR module 110 may be implemented on a server computing device, with the inputs being obtained on a different client computing device. The speech of the user may be encoded into one or more audio data files and/or one or more audio data streams by the client computing device. The client computing device may send the one or more audio data files and/or one or more audio data streams to the server computing device. The server computing device may receive the one or more audio data files and/or the one or more audio data streams.
The ASR module 110 may process the received one or more audio data files and/or the one or more audio data streams as the speech data in order to generate the text signal. The ASR module 110 may be a biased ASR module 110. A biased ASR module 110 may boost the probabilities that certain words and/or types of words are recognised in the provided speech. For example, the ASR module 110 may be adapted so as to increase the likelihood that names are recognised. As another example, where verification of a particular user is being performed, the ASR module 110 may be biased so as to increase the likelihood that the details of that user are recognised, e.g. if the user to be verified is called ‘John Smith’ and was born in November then the probability that ‘John’, ‘Smith’, and ‘November’ are recognised may be increased. The user database 130 stores reference values corresponding to a plurality of user data fields for each of a plurality of registered users. The plurality of user data fields comprises user data fields for identifying and/or verifying the user. Examples of such user data fields include but are not limited to: first name, last name, middle name(s), full name, postal codes (e.g. a postcode, ZIP code, Postal Routing Number, or Postal Index Number), address, one or more telephone number(s), date-of-birth, identification (ID) number, passphrase and/or password. The reference values comprise the information that populates the user data fields. For example, for a first registered user, the user data field “first name” is populated with the reference value “John”, and the user data field “second name” is populated with the reference value “Smith”. For a second registered user, the user data field “first name” is populated with the reference value “Joan”, and so on. The user database 130 provides data storage and access functionality, whereby data can be written to, deleted from, overwritten to and read from the user database 130.
The user database130may be any combination of software and hardware capable of providing this functionality. For example, the database software may be: a relational database server, e.g. a Structured Query Language (SQL) database server; a NoSQL database server, e.g. a key-value store server, a column store server, a document store server or a graph database server; or a file storage server, e.g. a File Transfer Protocol (FTP) server. The natural language understanding (NLU) module140includes a value extraction module142and a parsing module144. The natural language understanding module140receives text from the ASR module110and extracts proposed values for one or more user data fields based on the received text signal(s). In this example, the NLU module140receives the N-best list of possible orthographic text signals from the ASR module110for the dialogue turn. The value extraction module142and the parsing module144extract relevant identifying information from these transcribed texts, to populate the user data fields. The extracted identifying information, also referred to as proposed values, might be postcode, names, etc. corresponding to the user data fields as described above. The natural language understanding module140outputs the proposed values for the one or more user data fields. In this example, the natural language understanding module140sends the proposed values for the one or more data fields to the dialogue module160. The NLU module140may also send the received N-best list from the ASR module110to the dialogue module160. The dialogue conducted with the user may comprise multiple dialogue turns. A dialogue turn comprises a user input and the subsequent system output. For example, a dialogue turn may begin at the start of a user input and end at the end of the next system output. The ASR module110outputs the N-best ASR outputs for each user input, i.e. for each dialogue turn. In this example, the ASR output is processed one turn at a time by the NLU module140.
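The reference values stored per registered user, described above, might be laid out as follows. This is a sketch for illustration only; the user identifiers, field names and values are invented, and a real deployment would use one of the database servers mentioned above rather than an in-memory map.

```python
# Illustrative sketch: reference values for user data fields, stored per
# registered user. Keys and values here are invented examples.
user_database = {
    "user-001": {"first_name": "John", "last_name": "Smith",
                 "date_of_birth": "19831205", "postcode": "CB1 2AB"},
    "user-002": {"first_name": "Joan", "last_name": "Smith",
                 "date_of_birth": "19900714", "postcode": "OX1 3CD"},
}

def reference_value(user_id, field):
    """Read a reference value for a user data field, or None if absent."""
    return user_database.get(user_id, {}).get(field)
```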
The received orthographic text signal(s) at the NLU module140comprises one or more ASR outputs from the ASR module110corresponding to a dialogue turn. In examples where N=1, the received text signal is a single ASR output. Proposed values for one or more user data fields are then extracted from the single ASR output. In examples where N is greater than 1, the received text signals include multiple ASR outputs, in the order of the probability assigned to each ASR output by the ASR module110, from the most likely to the least likely. Proposed values for one or more user data fields are extracted from the multiple ASR outputs, where the ASR outputs are analysed in order. N proposed values may be extracted for a user data field where the received text signal comprises an N-best list of ASR outputs. Determining one or more proposed values for a user data field from the received text signal(s) comprises extracting values for the user data field using the value extraction module142. In examples where N is greater than 1, a proposed value for a given user data field may be obtained from each of the multiple ASR outputs. The set of proposed value(s) for the user data field comprises the extracted value(s) for that user data field. The proposed value(s) for the user data field may further comprise one or more transformed value(s) derived from the extracted value(s) using the parsing module144. The parsing module144is configured to derive one or more transformed values based on an extracted value. An example is described in relation to date parsing below. The value extraction module142processes the received text signal(s) for the dialogue turn to extract one or more proposed values from the text signal corresponding to a user data field. The value extraction module142takes as input the text corresponding to an entry from the N-best list output from the ASR module—in other words text corresponding to the user utterance. 
The value extraction module142outputs a span of this text, in other words a part of this text, which is an extracted value. For example, the text signal from the automatic speech recognition module110is “My first name is John”. The value extraction module142extracts “John” as a proposed value corresponding to the “first name” user data field. In some examples, the value extraction module142uses a specific functionality for capturing character sequences, such as postal codes or phone numbers. For example, a regular expression that specifies a specific sequence of characters or character sequence pattern can be used to search for and extract a post code or phone number from the ASR output. The value extraction performed by the value extraction module142is based on information which indicates the expected user data field for the dialogue turn. During the dialogue, the system provides an output to the user each dialogue turn, which prompts the user to speak a relevant value corresponding to a user data field. For example, the system asks the user “What is your first name?”. In the next dialogue turn, a response including a value for the “first name” user data field is expected from the user. In one example, the value extraction module142may perform value extraction using the techniques described in Henderson & Vulić, “ConVEx: Data-Efficient and Few-Shot Slot Labelling”, 7 Jun. 2021, arXiv:2010.11791, the entire contents of which are incorporated herein by reference. Alternative methods of performing value extraction may be used however. FIGS.9(a) and9(b)show a schematic illustration of an example value extraction module142. The value extraction module142may perform a span based extraction task, in which a value is identified as a single span of text in the ASR output or entirely absent. The value extraction module may first represent the ASR output as a sequence of embeddings using a first model400, and a trained third model409is then used to determine the tag sequence.
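The regular-expression capture of character sequences such as postal codes, described above, can be sketched as follows. This is an illustrative sketch under simplifying assumptions: the pattern is an approximation of UK-style postcodes, not a complete postcode grammar.

```python
# Illustrative sketch: extracting a postcode-like span from an ASR
# transcript with a regular expression. The pattern is a simplified
# stand-in, not a full UK postcode specification.
import re

POSTCODE_PATTERN = re.compile(
    r"\b[A-Z]{1,2}[0-9][A-Z0-9]?\s*[0-9][A-Z]{2}\b", re.I)

def extract_postcode(transcript):
    """Return the first postcode-like span in the transcript, or None."""
    match = POSTCODE_PATTERN.search(transcript)
    return match.group(0).upper() if match else None
```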
The tag sequence tags each unit in the sequence with a tag from a set of tags, including a first tag which represents a value. A plurality of third models409can be used, each corresponding to a different user data field. The value extraction module142has a transformer structure. In more detail, and as shown inFIGS.9(a) and9(b), the embeddings output from the first model400are taken as input to an additional features residual layer402. The sequence of embeddings output from this layer are taken as input to an input feed forward network (FFN)404and also as input to a template feed forward network406in the third model409. These project the embeddings down to lower dimensional embeddings. The third model409further comprises a second model411. The second model411determines a tag sequence from the sequence of embeddings output from the input FFN404. The second model411comprises a repeating decoder block408. In the example shown, the repeating block408comprises: a block of self-attention412, followed by a layer normalisation413, a block of attention414over the sequence of embeddings output from the template FFN406, followed by a layer normalisation415and an FFN layer416, followed by a layer normalisation417. The attention layer414uses the embeddings output from the template FFN406. These layers are repeated, such that the second model411comprises a second set of these layers. For simplicity, a single block408is shown in the FIGURE, with the repetition indicated by “×2”. The output sequence of the first block408is fed as an input to the second block408. The second model further comprises a CRF layer410. The sequence of vectors output from the second repeated block408is then taken as input to the CRF layer410, which is a linear layer which computes Conditional Random Field (CRF) parameters for tagging the value span using four tags: BEFORE, BEGIN, INSIDE, and AFTER. The BEFORE tag tags the vectors in the part of the input utterance before the value. 
The BEGIN and INSIDE tags tag the vectors corresponding to the value. The AFTER tag tags the vectors in a part after the value. In order to train the first model400and the third model409, a transfer learning approach is used, in which a general pre-training process is performed, followed by a specific “fine-tuning” training process using annotated examples.FIG.9(c)shows an example pre-training process. The pre-training objective comprises sentence-pair value extraction. The training data set comprises sentence pairs from natural language data which share a key phrase, for example a “value” such as a name. In the example shown inFIG.9(c), the input sentence is “Are you Tina yes?” and the template sentence is “how old is [blank]”, where the phrase “Tina” has been replaced by a special [BLANK] token in the template sentence. The first model400generates a fixed number of embeddings corresponding to the input sentence and corresponding to the template sentence. During the pre-training stage, the residual layer402is not included. A first part of the input data (in this case “Tina”) corresponds to a first unit (in this case referred to as “BLANK”) in the template data. The input set of embeddings corresponding to the input sentence is taken as input to the first input FFN404. However, unlike during inference, when the input embeddings are also taken as input to the second template FFN406, during the pre-training process the template embeddings are taken as input to the template FFN406. The token [BLANK] corresponds to an embedding. The second model411determines a tag sequence. The attention layer414uses the embeddings output from the template FFN406. The value extractor model is trained to predict which tokens in the input sentence constitute the key phrase. The input sentence is automatically labelled with the “true” tag values.
The part of the input sentence prior to the identified key phrase is labelled “BEFORE”, the part of the input sentence corresponding to the key phrase is labelled “BEGIN” and “INSIDE”, and the part of the input sentence after the key phrase is labelled “AFTER”. These labels are applied automatically, based solely on the key phrase (i.e. that which is in the template sentence). Parameters of the value extractor model are then fine-tuned by training using manually annotated data corresponding to the intended use case after the pre-training stage. In some examples, the NLU module140is further configured to perform intent classification. For example, the NLU module140may classify the received text signal, or a part thereof, as corresponding to one of a set of possible user intents. Each of the intents may provide an indication of an operation to perform as part of, instead of, or in addition to the identification and/or verification process. The NLU module140sends the intent to the dialogue module160. An example intent might be that the user wishes to speak to a human operator for example. The parsing module144parses values extracted by the value extraction module142to derive transformed values. The parsing module may be implemented using one or more finite state transducers (FSTs), a date parsing library, and/or one or more machine-learning models, e.g. one or more neural networks for example. Parsing may be applied to some of the values extracted using the value extraction module142but may not be applied to others of the values extracted using the value extraction module142. For example, extracted values corresponding to certain user data fields may be parsed while extracted values corresponding to other user data fields may not be parsed. The parsing module144outputs data in a format of a specific data type, for example a date. 
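The automatic BEFORE/BEGIN/INSIDE/AFTER labelling around a known key phrase, described above, can be sketched as follows. This is an illustrative sketch of the labelling rule only, not of the trained tagging model itself.

```python
# Illustrative sketch: automatically tag each token of an input sentence
# relative to the first occurrence of a known key phrase, as in the
# pre-training labelling described above.
def tag_tokens(tokens, key_phrase_tokens):
    """Label tokens BEFORE/BEGIN/INSIDE/AFTER around the key phrase."""
    n, k = len(tokens), len(key_phrase_tokens)
    for start in range(n - k + 1):
        if tokens[start:start + k] == key_phrase_tokens:
            return (["BEFORE"] * start
                    + ["BEGIN"] + ["INSIDE"] * (k - 1)
                    + ["AFTER"] * (n - start - k))
    return None  # key phrase absent from the input

tags = tag_tokens(["are", "you", "Tina", "yes"], ["Tina"])
```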
User data fields whose values are to be parsed may include user data fields which are to be utilized as structured data and/or in a specified format for example. For example, the parsing module144may transform values for user data fields corresponding to dates from a transcription of a speech input of a date into a date data format, e.g. a numerical date data format. By transforming dates into a date data format, comparison of proposed dates with reference dates stored in the user database is facilitated. For example, a reference date may be stored in the user database using a numerical format of the type ‘yyyymmdd’, e.g. ‘19831205’ represents 5 Dec. 1983. Examples of transcriptions of a user's speech input for this date include “fifth December eighty-three” (UK spoken date), “December fifth eighty-three” (US spoken date), “twelve five nineteen eighty-three” (US numerical date) and “five twelve nineteen eighty-three” (UK numerical date). The parsing module144may receive any of the above transcriptions extracted by the value extraction module and transform them into the date data format, e.g. as ‘19831205’. For certain transcriptions, multiple valid transformations into a date data format may be possible. For example, in the previously presented example ‘five twelve nineteen eighty-three’ may represent 5 Dec. 1983 or 12 May 1983, depending on whether the date is interpreted as a ‘UK numerical date’ or ‘US numerical date’, so may be validly transformed into ‘19831205’ or ‘19830512’. Where such multiple valid transformations of a transcription are possible, all or a subset of these multiple transformed values may be outputted by the parsing module144as transformed values for the respective user data field. The parsing module144may also perform parsing for spellings for example. For example, an extracted value may be parsed to generate a transformed value that corresponds to an alternative spelling of the extracted value.
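The ambiguous numerical date parsing described above, which returns both the UK day-first and US month-first readings in the ‘yyyymmdd’ format, can be sketched as follows. This is an illustrative sketch; a real parsing module might use finite state transducers or a date parsing library as noted above.

```python
# Illustrative sketch: parse three spoken numbers into every valid
# 'yyyymmdd' reading (UK day-first and US month-first), as in the
# 'five twelve nineteen eighty-three' example above.
from datetime import datetime

def parse_numeric_date(day_or_month, month_or_day, year):
    """Return all valid yyyymmdd interpretations of the spoken numbers."""
    results = []
    for day, month in {(day_or_month, month_or_day),
                       (month_or_day, day_or_month)}:
        try:
            results.append(datetime(year, month, day).strftime("%Y%m%d"))
        except ValueError:
            pass  # e.g. month 13 or day 32 is not a valid reading
    return sorted(results)

candidates = parse_numeric_date(5, 12, 1983)
```

When only one reading is valid (e.g. a day greater than 12), a single transformed value results.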
The values output from the value extraction module142are referred to as the extracted values, where an extracted value comprises a span of text. The values output from the parsing module144are referred to here as the transformed values, where the transformed value may comprise a span of text or data formatted as a specific data type (e.g. a date). The set of proposed values generated for a user data field comprises the extracted values and any transformed values corresponding to the user data field. The proposed values are outputted from the NLU module140to the dialogue module160. The dialogue module160includes a dialogue manager162and dialogue state164. The dialogue manager162selects an appropriate system response following the latest user input. In this way, the dialogue manager162controls a dialogue flow in which a user is queried for information to populate the user data fields, as part of a multi-turn conversation. A rule based dialogue manager162may be used, in which the desired system behaviour is manually specified in a set of stored rules. The dialogue manager162selects a system output from a set of stored possible system outputs, based on the set of stored rules. The rules are applied to the current information in the dialogue state164to determine the next system output. Each system output comprises a text response to the user. At least some of the set of text responses correspond to requests for information to populate the various user data fields, such as “What is your first name?” and “What is your postcode?”. For example, the dialogue manager162asks for information corresponding to one user data field for each dialogue turn, where the set of responses comprises a request corresponding to each user data field. The rules may be applied in a set order, so that the system requests information for the user data fields in a set order. 
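The rule-based selection of the next system output, requesting information for the user data fields in a set order as described above, can be sketched as follows. This is an illustrative sketch: the field order, prompt texts and the skip-if-filled behaviour are invented assumptions, not the claimed rule set.

```python
# Illustrative sketch: a rule-based dialogue manager choosing the next
# prompt, requesting the first user data field with no value stored in
# the dialogue state. Field order and prompts are invented.
FIELD_ORDER = ["first_name", "last_name", "postcode", "date_of_birth"]
PROMPTS = {
    "first_name": "What is your first name?",
    "last_name": "What is your last name?",
    "postcode": "What is your postcode?",
    "date_of_birth": "What is your date of birth?",
}

def next_prompt(dialogue_state):
    """Return the prompt for the first unfilled field, else a closing message."""
    for field in FIELD_ORDER:
        if not dialogue_state.get(field):
            return PROMPTS[field]
    return "Thank you, I have all the details I need."

prompt = next_prompt({"first_name": ["John"]})
```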
In some examples, the rules may be conditioned on the information stored in the dialogue state, so that the system requests information for a user data field which does not have a value stored in the dialogue state. The system outputs may be provided to the user as text or as speech. If the outputs are to be provided as speech, the dialogue manager162provides the system output text to a text-to-speech module120, which converts it to speech. In some examples, the dialogue manager162may receive an intent from the NLU module140. The dialogue manager162may use the intent in selecting the system response. For example, the intent may indicate that the user wishes to speak to a (human) manager. The dialogue manager162may then provide a system output “transferring you to a manager” and may cause the user to be connected to a manager. As another example, the intent may correspond to a request by the user to input a certain user data field. The dialogue manager162may then provide a system response requesting that the user provide an input for this user data field. The dialogue manager162receives information including the proposed value(s) for a user data field from the natural language understanding module140each dialogue turn. The dialogue manager162may further receive the ASR output, in this example the N-best list of orthographic text outputs corresponding to the dialogue turn. The dialogue manager162may further receive information indicating the user intent. The dialogue manager162maintains the dialogue state164. The dialogue state164comprises stored information, including the proposed values received from the natural language understanding module140. The dialogue state164may further store the ASR output and/or the intent information. The dialogue manager162stores and tracks the proposed values output from the NLU module140in the dialogue state164. The dialogue state164may be stored using a map data structure, e.g. 
a key-value map, which maps the name of or another identifier for a user data field (the key) to one or more proposed values for that user data field. Examples of map data structures include Python dictionaries and Java HashMaps. For an identification task, as the proposed values are received from the NLU module140, the dialogue manager162also issues API calls to query the user database130regarding registered users. Any returned possibilities are also stored in the dialogue state164, as a list of candidate users. For example, any registered users having values for one or more user data fields which are the same as or similar to the proposed values are taken. For example, all users either having first names similar to ‘John’ or located in “Cambridge” may be obtained. The user database130may be queried using an information retrieval system, framework or library. For example, an API using a fuzzy query may be used to query the user database130. Alternatively, all of the registered users are obtained from the user database130in this step. Greater recall may be achieved by obtaining more users from the user database130as candidate users. Obtaining more users from the user database130may increase the amount of computational resources used however, e.g. the amount of computational processing power, memory and/or bandwidth used. In an example, 100 or fewer users may be obtained from the user database130as the list of candidate users. Obtaining a maximum of approximately 100 users may facilitate high recall while limiting the amount of computational resources used. While 100 users is provided as an example, the number of users obtained may be set according to the amount of computational resources available. For example, where the amount of computational resources is greater, the maximum number of users may be in the range 101-1000, e.g. in the range 200-300, 250-500, or 500-750. 
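The retrieval of a capped list of candidate users whose reference values are the same as or similar to the proposed values, described above, can be sketched as follows. This is an illustrative sketch: the similarity rule (a difflib ratio with an invented threshold) and the in-memory database stand in for the fuzzy query API mentioned above.

```python
# Illustrative sketch: return up to max_candidates user ids whose stored
# reference value for any queried field loosely matches a proposed value.
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    """True when two strings are sufficiently alike (invented threshold)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def candidate_users(user_database, proposed, max_candidates=100):
    """Collect candidate users matching any proposed value, capped in size."""
    candidates = []
    for user_id, fields in user_database.items():
        if any(similar(str(fields.get(field, "")), value)
               for field, value in proposed.items()):
            candidates.append(user_id)
        if len(candidates) >= max_candidates:
            break
    return candidates

db = {"user-001": {"first_name": "John"}, "user-002": {"first_name": "Joan"}}
matches = candidate_users(db, {"first_name": "John"})
```

Raising max_candidates trades higher recall against the additional computational resources noted above.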
For a verification task, the dialogue manager162may be provided separately with information identifying the registered user for which the verification is being performed. For example, the registered user may be identified using caller identification of the phone number from which a user of the IVR system is calling. As another example, the registered user may be identified from the user account using which the user is interacting with the IVR system, e.g. the user may interact with the IVR system using a dedicated application, a VoIP application, a messaging application with VoIP functionality, and/or a web application with VoIP functionality, for which the user has a user account. The registered user information is stored in the dialogue state164as the single candidate user. Alternatively, an identification task may be performed together with the verification task, in order to first identify the registered user against which the user is to be verified. The dialogue state164may be populated with the candidate user information prior to, during or after the dialogue flow. After the dialogue manager162has collected the information from the user dialogue (comprising multiple turns) and the user database130, the identification and verification process is performed using the identification and verification module170. Once the dialogue state164comprises at least one proposed value for each user data field used by the identification and verification module170, the dialogue manager162determines that the identification and verification process can be performed. The dialogue manager162formulates the proposed values stored in the dialogue state164into a set of candidate hypotheses. In this example, the set of candidate hypotheses comprise the N-best list of raw transcripts output from the ASR module110, and the corresponding list of extracted and parsed values for the user data field. Candidate hypotheses are generated during the dialogue, i.e.
during the conversational exchange with the user. The candidate hypotheses may be stored in an ordered list, where the order is from the most likely to the least likely. The ordered list may comprise the parsed values in order of the corresponding ASR N-best list, followed by the extracted values in order of the corresponding ASR N-best list, followed by the ASR N-best list output. In this example, a set of one or more candidate hypotheses is generated for each user data field, where the set comprises the parsed values in order of the N-best list, followed by the extracted values in order of the N-best list, followed by the ASR N-best list output for the user data field. However, the hypotheses may comprise only extracted values or only proposed values for example. In some embodiments, a pre-built library is used to formulate the candidate hypotheses from the proposed values, alternatively to or in conjunction with the dialogue manager162. The set of candidate hypotheses include one or more proposed values for each of a plurality of user data fields. The proposed values are received from the natural language understanding module140. There may be multiple proposed values for a given user data field, for example because a value for the user data field may have been extracted from each of multiple ASR outputs. The candidate hypotheses may further comprise the ASR N-best list. The system also formulates the list of one or more candidate users stored in the dialogue state164into candidate references. Candidate references comprise the reference values for the user data fields for each candidate user. Where (only) verification is being performed, the reference values correspond to a single candidate user to be verified. Where identification is being performed, the reference values correspond to a plurality of candidate users. The reference values are obtained from the user database130. 
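The ordered hypothesis list described above, with parsed values first, then extracted values, then the raw ASR N-best outputs, can be sketched as follows. This is an illustrative sketch; the example values continue the date example used earlier.

```python
# Illustrative sketch: assemble the ordered candidate hypotheses for one
# user data field: parsed values, then extracted values, then the raw
# ASR N-best outputs, dropping duplicates while preserving order.
def candidate_hypotheses(parsed, extracted, asr_n_best):
    """Concatenate the three sources in priority order without repeats."""
    ordered = []
    for value in parsed + extracted + asr_n_best:
        if value not in ordered:
            ordered.append(value)
    return ordered

hyps = candidate_hypotheses(
    parsed=["19831205", "19830512"],
    extracted=["five twelve nineteen eighty-three"],
    asr_n_best=["it's five twelve nineteen eighty-three"],
)
```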
The phonetic processing module150is a module for translating graphemes to phonemes, for example translating orthographic text to phonetic text. The phonetic processing module150receives text from the Identification and Verification Module170. For example, the Identification and Verification Module170may transmit the N-best ASR outputs to the phonetic processing module150. The phonetic processing module generates phonetic text corresponding to the orthographic text for each entry in the N-best list. The output of the phonetic processing module150is provided to the identification and verification module170. The phonetic processing module150may output one or more phonetic texts for each input orthographic text. For example, the phonetic processing module150may output a plurality of phonetic texts for a given orthographic text, as there may be multiple valid phonetic translations of the orthographic text, e.g. there may be multiple valid pronunciations of the orthographic text. The plurality of phonetic texts for the given orthographic text may be an M-best list of phonetic texts for the given orthographic texts, where M is greater than or equal to N. The phonetic processing module150may translate graphemes to phonemes using any suitable method. For example, the phonetic processing module may translate graphemes to phonemes using a Hidden Markov Model-based grapheme-to-phoneme model, a neural machine translation-based grapheme-to-phoneme model, a phonetic dictionary-based method, a rules-based method or any combination thereof. The phonetic processing module150is an optional module, and in some examples this module is omitted. For example, where the ASR module110additionally outputs a phonetic text signal or where phonetic comparison is not performed, the phonetic processing module150may be omitted. 
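A phonetic dictionary-based grapheme-to-phoneme lookup, one of the methods mentioned above, can be sketched as follows. This is an illustrative sketch: the dictionary entries and phoneme notation are invented, and the cache is a simple least-recently-used cache standing in for the frequency-aware caching of expensive translations.

```python
# Illustrative sketch: dictionary-based grapheme-to-phoneme translation
# with cached results, since translation can be computationally
# expensive. lru_cache keeps recently requested texts, which roughly
# favours frequently requested ones such as common names.
from functools import lru_cache

PHONETIC_DICTIONARY = {  # invented entries and notation
    "john": "JH AA N",
    "joan": "JH OW N",
    "smith": "S M IH TH",
}

@lru_cache(maxsize=1024)
def to_phonetic(orthographic_text):
    """Return the phonetic text, or None if any word is out of vocabulary."""
    phonemes = [PHONETIC_DICTIONARY.get(w)
                for w in orthographic_text.lower().split()]
    if None in phonemes:
        return None  # fall back to e.g. a rules-based method
    return " ".join(phonemes)
```

A real module might return several pronunciations per orthographic text, as described above.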
Where the ASR module110additionally outputs phonetic text, the phonetic text is provided to the dialogue manager160, which in turn provides the phonetic text to the identification and verification module170. Translating graphemes to phonemes may be computationally expensive. In some examples, the phonetic processing module150may cache the one or more phonetic texts for a given input orthographic text. When the phonetic processing module150receives a future request for phonetic text(s) for the given input orthographic text, the phonetic processing module150may provide these cached one or more phonetic texts. The cache may have a limited capacity, e.g. the cache may be limited to storing a maximum number of phonetic texts and/or using a maximum amount of memory. In this situation, the input orthographic texts for which phonetic text(s) are stored may be determined based on the frequency with which phonetic texts for that orthographic text are requested, e.g. the phonetic texts for frequently requested orthographic input texts may be kept in the cache whereas the phonetic texts for rarely requested orthographic input texts may be discarded from the cache. For example, phonetic texts may be frequently requested for orthographic texts corresponding to common names of users, e.g. “Oliver”, “Sarah”, “John”, and “Emma” in the United Kingdom. As another example, phonetic texts may be frequently requested for orthographic texts corresponding to date components, e.g. “First”, “November” and “Twenty twenty one”. After the system100has collected all the information from the user dialogue and the user database130, the identification and verification process is performed. The information from the user dialogue comprises one or more hypotheses for each of a plurality of user data fields. The hypotheses comprise one or more proposed values for each of the plurality of user data fields. 
The hypotheses may further comprise one or more ASR outputs for each of the plurality of user data fields. The information from the user database130comprises one or more reference values for each of the plurality of user data fields for each of one or more candidate users. Thus after obtaining the set of hypotheses for all of the user data fields to be used by the verification or identification process, the dialogue manager162provides the hypotheses and reference values to the identification and verification module170. The dialogue manager162may receive an indication of the outcome of the identification and/or verification from the identification and verification module170. In response to receiving the confirmation of a successful identification and/or verification, the dialogue manager162may provide a system output confirming that the user has been identified and/or verified. The dialogue manager162may then provide the user access to an authenticated system. The authenticated system may be provided by or using another component or a distinct system for example. In response to receiving an indication of an unsuccessful identification and/or verification, the dialogue manager162may provide a system output informing the user of the outcome. The dialogue manager162may then not provide the user access to the authenticated system. The identification and verification module170may verify users using steps330to350of the example method300described in relation toFIG.3for example. The identification and verification module170may identify users using steps S430to S450of the example method400described with respect toFIG.4for example. As described above, the identification and verification module170may obtain proposed phonetic values and phonetic text to be compared with reference phonetic values.
If this phonetic text comparison is to be performed, the phonetic processing module150may extend the proposed orthographic text values, ASR output, and candidate reference orthographic text values received from the dialogue manager162with their phonetic transcriptions. In this case, the identification and verification module170may provide the proposed orthographic text values generated by the NLU module140and the ASR output to the phonetic processing module150to generate the proposed phonetic values and phonetic text. In this manner, the identification and verification module170obtains proposed phonetic values for the user data fields by translating the orthographic text values of the user data fields into phonetic text values using the phonetic processing module150. The identification and verification module170also obtains phonetic text corresponding to the ASR N-best list output by providing the ASR output to the phonetic processing module150. There may be more proposed phonetic values for at least one of the user data fields than proposed orthographic text values, because multiple phonetic translations of a proposed orthographic text value may be received from the phonetic processing module150. Alternatively or additionally, the proposed phonetic values and phonetic text may be obtained or have been obtained using the ASR module110, and the phonetic processing module150may be omitted. For example, the proposed phonetic text values may form part of the proposed values provided to the identification and verification module170by the dialogue manager162. Alternatively or additionally, the reference phonetic values may be retrieved from the user database130by the identification or verification module170or may have been retrieved by the dialogue manager162and form part of the reference values provided to the identification and verification module170. 
The user database130may store one or more reference phonetic values corresponding to each reference orthographic value for the user data field. There may be more reference phonetic values for at least one of the user data fields than reference orthographic text values, because multiple phonetic translations of a reference orthographic text value may be stored. The identification and verification module170includes a user data field score calculation module172and a fuzzy logic module174. The user data field score calculation module172may calculate user data field scores using any or any combination of the methods described in relation to the step334ofFIG.3, step434ofFIG.4, the method500described in relation toFIG.5, and/or the method600described in relation toFIG.6. The user data field score for a given candidate user may be calculated by comparing the reference value(s) for the user data field for the candidate user with the one or more hypotheses for that user data field. The calculated user data field score may be based on the closeness of the match. Fuzzy matching may be performed. The calculated user data field score may be from zero to one, with zero indicating strict dissimilarity and one indicating complete similarity. The user data field score calculation module172may apply a threshold to the user data field score such that calculated user data field scores below a specified value are set to zero. Examples of applying such thresholds are described below in relation to the step334ofFIG.3, step S434ofFIG.4, the method500described in relation toFIG.5A, and/or the method600described in relation toFIG.6. The user data field score calculation method may calculate user data field scores based on any, any combination or all of the hypotheses for a given user data field. 
For example, the user data field score for a given user data field may be calculated based on fuzzy comparisons of the proposed parsed value(s) (where parsing is performed for that user data field) with the reference value(s), fuzzy comparisons of the proposed extracted value(s) with the reference value(s), and fuzzy comparisons of the ASR outputs for the user data field with the reference value(s). The comparisons may occur in this order, e.g. the proposed parsed value(s) (where present) may be compared first, then the proposed extracted value(s), then the ASR outputs for the user data field. The fuzzy logic module 174 performs one or more fuzzy logic operations on the user data field scores for the candidate user to derive a score for the candidate user. Examples of fuzzy logic operations include fuzzy AND, fuzzy OR and fuzzy NOT operations. In this example, the one or more fuzzy logic operations are performed by applying one or more Zadeh operators. The Zadeh operators are as follows. The fuzzy AND of two inputs is the minimum of the two inputs: x AND y = MIN(x, y). The fuzzy OR of two inputs is the maximum of the two inputs: x OR y = MAX(x, y). The fuzzy NOT of an input is one minus the input: NOT(x) = 1 − x. These operators can facilitate use of "early stopping" procedures, as will be described below. In some alternative examples, the one or more fuzzy logic operations may be performed by applying one or more product fuzzy logic operators. The product fuzzy logic operators are as follows. The fuzzy AND of two inputs is the product of those inputs: x AND y = xy. The fuzzy NOT of an input is one minus the input: NOT(x) = 1 − x. The fuzzy OR of two inputs is derived from the fuzzy AND and fuzzy NOT operators as x OR y = NOT(AND(NOT(x), NOT(y))) = 1 − (1 − x)*(1 − y). As described above, the dialogue manager 162 selects an appropriate system response following the latest user input. The system outputs may be provided to the user as text or as speech.
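The two fuzzy algebras described above are small enough to sketch directly. The following is an illustrative Python rendering, not code from the embodiment; the function names are invented:

```python
def zadeh_and(x: float, y: float) -> float:
    """Fuzzy AND under Zadeh semantics: the minimum of the two inputs."""
    return min(x, y)

def zadeh_or(x: float, y: float) -> float:
    """Fuzzy OR under Zadeh semantics: the maximum of the two inputs."""
    return max(x, y)

def fuzzy_not(x: float) -> float:
    """Fuzzy NOT: one minus the input (the same in both algebras)."""
    return 1.0 - x

def product_and(x: float, y: float) -> float:
    """Product fuzzy AND: the product of the two inputs."""
    return x * y

def product_or(x: float, y: float) -> float:
    """Product fuzzy OR, derived via De Morgan: 1 - (1 - x)(1 - y)."""
    return fuzzy_not(product_and(fuzzy_not(x), fuzzy_not(y)))
```

Note that the Zadeh operators return one of their inputs unchanged, which is what makes the early-stopping shortcuts described later possible.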
If the outputs are to be provided as speech, the dialogue manager 162 provides the system output text to a text-to-speech module 120. The text-to-speech (TTS) module 120 receives an orthographic text signal from the dialogue manager 162 and generates speech data based on the text signal. The generated speech data may represent speech audio wherein the words and/or characters of the orthographic text are pronounced in accordance with the pronunciation rules of the relevant natural language. The generated speech data may be one or more audio data items or one or more audio data streams. The TTS module 120 may provide text-to-speech functionality using any suitable method. For example, the text-to-speech functionality may be provided using any of concatenative synthesis, formant synthesis, Hidden Markov Model-based synthesis, and/or deep learning-based synthesis. The TTS module 120 is an optional module which may be omitted. For example, where the system output is provided to the user as text, the TTS module 120 may be omitted. The TTS module 120 may be implemented on a server computing device with speech audio being output on a client computing device. The speech data may be sent by the server computing device to the client computing device. The client computing device may receive the speech data and output the speech audio represented by the speech data using an audio output device forming part of the client computing device or connected thereto. The speech data may be output to the user as speech audio using an audio output device, e.g. a speaker or headphones, or as text using a visual output device, e.g. a display. The output device may be connected to or form part of the computing device implementing the text-to-speech module 120, e.g. the output device may form part of the client computing device or be connected thereto. The modules 110, 120, 140, 150, 160, 170 may be implemented as one or more computer programs.
For example, all of the modules 110, 120, 140, 150, 160, 170 may be components of a single computer program. As another example, each of the modules 110, 120, 140, 150, 160, 170 may be implemented as individual computer programs communicating so as to provide the desired functionality. As another example, a subset of the modules may be implemented as one computer program and the others of the modules may be implemented as one or more other computer programs. Any of the modules 110, 120, 140, 150, 160, 170 may also be implemented as a plurality of computer programs. The modules 110, 120, 140, 150, 160, 170 may be implemented on a single computing device or may be implemented across a plurality of computing devices. For example, all of the modules 110, 120, 140, 150, 160, 170 may be implemented on one or more server computing devices, e.g. one or more local server computing devices and/or one or more computing devices of a cloud computing service. All of the modules 110, 120, 140, 150, 160, 170 may be implemented on one or more client computing devices. Examples of client computing devices include: smartphones, feature phones, tablet computers, laptop computers, and/or desktop computers. One or more of the modules 110, 120, 140, 150, 160, 170 may be implemented on one or more client computing devices and the other(s) of the modules implemented on one or more server computing devices. For example, the automatic speech recognition module 110 and the text-to-speech module 120 may be implemented on a client computing device while the modules 140, 150, 160 and 170 may be implemented on one or more server computing devices. In some examples, the system 100 is configurable and extensible. For example, the system 100 may use configurable modules for comparison of different types of values, for example text or dates. The system 100 can be extended by including modules configured for new data types. The system 100 can also be configured for various languages.
In order to extend the system 100 to a new language, the ASR module 110 and TTS module 120 for the current language may be replaced with an ASR module 110 and TTS module 120 configured for the new language. The NLU module 140 may be replaced with an NLU module configured for the new language, or an additional translation module may be provided to translate the output of the ASR module 110 to the previous language before it is provided to the NLU module 140. A natural language generation module may be included to translate the scripted questions provided by the dialogue manager 162 to the new language. FIG. 2 illustrates an example dialogue flow 200 in which a user is identified and verified in accordance with an embodiment. The system responses in the example dialogue flow may be generated by the system 100, as will be described below. The dialogue flow 200 includes question asking steps 210-214, answer receiving steps 220-224 and confirmation step 230. The dialogue state 164 is populated during the dialogue flow 200. The dialogue state 164 includes hypotheses 242 and references 244. The answers received in the answer receiving steps 220-224 are used to populate the hypotheses 242. The answers are in the form of speech data, for example an audio signal received from a user device. Details of registered users 244-1, 244-2, 244-3 are retrieved from the user database 130 and are used to populate the references 244. The populated dialogue state 164 is received by the identification and verification module 170. The identification and verification module 170 identifies and verifies the user based on the dialogue state 164. The confirmation step 230 is then performed. In step 210, the user is asked a question of "What is your postcode?" by the system. In step 220, a response from the user to the question of 210 is received. In the illustrated example, the response "C B one three P Q" is received. This is transcribed by the ASR module 110 as "C B one three P Q".
From this, a proposed value for the user data field "postcode" of "CB1 3PQ" is determined using the natural language understanding module 140. This proposed value for the postcode user data field is stored as part of the hypotheses information 242. In step 212, the user is asked a question of "What is your full name?". In step 222, a response from the user to the question of 212 is received. In the illustrated example, the response "John Smith" is received. This is transcribed by the ASR module 110 as "John Smith". From this, a proposed value for the user data field "name" of "John Smith" is determined using the natural language understanding module 140. This proposed value for the name user data field is stored as part of the hypotheses information 242. In step 214, the user is asked a question of "What is your date of birth?". In step 224, a response from the user to the question of 214 is received. In the illustrated example, the response "Thirtieth of November eighty nine" is received. This is transcribed by the ASR module 110 as "Thirtieth of November eighty nine". From this, a proposed value for the "date of birth" user data field of "30/11/1989" is derived using the natural language understanding module 140. This proposed value for the date of birth user data field is stored as part of the hypotheses information 242. As described in relation to FIG. 1 above, as the proposed values are received from the NLU module 140, the dialogue manager 162 also issues API calls to query the user database 130 regarding registered users. Any possible matches are also stored in the dialogue state 164, as a list of candidate users. The references information 244 is therefore populated with a list of candidate users, which in this example comprises User 1 244-1, User 2 244-2 and User 3 244-3. The dialogue state 164 is then sent to the identification and verification module 170, which identifies and verifies the user based on the hypotheses information 242 and the reference information 244.
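After the three question/answer turns above, the populated dialogue state might take a shape along these lines. The keys and structure here are illustrative assumptions, not the embodiment's actual data model:

```python
# Illustrative dialogue state after the flow of FIG. 2; field names and
# the overall layout are invented for this sketch.
dialogue_state = {
    "hypotheses": {                    # populated from the user's answers
        "postcode": ["CB1 3PQ"],
        "name": ["John Smith"],
        "date_of_birth": ["30/11/1989"],
    },
    "references": [                    # candidate users from the user database
        {"id": "user_1"},
        {"id": "user_2", "name": "John Smith", "postcode": "CB1 3PQ",
         "date_of_birth": "30/11/1989"},
        {"id": "user_3"},
    ],
}
```

Each hypothesis field holds a list because, as described earlier, multiple proposed values may be derived for one user data field.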
A non-greedy approach is used, in which the dialogue manager 162 first collects all information relevant to identification and/or verification and then provides the information to the identification and verification module 170. The information is provided to the identification and verification module 170 after the question asking steps are performed. The system executes the identification and/or verification after all relevant items have been collected. This non-greedy approach can increase recall and can also allow for dynamic domain adaptation, for example by biasing the ASR module 110 for candidate usernames, addresses, etc. The dialogue manager 162 also maintains N-best lists from all sub-components, including the ASR module 110, the phonetic processing module 150 (where used), the API results from the user database 130 for an identification task, and the value extraction module 142, for example. This can help to increase recall. In the present case, the user is identified and verified as being 'User 2' 244-2. In the confirmation step 230, "I have identified and verified you as user 2" is output by the system. FIG. 3 illustrates a method 300 for verifying a user according to an embodiment. The example method may be implemented as one or more computer-executable instructions executed on one or more computing devices, e.g. the computing device 700 described in relation to FIG. 7. The example method may be performed by the identification and verification system 100 described in relation to FIG. 1. Iterations are indicated via dotted lines. For a verification task, the user is compared with a single candidate user, being the registered user that the system needs to verify against in order to perform some subsequent steps, which might be providing access to an authenticated system such as a user account for example. At step 310, input data is received. The input data comprises speech data from a user.
At step 320, one or more hypotheses for each of a plurality of user data fields are derived from the speech data. The one or more hypotheses may be derived from the speech data using any of the methods described in relation to the system 100 of FIG. 1. For example, the one or more hypotheses for a given user data field may be derived from the speech data by performing automatic speech recognition, extracting values, and/or parsing the extracted values, for each user utterance. As described in relation to the natural language understanding module 140 of FIG. 1, multiple proposed values for a given user data field may be derived from the speech data because a value may be extracted for the given user data field for each of multiple ASR outputs for an input user utterance. Multiple proposed values for a given user data field may alternatively or additionally be derived from the speech data because there may be multiple valid transformations of a value extracted from the speech data, as described in relation to the parser 144 of FIG. 1. For each user data field, a set of one or more hypotheses is stored, comprising the proposed value(s) and the ASR output(s) for example. Steps 310 and 320 may be iterated several times, with the dialogue manager 162 providing an output to the user each iteration, requesting information to populate the user data fields. Once all of the user data fields required for verification are populated, the method moves to step 330. For each candidate user and for each user data field, the system performs a comparison with all hypotheses. As has been described previously, the set of hypotheses includes one or more proposed values. The one or more proposed values include one or more extracted values. The one or more proposed values may additionally include one or more parsed values. The set of hypotheses may additionally include one or more ASR outputs.
Then, the system assigns the user data field a score corresponding to a float value in [0, 1] for each candidate user, indicating its similarity to the best matching hypothesis. A score of 1 indicates exact similarity; a score of 0 indicates strict dissimilarity; scores with values in between indicate levels of approximate similarity. The final score for each candidate user is calculated by evaluating a logical expression for all user data fields according to a fuzzy logic algebra. At step 330, a score is calculated for the candidate user. Step 330 includes a user data field scores calculation step 332 and a fuzzy logic operations performance step 336. In this example, the candidate user is the registered user against which the user is to be verified. In other words, the method determines whether the user is verified as the registered user. In step 332, a plurality of user data field scores are calculated. As part of step 332, a step 334 is performed for each of the plurality of user data fields. In step 334, a user data field score is calculated using the one or more hypotheses for the data field. Where, for the user data field, there is one proposed value and one reference value for the candidate user, the user data field score may be calculated by performing a fuzzy comparison between the proposed value and the reference value for the candidate user. The user data field score may be the result of the fuzzy comparison, e.g. a fuzzy comparison score. The fuzzy comparison may be performed by applying a fuzzy comparison operator. Fuzzy comparison operators include fuzzy extensions of binary comparison operators. A binary comparison operator provides a binary output, for example a 1 (which indicates a match) or a 0 (which indicates a non-match). A fuzzy comparison operator provides an output with more than two possible states, indicating the degree of match, for example a value from 0 to 1.
For example, an equality operator is a binary comparison operator, which outputs a 1 if the input text is the same as the reference text, and a 0 if the input text is not the same as the reference text. The similarity operator, such as an edit distance, may be considered a fuzzy extension of the equality operator, where the similarity operator outputs a value from 0 to 1 indicating the similarity between the input text and the reference text. As another example, the Python 'in' operator or the Java String 'contains' operator are binary comparison operators, which output a 1 if the reference text is contained within the input text and a 0 if the reference text is not contained within the input text. A fuzzy containment operator may be considered a fuzzy extension of the Python 'in' operator, where the containment operator outputs a value between 0 and 1 indicating the extent to which the reference text is contained within the input text. In this example, the input text is the proposed value; however, the input text may alternatively be the ASR output for example. The reference text is the reference value stored for the user data field. Examples of fuzzy comparison operators include fuzzy text similarity operators, fuzzy text containment operators, and fuzzy date comparison operators. The result of applying a fuzzy comparison operator may be a fuzzy comparison score. A threshold may be applied to the fuzzy comparison score to obtain the user data field score, whereby the user data field score is set to zero if the fuzzy comparison score is below a threshold value. The threshold value may be different for different user data fields. For example, the threshold value for a name user data field may be in the range 0.5-0.66. Using a threshold value of 0.5 may provide greater recall whereas using a threshold value of 0.66 or higher may provide greater security. One or more configurable fuzzy comparison operators may be provided.
For example, the system may store a library of fuzzy comparison operators which may be suitable for different user data fields. For example, text comparisons based on similarity or containment, and based on orthographic text or phonetic text, can be included. Date comparisons based on orthographic text comparisons, phonetic text comparisons, or temporal comparisons can be included. Fuzzy logic algebra (AND, OR, NOT) can be used to combine the scores. A fuzzy text similarity score is a measure of the similarity of the input text string to the reference value text string. A fuzzy text similarity operation may comprise a determination of an edit distance. An edit distance measures the closeness or similarity of the input text to the reference text in terms of the number of operations required to convert the input text into the reference text. The operations may comprise: insertion of a character, deletion of a character, or substitution of a character, for example. An example fuzzy text similarity operation involves calculating the Levenshtein edit distance between the input text and the reference text value using the Wagner-Fischer algorithm, normalizing the Levenshtein edit distance by the length of the longer of the two texts to obtain a dissimilarity score, and subtracting the dissimilarity score from one to obtain a fuzzy similarity score. A score between 0 and 1 is generated, with a score closer to 1 indicating higher similarity. The fuzzy text similarity score may be used as a fuzzy comparison score. Although the Wagner-Fischer algorithm is described here as an example, alternative methods of performing an edit distance calculation may be used. Calculating the edit distance is computationally expensive. The calculated edit distance and/or fuzzy text similarity score for a given input text string and a given reference value text string may be cached during or after the calculation of the fuzzy text similarity score.
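The fuzzy text similarity operation just described can be sketched in a few lines, assuming character-level edits with unit costs (a minimal illustration, not the embodiment's implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Levenshtein edit distance via the Wagner-Fischer dynamic programme,
    keeping only two rows of the table at a time."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def fuzzy_similarity(a: str, b: str) -> float:
    """Normalise the distance by the longer text and subtract from one,
    as described in the text, giving a score in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```

For instance, "smith" versus "smyth" has one substitution over five characters, giving a similarity of 0.8.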
Thus, if a fuzzy text similarity score for these same two texts is requested, the cached edit distance and/or cached fuzzy text similarity score may be used. Hence, the edit distance and/or the fuzzy text similarity score is not recalculated and computational resources are consequently saved. The cache may have a limited capacity, e.g. the cache may be limited to storing a maximum number of edit distances and/or fuzzy text similarity scores, and/or to using a maximum amount of memory. In this situation, the edit distances and/or fuzzy text similarity scores for combinations of proposed value text string and reference value text string that are frequently requested may be kept in the cache, whereas the edit distances and/or fuzzy text similarity scores for rarely requested combinations may be discarded from the cache. A fuzzy text containment operator is a measure of the extent to which the reference value is contained within the input text (the hypothesis). Applying an example fuzzy text containment operator to two values including an input text and a reference value involves: performing a modified version of Sellers' variant of the Wagner-Fischer algorithm to obtain an optimal Levenshtein edit distance and the span that would produce it; normalizing the Levenshtein edit distance by the maximum of the length of the span and the length of the reference value to get a 'not-containment' score; and subtracting the not-containment score from one to get a fuzzy text containment score. The fuzzy text containment score may be used as a fuzzy comparison score. The fuzzy text containment operator may comprise an approximate string search. Although Sellers' algorithm is described here as an example, alternative methods of performing approximate string matching may be used.
The fuzzy text containment operator may be likely to return a higher value for a comparison of the reference value to an ASR output from the N-best list, for example. For example, a user's name may be 'John Swymsphast', the ASR hypothesis may be "My last name is swims fast", and the name value extracted from the ASR hypothesis by the value extractor may be "Swims". A fuzzy comparison of the extracted name value with the reference value will have a low fuzzy comparison score. However, performing a fuzzy text containment operation on the user's name and the ASR hypothesis would result in a significantly higher fuzzy comparison score, as many characters of the user's name are contained in the ASR hypothesis. Performing a fuzzy text containment operation on a phonetic transcription of the user's name and a phonetic transcription of the ASR hypothesis may result in an even higher fuzzy comparison score, as 'Swymsphast' and "swims fast" may be given similar or identical phonetic transcriptions. In Sellers' variant of the Wagner-Fischer algorithm, an m×n table is generated in the determination of the minimum edit distance between spans of the input text and the reference value, as well as the length of the span from which this minimum edit distance was derived. m is the length of the reference value and corresponds to the number of rows in the table. n is the length of the input text and corresponds to the number of columns in the table. The entries in the first row of the table are set to zero. The rest of the table is populated in the same manner as defined in the Wagner-Fischer algorithm. The lowest value entry in the bottom row corresponds to the minimum edit distance. Where two entries in the bottom row have the same value, the entry corresponding to the shorter span can be selected, for example. The selected entry in the bottom row gives the end-position of the match, and the edit distance.
To determine the span corresponding to the match, the number of insertions and deletions is also stored during generation of the table. Sellers' variant of the Wagner-Fischer algorithm outputs the edit distance and the span length corresponding to the edit distance. The modified version of Sellers' variant of the Wagner-Fischer algorithm additionally generates a second m×n table, which stores the operations used to obtain the edit distance. Using the second m×n table, the exact span of the input text having the minimum edit distance may be obtained, making the calculated fuzzy containment score explainable. Calculating the optimal edit distance is computationally expensive. The calculated optimal edit distance and/or the fuzzy text containment score for a given input and a given reference value string may be cached during or after the calculation of the fuzzy text containment score. Thus, if a fuzzy text containment score for these same two texts is requested, the cached optimal edit distance and/or cached fuzzy text containment score may be used. Hence, the optimal edit distance and/or the fuzzy text containment score is not recalculated and computational resources are consequently saved. The cache may have a limited capacity, e.g. the cache may be limited to storing a maximum number of optimal edit distances and/or fuzzy text containment scores, and/or to using a maximum amount of memory. In this situation, the optimal edit distances and/or fuzzy text containment scores for frequently requested combinations may be kept in the cache, whereas those for rarely requested combinations may be discarded from the cache. The fuzzy text similarity operator and/or fuzzy text containment operator may have one or more parameters.
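The core of the Sellers-style containment computation can be sketched as follows. This simplified version normalises only by the reference length and omits the span-length tracking and operations table described above, so it is a lower-fidelity illustration of the idea rather than the described method:

```python
def fuzzy_containment(reference: str, text: str) -> float:
    """Approximate-substring score: how well `reference` is contained
    somewhere in `text`. The first DP row is all zeros, so a match may
    start at any position; the minimum of the final row is the best
    edit distance over all spans of `text`."""
    m, n = len(reference), len(text)
    if m == 0:
        return 1.0
    prev = [0] * (n + 1)          # row 0: a match may start anywhere
    for i in range(1, m + 1):
        curr = [i] + [0] * n      # column 0: delete i reference characters
        for j in range(1, n + 1):
            curr[j] = min(prev[j] + 1,                              # deletion
                          curr[j - 1] + 1,                          # insertion
                          prev[j - 1] + (reference[i - 1] != text[j - 1]))
        prev = curr
    return 1.0 - min(prev) / m    # subtract the 'not-containment' score
```

With the earlier example, "smith" is contained exactly in "my name is smith" (score 1.0), while "smyth" scores 0.8 because its best span needs one substitution.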
Various parameters of these operators may be configurable, for example max global errors, max errors in window, and a threshold. For example, the parameters may specify that at least every other character must match, or the comparison score for the input text and reference value is set to zero. As another example, if there are more than a maximum number of character differences between the input text and reference value, the comparison score is set to zero. As a further example, if an initially calculated comparison score does not meet a threshold, the resulting comparison score is set to zero. As an additional example, if within a given window, e.g. a continuous sequence of characters, of the reference value text and/or the input text there are a number of differences (e.g. errors) exceeding a threshold, the resulting comparison score is set to zero. Fuzzy date comparison operators may compare two dates based on the temporal difference between the two dates. For example, the fuzzy date comparison operator may calculate a fuzzy date similarity score based on a number of days, months, and/or years between the two dates. Fuzzy comparison operators for date comparisons are described in relation to FIG. 8. Where dates are represented as an alphabetic, a numeric, or an alphanumeric string, fuzzy text comparison operators such as those described above may also be used for comparing dates. Any combination of the above fuzzy comparison operators and any other fuzzy comparison operators can be used. Where, for a user data field, multiple fuzzy comparison operators are used, the user data field score may be the maximum of the fuzzy comparison scores determined for the user data field. This is equivalent to performing an OR operation across the fuzzy comparison scores using Zadeh fuzzy logic.
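As one illustration of a temporal fuzzy date comparison, a score could decay linearly with the day gap between the two dates. The 30-day scale below is an invented parameter, since the text leaves the exact weighting of days, months, and years open:

```python
from datetime import date

def fuzzy_date_score(proposed: date, reference: date,
                     scale_days: int = 30) -> float:
    """Illustrative temporal comparison: 1.0 for identical dates,
    decaying linearly to 0.0 as the gap approaches `scale_days` days.
    The scale is an assumption, not taken from the described system."""
    gap = abs((proposed - reference).days)
    return max(0.0, 1.0 - gap / scale_days)
```

Under this sketch, a date three days off the reference (e.g. a mis-recognised day of the month) still scores 0.9, while dates years apart score 0.0.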
For example, where there is one proposed value and one reference value for a candidate user for the user data field, and a fuzzy text similarity and fuzzy text containment score are determined, the user data field score is taken as the maximum of the fuzzy text similarity score and the fuzzy text containment score. Further constraints may be applied when determining the user data field score. An example of a further constraint might be that if the fuzzy text similarity score is below a predetermined threshold, the comparison score for the comparison of the proposed value and reference value is 0, regardless of the values of any other fuzzy comparison scores. Where, for a user data field, there are multiple hypotheses and/or multiple reference values for the candidate user, a fuzzy comparison between each of the relevant value combinations is performed. For example, a fuzzy comparison between each of any proposed orthographic values and each of any reference orthographic values for the candidate user is performed. The user data field score is then taken as the maximum of these fuzzy comparison scores. Finding the maximum of these fuzzy comparison scores is equivalent to performing an OR operation using Zadeh fuzzy logic across these fuzzy comparison scores. An example of how a fuzzy comparison between any phonetic values may be performed will be described in more detail in relation to FIG. 6 below. For example, where there is one proposed orthographic value and multiple reference orthographic values for the candidate user, a fuzzy comparison between the proposed value and each of the one or more reference values is performed. The user data field score is taken as the maximum of these fuzzy comparison scores. An example method for the calculation of a user data field score in a case where there are multiple proposed values is further described in relation to the example method 500 of FIG. 5.
Where there are multiple hypotheses, the fuzzy comparisons may be performed in order, for example in the order of the proposed values in the N-best or M-best list, e.g. the proposed value having the highest likelihood may be compared with the reference values first. An early stopping procedure may be included in the determination of the user data field score where the Zadeh OR operator is used for combining the fuzzy comparison scores for the user data field. If one of the fuzzy comparison scores for the user data field is 1, it is not necessary to calculate the remaining fuzzy comparison scores, since only the maximum score will be used. Thus, an implementation using early stopping may be employed, in which if one of the fuzzy comparison scores for the user data field is 1, the remaining fuzzy comparison scores for the user data field are not calculated. This may save computational resources. By ordering the values such that the most likely hypotheses are compared first, it is more likely that a hypothesis for which the fuzzy comparison score is 1 is compared before others. Hence, early stopping is likely to occur earlier in the process of calculating the fuzzy comparison scores, saving further computational resources. For example, in an implementation using early stopping, a fuzzy comparison is performed between a proposed value for a user data field and a reference value for the candidate user to calculate a fuzzy comparison score. If the fuzzy comparison score is 1, this fuzzy comparison score is used as the user data field score and no further fuzzy comparison scores are determined for the user data field for the candidate user. Otherwise, further fuzzy comparison scores may be determined, for example using different fuzzy comparison operations, other hypotheses for the user data field, or other reference values for the candidate user for the user data field.
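Putting the ordering and early-stopping behaviour together, a field-level scorer might look like this sketch; the function signature and threshold handling are illustrative assumptions:

```python
def field_score(hypotheses, references, compare, threshold=0.0):
    """User data field score: the maximum fuzzy comparison score over all
    hypothesis/reference pairs (a Zadeh OR), stopping early on a perfect
    match. `hypotheses` is assumed to be ordered most likely first, so
    early stopping tends to trigger sooner."""
    best = 0.0
    for hyp in hypotheses:
        for ref in references:
            score = compare(hyp, ref)
            if score == 1.0:
                return 1.0              # no higher score is possible
            best = max(best, score)
    return best if best >= threshold else 0.0   # per-field threshold
```

Any fuzzy comparison operator with the signature `compare(hypothesis, reference) -> float` can be plugged in, e.g. a similarity, containment, or date operator.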
If each of the fuzzy comparisons is performed for each of the relevant combinations of the hypotheses and reference values, and none of the resulting fuzzy comparison scores are 1, the maximum of the fuzzy comparison scores is taken as the user data field score. In step 336, one or more fuzzy logic operations are performed on the plurality of user data field scores for the candidate user. The one or more fuzzy logic operations may be any of the fuzzy logic operations described in relation to the fuzzy logic module 174 of FIG. 1. The result of performing the one or more fuzzy logic operations on the plurality of user data field scores is a score for the candidate user. Early stopping may be used in determining the score for the candidate user where the Zadeh operators are being used for performing fuzzy logic. This may save computational resources. An example implementation using early stopping and the Zadeh operators for fuzzy logic will now be described. The score for the candidate user is determined by performing one or more fuzzy OR operations and/or one or more fuzzy AND operations on the user data field scores. A fuzzy OR operation is applied to two inputs. The two inputs may be two user data field scores; two results of fuzzy logic operations on user data field scores; and/or a user data field score and a result of a fuzzy logic operation on user data field scores. One of these inputs may be calculated before the other, e.g. a user data field score may be calculated before another user data field score, or one fuzzy logic operation may be performed before another. If the first computed of the two inputs is 1, then the result of the OR operation (the maximum of these values) will be 1. Hence, the first of these computed inputs may be used as the result of the fuzzy OR operation without computing the other input. A fuzzy AND operation is applied to two inputs.
The two inputs may be two user data field scores; two results of fuzzy logic operations on user data field scores; and/or a user data field score and a result of a fuzzy logic operation on user data field scores. One of these inputs may be calculated before the other, e.g. a user data field score may be calculated before another user data field score or one fuzzy logic operation may be performed before another. If the first of the two inputs to be computed is 0, then the result of the AND operation (the minimum of these values) will be 0. Hence, the first computed input may be used as the result of the fuzzy AND operation without computing the other input. As was described previously in relation to step334, the user data field score calculation module172may apply a threshold to the user data field score such that calculated user data field scores below a specified value are set to zero. If this has occurred, then early stopping may take place during calculation of the score for the candidate user. A further example of early stopping is described below. In this example, the logical expression for calculating the total candidate user score is: (Score(Name) AND Score(Postcode)) AND Score(Telephone Number) Score(Name) is the score for the ‘Name’ user data field. Score(Postcode) is the score for the ‘Postcode’ user data field. Score(Telephone Number) is the score for the ‘Telephone Number’ user data field. If Score(Name) is 0 then the total user score must be 0, as the total score is the minimum of the user data field scores. In this case, computation of the rest of the logical expression may be skipped, and Score(Name), i.e. 0, used as the total candidate user score. The resulting candidate user score indicates the confidence level that the information provided is adequate to verify the user as the registered user. To provide a binary output of whether the user is verified or not, a further step of checking whether the candidate user score is above or below a threshold may be performed, as described below.
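The short-circuit evaluation of the Zadeh AND described above may be sketched as follows. The names are illustrative; each field score is supplied as a zero-argument callable so that scores whose value cannot affect the result are never computed:

```python
def zadeh_and_lazy(first: float, second_fn) -> float:
    """Zadeh AND (minimum) with early stopping: if the first input is 0,
    the result must be 0, so the second input is never computed."""
    if first == 0.0:
        return 0.0
    return min(first, second_fn())

def candidate_score(score_name, score_postcode, score_phone) -> float:
    # Evaluates (Score(Name) AND Score(Postcode)) AND Score(Telephone Number).
    # Each argument is a zero-argument callable, so unused scores are skipped.
    inner = zadeh_and_lazy(score_name(), score_postcode)
    return zadeh_and_lazy(inner, score_phone)
```

For example, if the name score is 0, the postcode and telephone number scores are never calculated and the total candidate score is returned as 0 immediately.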
At step340, it is determined whether the score for the at least one candidate user meets a verification threshold. In some implementations, the score for the at least one candidate user may meet the verification threshold if it is greater than the verification threshold. In some implementations, the score for the at least one candidate user may meet the verification threshold if it is greater or equal to the verification threshold. The verification threshold may be a value greater than the minimum possible score and less than the maximum possible score. For example, the score may be in the range 0 to 1 (inclusive) with 0 being the minimum possible score and 1 being the maximum possible score, and the verification threshold may be a value between 0 and 1. The verification threshold may be set according to security requirements and/or usability requirements, e.g. security requirements and/or usability requirements of a system to which authenticated access is being provided. The threshold can be adjusted to choose a desired operation point for a trade-off between a security level and a level of recall. For example, the security requirements for a telephone help IVR system may be lower than that for an e-banking IVR system, so the verification threshold used for a telephone help system may be lower than that used for an e-banking IVR system. Higher verification thresholds provide greater levels of security but may impair usability of the system, as non-exact matches resulting from ASR errors are more likely to cause verification to fail. Thus, the legitimate user may need to reattempt verification multiple times to gain access to the system or, at worst, may not be able to gain access to the system. Lower verification thresholds provide lower levels of security but improve the usability of the system as minor ASR errors are less likely to cause verification to fail. 
Thus, the user may be able to access the system in one attempt or at least in fewer attempts as compared to a system using a higher verification threshold. Where the score is in the range 0 to 1, the verification threshold may be in the range of 0.66-0.9 for example. A verification threshold of 0.66 may provide enhanced usability, whereas a verification threshold of 0.9 may provide enhanced security. In addition to determining whether the score for the at least one candidate user meets the verification threshold, it may be determined whether one or more further verification thresholds are met, with a level of confidence in the verification depending on whether the one or more further verification thresholds are met. The one or more further verification thresholds may have values greater than the verification threshold and may correspond to higher levels of confidence. At step350, in response to determining that the score for the candidate user meets the verification threshold, it is verified that the user is the candidate user. Where one or more further verification thresholds are used, a level of confidence for the verification may be assigned. For example, if the score does not meet the verification threshold then the user has not been verified; if the score meets the verification threshold but not the one or more further verification thresholds then the user has been verified with a moderate confidence; and if the score meets the one or more further verification thresholds then the user has been verified with a higher confidence. Where the one or more further verification thresholds are a plurality of further verification thresholds then the score meeting those of the plurality of further verification thresholds with greater values may correspond to higher levels of confidence in the verification. The level of confidence in the verification may be stored for auditing and/or debugging purposes.
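The tiered threshold scheme described above may be sketched as follows, using a single further verification threshold for simplicity. The threshold values and labels are illustrative only; in practice they would be chosen for the security/usability trade-off of the target application:

```python
def verification_level(score: float,
                       verification_threshold: float = 0.66,
                       further_threshold: float = 0.9) -> str:
    """Map a candidate user score in [0, 1] to a verification outcome.

    Scores below the verification threshold fail; scores between the two
    thresholds verify with moderate confidence; scores at or above the
    further threshold verify with higher confidence.
    """
    if score < verification_threshold:
        return "not verified"
    if score < further_threshold:
        return "verified (moderate confidence)"
    return "verified (higher confidence)"
```

With a plurality of further thresholds, the same pattern extends to additional confidence levels, one per threshold met.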
At optional step360, a confirmation that an identity of the user has been verified is outputted to the user. The candidate user score or level of confidence in the verification may be outputted to the user. At optional step370, access is provided to an authenticated system. The authenticated system may be a further IVR system. For example, the authenticated system may be a voice-based search IVR system, a recommendation IVR system, a booking IVR system, a telephone help IVR system, a customer service IVR system, and/or an e-banking IVR system. If it is determined that the score for the candidate user does not meet the verification threshold, it is not verified that the user is the candidate user. An optional step of informing the user that the identity of the user has not been verified may be performed. Access is not provided to the authenticated system. The system may take some further action, such as handing over to a human operator. Where one or more further verification thresholds are used, the degree of access provided to the authenticated system may depend on the level of confidence in the verification. For example, in an e-banking IVR system, the balance of a bank account may be provided where there is a medium level of confidence or greater in the verification, e.g. the verification threshold is met, but, to perform a money transfer, a greater level of confidence in the verification may be required, e.g. the one or more further verification thresholds may need to be met. If the verification is not at the required level of confidence to perform the operation then the verification process or parts thereof may be repeated such that the user may be verified at the required level of confidence, or, the user may be transferred to a human operator who can perform the verification manually. In the above described method, fuzzy logic allows soft verification with approximately matching information. 
Fuzzy logic quantifies the uncertainty of the approximate matches, so that a verification threshold representing the desired recall/security trade-off for the application can be used. FIG.4illustrates an example method1400for identifying a user according to an embodiment. The example method may be implemented as one or more computer-executable instructions executed on one or more computing devices, e.g. the computing device700described in relation toFIG.7. The example method may be performed by the identification and verification system100described in relation toFIG.1. Iterations are indicated via dotted lines. At step S410, input data is received. The input data comprises speech data from a user. At step S420, one or more hypotheses for each of a plurality of user data fields are derived from the speech data. The one or more hypotheses for each of the plurality of user data fields may be derived using any of the methods described in relation to step320ofFIG.3. Once the proposed values have been collected, for each candidate user of a plurality of candidate users, step S430is performed. In step S430, a score is calculated for the respective candidate user. Step S430includes a user data field scores calculation step S432and a fuzzy logic operations performance step S436. In step S432, a plurality of user data field scores are calculated for the candidate user. As part of step S432, a step S434is performed for each of the plurality of user data fields. In step S434, a user data field score is calculated using the one or more proposed values for the user data field, the ASR output, and the one or more reference values for the at least one candidate user for the user data field. User data field scores may be calculated using any of the methods described in relation to step334ofFIG.3. 
As described in relation to step334, a threshold may be applied to a fuzzy comparison score to obtain the user data field score, whereby the user data field score is set to zero if the fuzzy comparison score is below a threshold value. In step S436, one or more fuzzy logic operations are performed on the plurality of user data field scores. The one or more fuzzy logic operations performed on the plurality of user data field scores may be any of the fuzzy logic operations described in relation to step336ofFIG.3and may be performed in the same or a similar manner. At step S440, a candidate user of the plurality of candidate users having a maximum score is determined. At step S450, the user is identified as the candidate user having the maximum score. An early stopping implementation may be performed, in which if a score for a candidate user is determined as 1, no further candidate user scores are determined, and the candidate user is identified as the user. For the identification task, the candidate users are ranked according to their computed fuzzy scores. The ranking in the list and the score indicates the confidence that the system has identified the correct user. The top-ranking candidate user is considered as the best match for the identity of the system's user. The rest of the items can also be provided as an output if a subsequent application requires an n-best list of candidate identities for example. If the list is empty, the system concludes that the user could not be identified. An optional step of providing an output to the user indicating that they could not be identified is performed. The system may take some further action, such as handing over to a human operator. Access to an authenticated system is not provided. In tandem with or subsequent to identifying the user, verification may be performed in the manner described in relation to steps340and350ofFIG.3. 
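The ranking and early-stopping behaviour of the identification task described above may be sketched as follows. The names are illustrative, and each candidate's score is supplied as a zero-argument callable so that, once a score of 1 is found, the remaining candidate scores are never computed:

```python
def identify(candidate_scores):
    """Rank candidate users by fuzzy score, best first.

    `candidate_scores` is an iterable of (user_id, score_fn) pairs, where
    score_fn lazily computes that candidate's score. If a score of 1 is
    found, remaining candidates are skipped (early stopping). An empty
    result means the user could not be identified.
    """
    scored = []
    for user_id, score_fn in candidate_scores:
        s = score_fn()
        if s == 1.0:
            return [(user_id, 1.0)]      # exact match: identify immediately
        scored.append((user_id, s))
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

The top-ranking entry is the best match for the user's identity; the full list can serve as an n-best list for a subsequent application.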
For example, it may be determined whether the score of the identified user meets a verification threshold, and the user is verified if the score of the identified user meets the verification threshold. At optional step S460, a confirmation that the user has been identified is provided. Where the user is additionally being verified, the optional step S460may be performed in response to the user being verified in addition to identified, e.g. the user must be both identified and verified for the confirmation to be provided. At optional step S470, access to an authenticated system may be provided. The authenticated system may be any of the authenticated systems described in relation to step370. Where the user is additionally being verified, the optional step S470may be performed in response to the user being verified in addition to identified, e.g. the user must be both identified and verified for access to the authenticated system to be provided. FIG.5illustrates an example method500for calculating a user data field score which may be used in a method according to an embodiment. The example method may be implemented as one or more computer-executable instructions executed on one or more computing devices, e.g. the computing device700described in relation toFIG.7. The example method may be performed by the identification and verification system100described in relation toFIG.1, e.g. by the user data field score calculation module152. Iterations are indicated via dotted lines. At step510, a plurality of hypotheses are derived for the user data field. The plurality of hypotheses may be derived from the speech data using any of the methods described in relation to the system100ofFIG.1and/or any of the methods described in relation to step320ofFIG.3for example. 
For example, where the ASR module110outputs multiple ASR outputs corresponding to the user input, a proposed value corresponding to each ASR hypothesis may then be extracted for the user data field by the NLU module140. At step520, one or more hypothesis scores are calculated. As part of step520, a step522is performed for each of one or more of the plurality of hypotheses. The step522may be performed for each of the hypotheses in the order in which the hypotheses are provided. For example, the proposed values are provided to the identification and verification module170in the order of the N-best list output from the ASR module110. The proposed value scores are then determined in this order. Where the ASR output is also used in the determination of the user data field score, a score is also generated for each of the ASR outputs corresponding to the user data field, in the same manner as the proposed value scores are calculated. In step522, a hypothesis score is calculated. The hypothesis score may be calculated in accordance with the methods described in relation to step334ofFIG.3. In this step, a fuzzy comparison score is generated using each of the implemented fuzzy comparison operators and each of the relevant reference values. Where there are multiple relevant reference values for the user data field for the candidate user, early stopping may be performed during the calculation of the hypothesis score in accordance with methods described in relation to step334ofFIG.3. Early stopping may be employed in the calculation where the Zadeh operators are being used for performing fuzzy logic. Where a hypothesis score is 1, further hypothesis scores for the user data field are not calculated, which may save computational resources. For example, a first hypothesis score is calculated for a first of the plurality of hypotheses by performing step522. If the hypothesis score is 1 then no further hypothesis scores are computed. 
By calculating the hypothesis scores in the order corresponding to that of the N-best list output from the ASR module, it is more likely that early stopping may be used. Where the one or more hypothesis scores are a single hypothesis score, the user data field score is that hypothesis score. Where the one or more hypothesis scores are a plurality of hypothesis scores, step530is performed. At step530, the maximum of the hypothesis scores is taken as the user data field score. This corresponds to application of the Zadeh fuzzy OR operator. FIG.6illustrates an example method600for deriving a user data field score which is used in a method according to an embodiment in which a phonetic comparison is performed. The example method may be implemented as one or more computer-executable instructions executed on one or more computing devices, e.g. the computing device700described in relation toFIG.7. The example method may be performed by the identification and verification system100described in relation toFIG.1, e.g. by the phonetic processing module150. Iterations are indicated via dotted lines. In this example, orthographic fuzzy matching is complemented with phonetic fuzzy matching. In alternative examples, phonetic fuzzy matching may be performed without orthographic fuzzy matching. At step610, one or more reference phonetic values are obtained for the user data field for the candidate user. The one or more reference phonetic values may be obtained using the ASR module110or the phonetic processing module150, or the reference phonetic values may be stored for the candidate user in advance and retrieved in step610. In this example, step610includes a grapheme to phoneme conversion step612. In step612, a grapheme to phoneme conversion is performed on one or more orthographic reference values corresponding to the user data field.
The grapheme to phoneme conversion may be performed using any of the methods described in relation to the phonetic processing module150in relation toFIG.1. The result of the grapheme to phoneme conversion may be a phonetic text value, e.g. a text value in a phonetic alphabet such as the International Phonetic Alphabet or Speech Assessment Methods Phonetic Alphabet (SAMPA). Step612may include performing step614for each of at least a subset of the one or more orthographic reference values. In step614, one or more phonetic reference values are derived from each orthographic reference value. A plurality of phonetic text values may be derived from each of the orthographic text values. At step620, a phonetic user data field score for the user data field is calculated based on the obtained one or more phonetic reference values and a set of one or more proposed phonetic values and/or one or more phonetic texts. Phonetic values and phonetic text are obtained in the manner described in relation toFIG.1above. The same fuzzy similarity and containment text operators described above can be used to compare phonetic text. The phonetic user data field score may be calculated using any of the techniques described in relation to the step332ofFIG.3and/or the method500ofFIG.5. Fuzzy phonetic text operators, including similarity and/or fuzzy containment operators, can be used to compare the proposed phonetic values or text with the reference phonetic values. Proposed phonetic values or text can be obtained using speech-to-phoneme based speech recognition or reconstructed using the phonetic processing module150. Various parameters of these operators may be configurable, for example max global errors, max errors in window, and the threshold as described above. For example, the parameters may specify that at least every other phoneme must match, or the comparison score for the proposed phonetic value and reference phonetic value is set to 0. 
As another example, if the result of the similarity or containment operator is a score below a threshold then the comparison score is set to 0. For example, for a phonetic name field, the threshold may be a value in the range 0.5-0.66. At step620, an orthographic user data field score for the user data field is also calculated based on the one or more orthographic reference values and the proposed one or more orthographic values and ASR output. The orthographic user data field score for the user data field may be calculated using any of the techniques described in relation to the step332ofFIG.3and/or the method500ofFIG.5. At optional step630, a fuzzy logic operation is performed on the phonetic user data field score and the orthographic user data field score. The fuzzy logic operation may be a Zadeh fuzzy OR operation. The result of applying the fuzzy OR operator to the phonetic user data field score and the orthographic user data field score is the maximum of these user data field scores. In the above described example, one or more proposed phonetic values and/or phonetic texts are included in the set of hypotheses. The hypotheses comprising phonetic text or values are used to determine a phonetic user data field score and the hypotheses comprising orthographic text or values are used to determine an orthographic user data field score. The proposed phonetic values may be generated from the proposed orthographic values. The phonetic texts may be generated from the orthographic ASR outputs, or the ASR module may alternatively extract the phonetic texts directly from the user input. An example of the calculation of candidate user scores in an identification and verification scenario is now described. In the example scenario, details of two registered users have been obtained from a user database in response to the obtained proposed values. 
The first registered user has an orthographic reference value for the “first name” user data field of ‘John’ and an orthographic reference value for the “surname” user data field of ‘Smith’. The second registered user has an orthographic reference value for the “first name” user data field of ‘Joan’ and an orthographic reference value for the “surname” user data field of ‘Schmidt’. References are built for each of these users from information in the user database130. A reference User_1 is built for the first user. A reference User_2 is built for the second user. This may be represented using pseudo-code as follows:
User_1={name: John, surname: Smith}→[Reference(first_name=John), Reference(surname=Smith)]
User_2={name: Joan, surname: Schmidt}→[Reference(first_name=Joan), Reference(surname=Schmidt)]
The system then builds hypotheses based on the proposed values extracted from the dialogue flow with the user using the techniques previously described. In the example scenario, the user utterance is “My name is John Smith”, and the output of the ASR is “My name is Jo Smith”, where in this example, N=1. From this the proposed value ‘Jo’ is extracted for the user data field “name” and the proposed value ‘Smith’ is extracted for the user data field “surname” by the NLU module140. A hypothesis is built from these extracted values. This may be described using pseudo-code as follows:
Utterance=“My name is Jo Smith”→[Extracted(first_name=Jo), Extracted(surname=Smith)]→[Hypothesis(first_name=Jo), Hypothesis(surname=Smith)]
In this example, N=1 and therefore a single proposed orthographic value is extracted for each of the two user data fields used by the identification and verification. Phonetic comparison is not used, and therefore a single proposed orthographic value is extracted for each user data field. Comparison of the ASR output is also not used. The system then evaluates user data field scores for each user data field and each candidate user.
Hence, ‘first name’ and ‘surname’ user data field scores are calculated for both of the candidates User_1 and User_2. In the example scenario, the ‘first name’ user data field score is 0.5 for User_1 and 0.5 for User_2; and the ‘surname’ user data field score is 1.0 for User_1 and 0.2 for User_2. This may be described using pseudo-code as follows:
score_first_name(User_1)=best_score(first_name hypothesis matches first_name reference for User_1)=0.5 (score of ‘Jo’ hypothesis matching with ‘John’ reference)
score_first_name(User_2)=best_score(first_name hypothesis matches first_name reference for User_2)=0.5 (score of ‘Jo’ hypothesis matching with ‘Joan’ reference)
score_surname(User_1)=best_score(surname hypothesis matches surname reference for User_1)=1.0 (score of ‘Smith’ hypothesis matching with ‘Smith’ reference)
score_surname(User_2)=best_score(surname hypothesis matches surname reference for User_2)=0.2 (score of ‘Smith’ hypothesis matching with ‘Schmidt’ reference)
The system then combines the scores for the user data fields to get a total score for each user. In the example scenario, the logical expression comprises applying a fuzzy ‘AND’ operator to the ‘first name’ user data field score and the ‘surname’ user data field score. This may be represented in pseudo-code as follows:
score(user)=score_first_name(user) AND score_surname(user)
The system then applies fuzzy logic algebra to evaluate the result for each user. In the present example, a score of 0.5 for User_1 and a score of 0.2 for User_2 is obtained. This may be represented in pseudo-code as follows:
score(User_1)=0.5 (first name score) AND 1.0 (surname score)=0.5
score(User_2)=0.5 (first name score) AND 0.2 (surname score)=0.2
The derived scores may then be used for identification and/or verification. In the case of verification, in this example the verification threshold has been set to 0.3.
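The example scenario above may be reproduced with a short sketch. The user data field scores are taken directly as the illustrative values from the scenario (0.5, 1.0 and 0.2) rather than computed by a particular fuzzy comparison operator:

```python
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)   # Zadeh AND

# User data field scores from the example scenario (illustrative values)
scores = {
    "User_1": {"first_name": 0.5, "surname": 1.0},  # 'Jo' vs 'John', 'Smith' vs 'Smith'
    "User_2": {"first_name": 0.5, "surname": 0.2},  # 'Jo' vs 'Joan', 'Smith' vs 'Schmidt'
}

VERIFICATION_THRESHOLD = 0.3

# score(user) = score_first_name(user) AND score_surname(user)
totals = {u: fuzzy_and(f["first_name"], f["surname"]) for u, f in scores.items()}
verified = {u: s > VERIFICATION_THRESHOLD for u, s in totals.items()}
identified = max(totals, key=totals.get)
```

Evaluating this gives totals of 0.5 for User_1 and 0.2 for User_2, so only User_1 clears the 0.3 threshold and User_1 is the identified user.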
In this case, User_1 would be verified as its score of 0.5 exceeds the threshold of 0.3 but User_2 would not be verified as its score of 0.2 does not meet the threshold of 0.3. This may be represented in pseudo-code as follows:
Verify(User_1)=bool(0.5>threshold)=True (verified)
Verify(User_2)=bool(0.2>threshold)=False (not verified)
In the case of identification, the user would be identified as the one of the users User_1 and User_2 having the maximum score. User_1 has a score of 0.5 while User_2 has a score of 0.2, so User_1 would be identified as the user. FIG.7shows a computing device700on which the embodiments described herein may be implemented. The computing device700includes a bus710, a processor720, a memory730, a persistent storage device740, an Input/Output (I/O) interface750, and a network interface760. The bus710interconnects the components of the computing device700. The bus may be any circuitry suitable for interconnecting the components of the computing device700. For example, where the computing device700is a desktop or laptop computer, the bus710may be an internal bus located on a computer motherboard of the computing device. As another example, where the computing device700is a smartphone or tablet, the bus710may be a global bus of a system on a chip (SoC). The processor720is a processing device configured to perform computer-executable instructions loaded from the memory730. Prior to and/or during the performance of computer-executable instructions, the processor may load computer-executable instructions over the bus from the memory730into one or more caches and/or one or more registers of the processor. The processor720may be a central processing unit with a suitable computer architecture, e.g. an x86-64 or ARM architecture. The processor720may include or alternatively be specialized hardware adapted for application-specific operations. The memory730is configured to store instructions and data for utilization by the processor720.
The memory730may be a non-transitory volatile memory device, such as a random access memory (RAM) device. In response to one or more operations by the processor, instructions and/or data may be loaded into the memory730from the persistent storage device740over the bus, in preparation for one or more operations by the processor utilising these instructions and/or data. The persistent storage device740is a non-transitory non-volatile storage device, such as a flash memory, a solid state disk (SSD), or a hard disk drive (HDD). A non-volatile storage device maintains data stored on the storage device after power has been lost. The persistent storage device740may have a significantly greater access latency and lower bandwidth than the memory730, e.g. it may take significantly longer to read and write data to/from the persistent storage device740than to/from the memory730. However, the persistent storage device740may have a significantly greater storage capacity than the memory730. The I/O interface750facilitates connections between the computing device and external peripherals. The I/O interface750may receive signals from a given external peripheral, e.g. a keyboard or mouse, convert them into a format intelligible by the processor720and relay them onto the bus for processing by the processor720. The I/O interface750may also receive signals from the processor720and/or data from the memory730, convert them into a format intelligible by a given external peripheral, e.g. a printer or display, and relay them to the given external peripheral. The network interface760facilitates connections between the computing device and one or more other computing devices over a network. For example, the network interface760may be an Ethernet network interface, a Wi-Fi network interface, or a cellular network interface. FIG.8illustrates examples of temporal date comparison operators that may be used in the above described methods.
For each of the temporal date comparison operators illustrated, a date is compared to a reference date. The date being compared is derived from a voice input; the reference date is a date for a user data field for a candidate user. In each of the graphs, the X-axis indicates the number of days from the reference date, with 0 indicating the reference date, n indicating n days after the reference date and −n indicating n days before the reference date. For example, 5 represents five days after the reference date, whereas −5 represents five days before the reference date. In each of the graphs, the Y-axis represents a comparison score having a value between zero and one. Graph810illustrates a first binary-valued temporal date comparison operator. This first binary-valued temporal date comparison operator is a step temporal date comparison operator, where dates which are the same as the reference date and dates within a given number of days of the reference date are given a score of one and other dates are given a score of zero. In the specific example illustrated in graph810, dates within five days of the reference date are given a score of 1, whereas dates more than five days from the reference date are given a score of 0. Graph820illustrates a second temporal fuzzy date comparison operator. This second temporal fuzzy date comparison operator is a triangular temporal fuzzy date comparison operator, where dates which are the same as the reference date are given a score of one with the score linearly decreasing to zero at a specified number of days from the reference date. Dates more than the specified number of days from the reference date are also given a score of zero. In the specific example illustrated in graph820, the score linearly decreases from one at the reference date to zero at five days from the reference date, e.g. at five days after the reference date and five days before the reference date.
Graph830illustrates a third temporal fuzzy date comparison operator. This third temporal fuzzy date comparison operator is a bell curve temporal fuzzy date comparison operator. The illustrated bell curve temporal fuzzy date comparison operator may be a rescaling of the probability density function of a normal distribution with a mean of zero, e.g. centred at the reference date, and a specified standard deviation, rescaled such that its maximum value is one. The specific example illustrated in graph830has a standard deviation of two. While the above described temporal date comparison operators are symmetric with respect to the reference date, e.g. having the same score for n days after the reference date and n days before the reference date, temporal fuzzy date comparison operators that are asymmetric with respect to the reference date are also envisaged. An example of an asymmetric temporal comparison operator is a step temporal date comparison operator where dates which are the same as the reference date and dates up to a first number of days after the reference date are given a score of one, dates up to a different, second number of days before the reference date are also given a score of one, and other dates are given a score of zero. Another example of an asymmetric temporal fuzzy comparison operator is a triangular temporal fuzzy comparison operator, where dates which are the same as the reference date are given a score of one with the score linearly decreasing to zero at a first number of days after the reference date, and the score linearly decreasing to zero at a second, different number of days before the reference date. Dates more than the first number of days after the reference date and more than the second, different number of days before the reference date are also given a score of zero. A threshold may be applied to any score calculated using the above described temporal fuzzy date comparison operators, whereby any scores below the threshold are set to zero.
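The three symmetric operators of graphs810,820and830, together with the optional threshold, may be sketched as follows. The widths and standard deviation follow the illustrated examples; the function names are illustrative:

```python
import math

def step_op(days_from_ref: int, width: int = 5) -> float:
    """Binary step operator (graph 810): 1 within `width` days of the reference date."""
    return 1.0 if abs(days_from_ref) <= width else 0.0

def triangular_op(days_from_ref: int, width: int = 5) -> float:
    """Triangular operator (graph 820): 1 at the reference date, linearly
    decreasing to 0 at `width` days before or after it."""
    return max(0.0, 1.0 - abs(days_from_ref) / width)

def bell_op(days_from_ref: int, sigma: float = 2.0) -> float:
    """Bell curve operator (graph 830): a Gaussian centred on the reference
    date, rescaled so its maximum value is 1."""
    return math.exp(-days_from_ref ** 2 / (2 * sigma ** 2))

def thresholded(score: float, threshold: float = 0.5) -> float:
    """Optional post-processing: scores below the threshold are set to zero."""
    return score if score >= threshold else 0.0
```

The asymmetric variants follow the same pattern, using separate widths for days before and days after the reference date.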
For example, a threshold may be applied to a score calculated using the operator illustrated in graph 830 such that any scores less than 0.5 are set to zero. In the above described examples, the dialogue manager receives information including the proposed value(s) for a user data field from the natural language understanding module. The dialogue manager generates a set of one or more candidate hypotheses. For example, the set may comprise a candidate hypothesis corresponding to each proposed value, in the order of the ASR list from which the proposed values were determined. As described previously, the proposed values may comprise extracted values and parsed values. The set of hypotheses may comprise the parsed values listed above the extracted values. In some examples, the dialogue manager additionally receives the ASR output for the user data field. The set of hypotheses may further comprise the ASR output, listed after the proposed values, in the order of the ASR output (the N-best list). The set of hypotheses is provided to the identification and verification module. In some examples, the identification and verification module obtains phonetic text or values corresponding to one or more of the hypotheses, and adds the obtained phonetic text or values to the set. A user data field score is then obtained as the maximum comparison score between a hypothesis in the set and a reference value. In some alternative examples, the value extraction module is omitted, and the set of hypotheses comprises the ASR output without any proposed values. The above described system implements a task of voice-based, non-biometric, knowledge-based identification and verification. Identification and/or verification can be used as a component of a task-oriented spoken dialogue system that offers personalised and privacy-focused services. The system can identify and verify a user against a known database of registered users, based on information collected through dialogue interaction.
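The selection of the user data field score as the maximum comparison score over the hypothesis set may be sketched as follows. The exact-match comparison below is only a stand-in for the fuzzy, phonetic or approximate string matching operators described above.

```python
def field_score(hypotheses, reference, compare):
    # The user data field score is the best comparison score between
    # any hypothesis in the set and the reference value.
    return max((compare(h, reference) for h in hypotheses), default=0.0)

def exact_match(a, b):
    # Stand-in comparison operator: 1.0 on a case-insensitive exact match,
    # 0.0 otherwise. A real system would use a fuzzy comparison here.
    return 1.0 if a.lower() == b.lower() else 0.0
```

For a hypothesis set drawn from an N-best list, `field_score(["jon smith", "john smith"], "John Smith", exact_match)` evaluates to 1.0, because the second hypothesis matches the reference value.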
The information might include one or more of: the user's postcode, name, date-of-birth, passphrase etc. The system collects the user information through a multi-turn dialogue interaction, and compares it against reference user information contained in a database. The system populates reference values for candidate users from API calls to query a database of candidate users. Grapheme to phoneme transformation may be performed where phonetic matching is used. Proposed values are obtained using ASR and NLU, and grapheme to phoneme transformation where phonetic matching is used. The system tracks n-best lists for the various modules, including ASR 110, NLU 140, and the phonetic processing module 150. In some examples, the system is configurable. The rules for identification and verification are defined in the logical expression syntax, and can be modified depending on the application. In particular, the logical expressions that relate the user data fields (e.g. postcode AND (first name OR last name)) can be modified for different applications. In order to configure the system for a particular application, a list of user data fields that will act as identification and verification criteria is defined. The parameters of the comparison mechanism can also be selected, for example whether to use similarity and containment matching, or whether to use phonetic matching. A library of parameterisable fuzzy comparison functions for each user data field can be provided, including phonetic matching and approximate string matching. Fuzzy logic algebra is used to combine the individual user data field scores into a user score for each candidate user. This score can then be used for the purposes of verification, by comparing to a pre-defined threshold that controls the security/recall trade-off, or identification, by ranking the candidate users. The identification and verification tasks are performed using a logical expression to be evaluated according to fuzzy logic.
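A common fuzzy logic algebra uses the minimum for AND and the maximum for OR. The following sketch evaluates the example expression postcode AND (first name OR last name) over per-field scores; the 0.7 verification threshold is an example value, not a prescribed one.

```python
def fuzzy_and(*scores):
    # Fuzzy conjunction: the weakest field score dominates.
    return min(scores)

def fuzzy_or(*scores):
    # Fuzzy disjunction: the strongest field score dominates.
    return max(scores)

def user_score(fields):
    # Example rule from the text: postcode AND (first name OR last name).
    return fuzzy_and(fields["postcode"],
                     fuzzy_or(fields["first_name"], fields["last_name"]))

def verify(score, threshold=0.7):
    # Verification: compare the combined score to a pre-defined threshold
    # controlling the security/recall trade-off (0.7 is an example).
    return score >= threshold

def identify(candidates):
    # Identification: rank candidate users by combined score, best first.
    return sorted(candidates, key=user_score, reverse=True)
```

Because the rule is an ordinary expression over `fuzzy_and`/`fuzzy_or`, swapping in a different logical expression for another application only changes `user_score`.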
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed the novel methods and apparatus described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of methods and apparatus described herein may be made.
11861522

DETAILED DESCRIPTION The following practical embodiment is of a simple implementation for delivery using delivery vans with drivers (parent agents) operating out of a depot in an industrial estate location and delivering to an inner-city location, where walkers with trolleys or trolley bikes and bikers (delivery agents) are used for high-density deliveries. One child cluster is allocated per walker/biker at a time. For simplicity, a single van and thus a single parent cluster only is considered, as well as a single distributor. Delivery takes place daily in this example, but the same methodology applies for time-critical delivery. The van has a shuttle run between the depot and the hub, or multiple hubs. Hence it may return with a further cluster for a walker. It may also deliver uglies and outliers and other items that have been filtered from the walker clusters. A digital manifest from the distributor arrives at the depot for the items to be delivered on a certain day. Each line of the digital manifest is a record for one item (or parcel). The line may be referred to as a line item. The digital manifest is loaded into the software (referred to generally as MoDe:Link) which is a cloud-based set of algorithms for routing, scheduling, customer communications, data analytics and optimisation. The digital manifest may be matched against the items that arrive for delivery on the given day, so that records for items which have not arrived can be deleted. This may take place by way of an initial scan of barcodes on the items received. Records for items which could not be delivered in the previous delivery can be added, perhaps also by way of scanning. The initial scan may use a portable device such as a smartphone and an application from the MoDe:Link software suite. The application (or app) may give a green banner for parcels which are on the manifest and a red banner and haptic or audio feedback for a parcel not on the manifest.
At this stage, uglies may be manually designated using the app, so that they are allocated to the van driver for delivery and/or automatically designated using the physical information and/or comments in the manifest. As a next step, the schedule is generated, as explained in more detail elsewhere. The method determines which cluster is assigned to which user. The assignment may be random, follow a strict system or allow manual assignment, or take into account the abilities/location etc. of the users. For example, some users may be qualified as delivery agents but not as parent agents. The schedule, or relevant individual parts thereof, is transferred to applications running on mobile devices of the van driver and walkers. The schedule may be generated using either a standard or a corridor method. In the standard method the delivery van drops off parcels at a single central location. Couriers/walkers meet the van and set off on their deliveries from this location. In the corridor method, the delivery van drops loaded bags at a safe place, such as a storage locker for the couriers to pick up. This removes the need to meet the van and gives extra flexibility in scheduling but is more complex to organise because it requires centrally located urban storage lockers or manned premises. In the next step, each parcel is rescanned, and the app indicates a bag (compartment) letter corresponding to an individual bag (A-Z) and a number (1 to be packed first and delivered last, 2 to be packed second etc. or vice versa). Currently the number and letter are manually written or printed on the parcel. However, in an automated method the re-scanning and manual labelling steps may be omitted. The parcels are loaded in order into the bags. It is worth mentioning here that more than one bag might be assigned to a single walker. 
For example, the van may return to re-load more bags for a second trip or it may drive on to a further hub location when it can pick up an empty bag and give a replacement bag to a walker. The uglies are numbered, but not allocated to a child cluster or given a letter—they are allocated to the van driver for delivery. The same is true of any outliers. These van deliveries can occupy the van and driver while the walkers are delivering, before a second bag is given to a courier. FIG. 1 is an overview flowchart of schedule generation according to a general embodiment. In this method, firstly items received at a distribution centre are processed to provide a record for each item, the record including item identification and a recipient location (S2). In S4, the records are clustered according to the location of the recipient into one or more parent clusters, each parent cluster being allocated to a different parent agent for a first transport stage. In S6, a hub position for each parent cluster is located according to recipient locations in the parent cluster, as indicated by the records. S8 involves further clustering the records in each parent cluster according to recipient location into one or more child clusters for a second transport stage. In step S10, for each child cluster, the records in the child cluster are allocated to (a compartment of) a selected delivery agent which operates from the hub position of the parent agent and delivers at recipient locations: a child cluster delivery route and delivery order of the items may at this stage be calculated for the selected delivery agent. S12 creates an individual schedule for each parent and delivery agent with events and timings for the events, wherein the events include travel events in which the agent is travelling along a route and stop events in which the agent is carrying out any of the following stop actions: delivering items, transferring items between agents and waiting.
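The clustering and scheduling flow of FIG. 1 may be sketched at a high level as follows. This is an illustrative skeleton only: the clustering, hub-location and routing callables are assumed to be supplied by the caller (e.g. DBSCAN for S4 and k-medoids or balanced clustering for S8), and the event tuples are simplified placeholders.

```python
def generate_schedule(records, parent_agents, delivery_agents,
                      cluster_parents, locate_hub, cluster_children, route):
    schedule = {}
    # S4: cluster records into one parent cluster per parent agent.
    parents = cluster_parents(records, len(parent_agents))
    for agent, cluster in zip(parent_agents, parents):
        hub = locate_hub(cluster)                                   # S6
        # S8: further cluster each parent cluster into child clusters.
        children = cluster_children(cluster, len(delivery_agents))
        # S10/S12: allocate each child cluster to a delivery agent and
        # build its event list (transfer at the hub, then deliveries).
        for courier, child in zip(delivery_agents, children):
            schedule[courier] = [("transfer", hub)] + route(hub, child)
        schedule[agent] = [("travel", hub), ("transfer", hub)]
    return schedule
```

Trivial stand-ins for the callables (a round-robin split, a fixed hub, and one delivery stop per record) are enough to exercise the flow end to end.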
For example, a simple parent agent (van) schedule may include a travel event to drive along its route to the hub, a stop event to transfer compartments to a delivery agent (walker) and a further travel event to drive to its own delivery along a continuation of the route. A corresponding simple delivery agent (walker) schedule may include a stop event to transfer a compartment from the parent agent, and then a succession of alternating travel events to delivery locations and stop events to deliver the items in the compartment. Finally, in S14, the parent agent and delivery agent are instructed to deliver the items by providing individual schedule information to each parent and delivery agent. Physical delivery follows. Steps S12 to S14 may be more or less automatic, depending on the degree of automation of the delivery. In a more manual method, the items may be marked manually (for example by a user based on an instruction from a smartphone or other mobile device), the transfer may take place manually and the schedule information is provided to human parent and delivery agents. In the most automated method, the marking may be automatic, either by automatic labelling or simply by adding data to the records; the transfer may be by a suitably programmed robot or conveyor system and the navigation may be of self-guided vehicles/drones etc. FIG. 2 is a flowchart of schedule generation according to a particular embodiment. Step S204 validates the input datasets. The input datasets are the digital item manifest, the list of available vehicles, the list of available users (the people involved in the physical delivery via the van and local delivery), the user-vehicle assignments, the warehouse address, and the schedule description (name of schedule). These datasets may be pre-stored in or imported into the system and thus this step may not require manual input. Additionally, a central user may input variables (options) to set the preferences for schedule generation.
For example, the user may specify at this stage that they require a van to be used to deliver some of the items. Additionally, for example, the user may specify that they wish to optimise the delivery for a particular set of assets that they have available on a particular day—for instance, a user may manually input the available vehicles and users and the MoDe:Link system will run through the scenarios to find the combination which would create the cheapest or the shortest delivery schedule. In the ‘generate schedule’ tab of a GUI there may be an option to have strict assignment which is auto selected (as a default) and in which each person is assigned to a particular role/vehicle. If this option is removed, then a depot manager can list the total number of assets that they have available on a particular day, say 1 van, 3 bikes, 10 walking bags, along with 8 people, and MoDe:Link may run through the scenarios to find the combination which would create the cheapest or the shortest delivery schedule. Step S206 forms parent clusters, for example using DBSCAN as previously described. This clustering procedure results in a number of clustered sets of items (parent clusters) and potentially outliers, which are determined to be so distant from the other items in the parent clusters that they are to be handled separately via the parent agent (i.e. not put into a child cluster, but delivered by the van). Step S208 filters the items in each parent cluster. See FIG. 3 for a description of this procedure according to one embodiment. Step S210 determines if there are any items designated as van items (those to be delivered by the van) in any parent cluster. If not, the process continues with step S216.
If there are van items in any parent cluster, a TSP algorithm finds the optimal delivery paths of the van in each of the relevant parent clusters (S212) before the user who is designated as driving the van is marked as unavailable (S214) (because they will be occupied delivering the van items). In this way, the van driver is not considered as a delivery agent capable of delivering items via bike or on foot. Step S216 determines if there are bike items in any parent cluster (that is, an item to be delivered via bike). If there are not, this suggests that only van drivers are required to perform deliveries and the method terminates (S218). If there are bike items in any parent cluster, child clusters within each parent cluster are created (S220). See FIG. 4 for a description of this procedure according to one embodiment. Step S222 allocates each child cluster to an individual user who is to be responsible for delivering all items within the child cluster. See FIG. 10 for a description of this procedure according to one embodiment. Step S224 “packs” the child clusters, i.e. allocates them to compartments. For example, one child cluster may require two compartments and thus two trips for a delivery agent, with a reload location between the two trips. The necessary reload locations (if any) for each bike user are determined. See FIG. 11 for a description of this procedure according to one embodiment. Step S226 uses a TSP algorithm to find the optimal trips to be taken by the delivery agents for each child cluster. One trip corresponds to one compartment, so a delivery agent may have one or more trips (within a single child cluster allocated to that delivery agent). The arrival times at each stop on each trip are estimated, potentially using a simple time estimation based on mapping software (S228). Step S226 may take place before S224, particularly if the packing of the cluster items depends on the order of delivery.
Step S230 modifies the van path to account for the need for the van to meet a delivery agent during a reload event. That is, in the event that all items corresponding to a single child cluster do not fit into one delivery agent's bundle of items (one compartment), the delivery agent must meet with the parent agent to reload/replace their bundle with one stored in the van. See FIGS. 12a and 12b for a description of this procedure according to one embodiment. In another way of handling reloads, it may be that there are predetermined safe locations in the local area where the parent agent may leave bundles of items; the delivery agents' routes may then incorporate stops at such locations at appropriate times. Step S232 creates events. This event creation procedure takes the vehicles that each user is operating at each stage of the logistics procedure into consideration, and generates events for all users. See FIG. 13 for a description of this procedure according to one embodiment. Step S234 creates the complete schedule. This schedule contains details of all events that all users are to perform during the delivery of items. Step S236 calculates the anticipated duration of each of these scheduled events. See FIGS. 15a, 15b and 15c for a description of this procedure according to one embodiment. Step S238 produces a summary of the schedule for display. FIG. 3 is a flowchart outlining the filtering of the items in each parent cluster. The filtering marks each item to be delivered as a van item, a bike item, or an undeliverable item. Incidentally, here and elsewhere, the terms “bike” and “bike item” denote a child agent and an item to be delivered by a delivery agent, respectively, and are not limited solely to bicycles or items to be contained in a bicycle compartment. The delivery agent may be a walker with a trolley or scooter, or a mixture of walkers and bikers may be used or any other local delivery agent. Steps S304-S312 consider each item in each parent cluster.
Step S304 starts with the next parent cluster. In the event that there are no parent clusters left to consider (S304, no), the filtering terminates (S306). Otherwise, the next parent cluster is considered (S304, yes). Step S308 obtains the parent cluster and step S310 checks for the presence of a next item. In the event that there are no items left to consider in the current parent cluster (S310, no), the process continues with the next parent cluster question in S304. If there is an item (S310, yes), step S312 obtains the record of the item and step S314 assesses if the item's dimensions are suitable for fitting into any compartment. In the event that no compartment is suitable in size to handle the item, step S316 marks the item as undeliverable (returned), indicating that the (digital record of the) item must be processed as an exception. If the item does fit into a compartment (S314, yes), step S318 questions if that compartment is of/for a bike; if affirmative, step S320 marks the item as a bike item. If the compartment is not of/for a bike (S318, no), step S322 questions if that compartment is associated with a van; if affirmative, step S324 marks the item as a van item. In the event that the compartment is neither a bike nor a van compartment, step S326 creates (and potentially presents to the user) an error. FIG. 4 is a flowchart outlining part of the creation of child (bike) clusters. The logic presented determines the number (k) of child clusters to be created. Step S404 starts with the next parent cluster. In the event that there are no parent clusters left to consider (S404, no), the logic terminates (S406). Otherwise, step S408 gets the parent cluster and step S410 gets the bike items within the parent cluster. Steps S404-S410 obtain details of all bike items in each parent cluster. When all parent clusters have been handled, the creation of child clusters terminates (S406).
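The per-item filtering of FIG. 3 may be sketched as follows. This is illustrative only: compartments are represented as simple records with a `mode` and outer `dims`, and the fit check compares dimensions axis by axis without considering item rotation.

```python
def filter_item(item, compartments):
    """Mark one item as 'bike', 'van' or 'undeliverable' (sketch of FIG. 3)."""
    for comp in compartments:
        # S314: does the item fit this compartment, axis by axis?
        if all(d <= c for d, c in zip(item["dims"], comp["dims"])):
            if comp["mode"] == "bike":
                return "bike"            # S320
            if comp["mode"] == "van":
                return "van"             # S324
            # S326: neither a bike nor a van compartment.
            raise ValueError("unknown compartment mode: %r" % comp["mode"])
    return "undeliverable"               # S316: process as an exception
```

Listing bike compartments before van compartments reproduces the flowchart's preference for marking an item as a bike item when it fits both.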
Once records of the bike items in a particular parent cluster are obtained, step S412 questions if the user has manually opted for a van cluster (this may be due to the personal preference of a van driver as indicated in the options/preferences submitted/validated in previous step S204). If a van cluster has been requested (for example, in the input parameters) (S412, yes), step S414 questions if the van has already been assigned a non-bike item (for example, one that is too large/heavy for the bike's compartments). If the answer to the query is negative (S414, no), step S418 compares the total number of non-van items to the number of available couriers (bike, walk and van agents): if there are more items than available couriers (S418, yes), step S420 sets the value of k as the total number of available couriers. If there are fewer items than available non-van couriers (S418, no), step S422 sets the value of k as the number of non-van items (this suggests that each item is to be assigned to its own child cluster). Alternatively, if the van has already been assigned a non-bike item (S414, yes), step S416 compares the total number of non-van items to the number of non-van couriers (that is, the total number of available couriers minus the courier(s) responsible for driving the van(s) and delivering the non-bike item(s)). If there are more items than available non-van couriers (S416, yes), step S424 sets the value of k as the number of bike and walk couriers. If there are fewer items than available non-van couriers (S416, no), step S422 sets the value of k as the number of non-van items. If the user has not requested a van cluster (S412, no), the logic continues from step S416 (comparing the total number of non-van items with the number of non-van couriers). In effect, steps S412 to S424 are one way of finding the number of child clusters. In normal circumstances, the number of child clusters, k, is one per bike (in general, one per delivery agent).
If there are fewer items to deliver than bikes, then some bikes will not have anything to deliver and this logic may come into play. Using the predetermined value of k, child clusters of items are created (S426). The child clusters may, for example, be created using a k-medoids algorithm, which divides a set of data points (delivery locations) into k subsets (child clusters) so that the subsets minimize the sum of distances between a data point and a centre of the data point's child cluster. In k-medoids the centre of the data point's child cluster corresponds to a data point in the cluster (as opposed to a k-means algorithm where the centre point may not correspond to an accessible location). Alternatively, a balanced child clustering algorithm may be used, as explained in the following. The whole process iterates through all parent clusters (S404) before the clustering procedure terminates (S406). FIG. 5 is a flowchart outlining a high-level overview of the creation of child clusters using balanced child clustering (as referenced in FIG. 4, step S426). Balanced child clustering evens out the distribution of items between delivery agents, whereas k-medoids may allocate unevenly (i.e. one delivery agent may have a far higher number of deliveries to process relative to another delivery agent). Initially, step S504 determines the number of required child clusters, which can be thought of as an optional refinement of the value of k previously determined to handle situations in which there are very few items for delivery. Further details of one embodiment of this determination may be seen in FIG. 6. Step S506 then assigns initial points (item delivery locations) for each child cluster; an example of one embodiment of this process may be seen in FIG. 7. Step S508 then assigns all other items to the child clusters. Further details of one embodiment of this process may be seen in FIG. 8.
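The determination of k in steps S412 to S424 of FIG. 4 can be condensed as follows. This is a simplified sketch of that branch logic: the courier counts are passed in directly, and the van-driver availability handling is reduced to a single conditional.

```python
def number_of_child_clusters(n_non_van_items, n_couriers, n_non_van_couriers,
                             van_cluster_requested, van_has_non_bike_item):
    # Condensed S412-S424: normally one child cluster per available courier,
    # capped by the number of items when items are scarcer than couriers.
    if van_cluster_requested and not van_has_non_bike_item:
        # S418/S420: the van may still take a child cluster of its own.
        available = n_couriers
    else:
        # S416/S424: the van driver is occupied, count non-van couriers only.
        available = n_non_van_couriers
    # S420/S424 vs S422: cap k at the number of non-van items.
    return available if n_non_van_items > available else n_non_van_items
```

In the normal case (plenty of items) this returns one cluster per courier; with very few items it returns one cluster per item, matching S422.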
The resultant child clusters and details of any items that are deemed to be undeliverable via this logistics method are then returned (S510) and the child-clustering process terminates (S512). FIG. 6 is a flowchart detailing the refinement of the number of required child clusters. The logic presented here is directed at handling scenarios in which there are fewer items to deliver than available delivery agents (which is unexpected in most delivery situations). In this example, both bike and walker delivery agents are used. The input data is summarised in S604 and includes: B, the number of bikers; W, the number of walkers; V, the number of van-based agents; T, the total number of delivery agents; and I, the number of items in the parent cluster (minus 1 in this example to account for the notion that the pre-determined centre point is an item delivery). Step S606 questions if the number of items in the parent cluster is greater than the total number of available delivery agents. If the answer is no, step S608 sets the total number of child clusters (k) equal to the total number of delivery agents. If the answer is yes, step S610 questions if the number of van-based agents is greater than 0. This would suggest that van clusters have been turned on in the input options. If the answer here is yes, step S612 assigns all items to the van-based agents. If the answer is no, step S614 begins iterating through each (non-van-based) agent, switching between walkers and bikers (S616) and removing one cluster from the mode selected for the iteration (S618). In effect, this successive lowering of the number of clusters, until there are as many delivery agents as items, assigns an item to each non-van user by alternating between bikers and walkers. When the number of agents is lowered to a value equal to the number of items (S614, no), the refinement procedure terminates (S620). FIG. 7 is a flowchart detailing the assignment of initial items in each child cluster.
The assignment process begins by considering each child cluster (S704). At this point there is a number k of items in the cluster, but specific items have not yet been assigned. Step S706 creates an array containing the distances from all item delivery locations in the child clusters to the centre point of the parent/DBSCAN cluster. This centre point is the medoid of the set of items in the parent cluster, that is, it is the location where the sum of the distances to all of the other items is lowest. Step S708 finds the index of the array containing the location that is the closest to the centre of the parent cluster. Step S710 pushes this delivery location to a first cluster's list of stops, which are later used to create events. Step S712 then marks this “pushed” delivery location as “assigned” in an array defining the state of the item, indicating that the item has been considered, accounted for, and will be delivered. Step S714 then pushes the distance from the parent cluster centre to this stop to an array containing the distances that the delivery agent responsible for this particular child cluster will travel. Steps S706 to S714 are then repeated for further child clusters. When initial points for all k child clusters have been assigned (S704, done), the process terminates (S716). In effect, steps S706 to S712 sort the list of items/locations by their distance from the parent cluster centre, putting items closest to the centre first. Then, for each cluster in turn, the next closest item in the list of sorted items is assigned to the child cluster. Before marking the item as “assigned”, the item state (as stored in the item state array) may be null, indicating that the item is not yet assigned and—so far—it is thought that the item will be deliverable. Alternatively, the item may be of the state “undeliverable”, indicating that no one is able to deliver this item.
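The seeding of FIG. 7 amounts to the following sketch, which sorts delivery locations by distance from the parent cluster medoid and seeds each of the k child clusters with the next closest one. The distance function is caller-supplied, and the cluster record (a stops list plus a travelled-distances list) is a simplifying assumption.

```python
def seed_clusters(locations, k, parent_medoid, distance):
    # S706-S708: sort delivery locations by distance from the parent medoid.
    ordered = sorted(locations, key=lambda loc: distance(loc, parent_medoid))
    # S710-S714: seed each of the k child clusters with the next closest
    # location, recording the distance the courier travels to reach it.
    return [{"stops": [loc], "distances": [distance(loc, parent_medoid)]}
            for loc in ordered[:k]]
```

With one-dimensional positions and an absolute-difference distance, the two seeds below are the two locations nearest the medoid at 0.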
FIG. 8 is a flowchart outlining the assignment of all remaining items (following the assignment of the initial items) to the child clusters. Optional step S804 finds qualifying clusters, that is, clusters that may successfully include the item under consideration without causing the distance travelled by the delivery agent to be too great. Further details of one embodiment of this qualifying cluster assessment may be seen in FIG. 9. Step S806 considers the number of qualifying clusters for each item. In the event that there is not more than a single eligible (qualifying) cluster for a particular item (S806, no), step S808 asks if there is just one qualifying cluster for that item. If the answer is no (S808, no), then there is no child cluster to which an item may be assigned and step S810 marks the item as “undeliverable” in each child cluster. If the answer is yes (S808, yes), then step S812 assigns the item to the only qualifying cluster. In the event that an item may be assigned to a plurality of child clusters (that is, there are multiple qualifying clusters) (S806, yes), step S813 calculates the standard deviation of all of the qualifying cluster weights (each weight is based, for example, on the total time and/or distance of the delivery agent's path in each qualifying child cluster). Step S814 then starts the logic to consider a single qualifying child cluster if there are still child clusters to be processed (S814, yes). Step S816 creates an array containing the distances from the last point in the delivery path to every other point. Step S818 finds the index of the array containing the minimum distance. Step S820 calculates the weight of the cluster (total distance of the delivery path) with the addition of the delivery location of the item and step S822 finds the standard deviation of the cluster weights with this new addition.
Step S824 calculates the difference between this new standard deviation of the cluster weights (with the addition of the item) and the old standard deviation (before the addition of the item). When steps S818 to S824 have been performed for all qualifying child clusters (S814, done), step S826 determines the qualifying child cluster that—with the addition of the item—results in the smallest change in cluster weights standard deviation. Step S828 then assigns the item to this child cluster and step S830 marks the item as “assigned” in the previously described item state array. In effect, steps S816 to S830 calculate the standard deviation of the qualifying child cluster weights with and without the new item and add it to the cluster that causes the minimum change in standard deviation. This is directed at keeping the clusters around the same duration. This logic is repeated for each item. FIG. 9 is a flowchart detailing the determination of qualifying child clusters (child clusters to which an item may be assigned). Step S904 considers all child clusters one-by-one. Step S906 determines the cumulative path distance, D, including the distance from the closest existing point (delivery location) within the child cluster to the item to be assigned. Additionally, step S908 loads the maximum distance for the current child cluster mode, Md. For example, a bike cluster will have a larger maximum distance than a walking cluster. This value may be stored in the previously described list of vehicles. Step S910 questions if the distance D is less than (or equal to) the maximum permitted distance Md. If the answer is yes (S910, yes), step S912 marks the child cluster as a qualifying cluster within a qualifying cluster array. If the answer is no (S910, no), the cluster is not included in this array. Following S910-S912, the next child cluster is considered.
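The standard-deviation balancing of steps S813 to S830 may be sketched as follows. This is illustrative: each cluster weight is taken to be a single number (e.g. total path distance), and the extra weight an item adds to each qualifying cluster is assumed to have been precomputed (the S816-S820 closest-point step).

```python
import statistics

def choose_cluster(extra_weight_by_cluster, cluster_weights):
    # S813: standard deviation of the weights before adding the item.
    base = statistics.pstdev(cluster_weights)
    best_idx, best_delta = None, None
    # S814-S824: for each qualifying cluster, recompute the standard
    # deviation as if the item were added there, and record the change.
    for idx, extra in extra_weight_by_cluster.items():
        trial = list(cluster_weights)
        trial[idx] += extra
        delta = abs(statistics.pstdev(trial) - base)
        # S826: keep the cluster giving the smallest change.
        if best_delta is None or delta < best_delta:
            best_idx, best_delta = idx, delta
    return best_idx  # S828: assign the item to this cluster
```

Keeping the change in standard deviation minimal tends to keep the couriers' routes around the same length, which is the stated aim of the balancing step.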
When all child clusters have been assessed regarding their potential as a qualifying cluster (S904, done), step S914 asks if the number of qualifying clusters (or, equivalently, the length of the qualifying clusters array) is 0. If the answer is no (S914, no), the process terminates and the logic presented in FIG. 8 resumes (S922, returning to S804 with one or more qualifying child clusters). If the answer is yes (S914, yes) step S916 questions if there is a van cluster. If the answer is yes (S916, yes), step S918 returns the van cluster as a qualifying cluster. This, in effect, assures that outlying item delivery locations (that are too far from all other delivery locations to be included in a standard child cluster) are still accessed using the van. If, however, the answer is no (S916, no), there is no means of delivering this item using this method and step S920 returns an empty qualifying clusters array. Following both S918 and S920, the process terminates (S922). For each cluster, the logic is directed at picking out the item that is closest to the item that was chosen previously in the cluster (as the logic is picking out the next stop). Each cluster could have a different potential next item and/or some clusters might be looking at the same potential next item. If that closest item is too far, the cluster is not included in the qualifying clusters array. In effect, if there are no bike/walk clusters close enough for the delivery of an item, steps S904-S922 will return the van cluster if the van cluster is enabled, otherwise they will return nothing (an empty array). FIG. 10 is a flowchart detailing an optional process whereby users are assigned to bike clusters. Step S1004 loads the next cluster. Step S1006 gets the child clusters associated with the loaded parent cluster. Step S1008 then sorts all available bike users by the bike compartment capacity.
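The qualifying-cluster test of FIG. 9 may be sketched as follows. The per-mode maximum distances, the cluster record shape and the caller-supplied distance function are illustrative assumptions; in the described system the maximum distance comes from the list of vehicles.

```python
MAX_DISTANCE = {"walk": 3.0, "bike": 10.0}  # example per-mode limits (Md)

def qualifying_clusters(clusters, item_location, distance, van_cluster=None):
    qualifying = []
    for c in clusters:
        # S906: cumulative path distance D, including the hop from the
        # closest existing stop in the cluster to the candidate item.
        extra = min(distance(stop, item_location) for stop in c["stops"])
        # S908-S912: qualify if D stays within the limit for this mode.
        if c["path_distance"] + extra <= MAX_DISTANCE[c["mode"]]:
            qualifying.append(c)
    # S914-S918: with no qualifying bike/walk cluster, fall back to the
    # van cluster when one is enabled; otherwise return an empty array.
    if not qualifying and van_cluster is not None:
        return [van_cluster]
    return qualifying
```

The fallback mirrors the flowchart: an outlying delivery location that no walk or bike cluster can absorb is still reachable via the van cluster, if enabled.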
The available bike users may be determined from the list of users as previously described and may, for example, be supplied at the input stage of proceedings. Further, the bike compartment capacity may be supplied in the list of vehicles as previously described. The link between the bike and the bike user may be determined from, for example, the user-vehicle assignment as previously described. Step S1010 sorts the items within the child cluster by physical size. Step S1012 then asks if optimisation is required. If it is deemed that optimisation is necessary (S1012, yes), which may be, for example, due to the limited capacity of particular bike compartments or due to the presence of an irregularly large item, step S1014 obtains the user who is operating the vehicle with the largest compartment capacity. Step S1016 then loads this first user (U). Step S1018 sorts the child clusters in order of the largest item that each child cluster contains. Step S1020 then gets the first cluster (C), which is the cluster capable of containing the largest item. Step S1022 then allocates the user U to the cluster C. (The loop stepping through all the child clusters is omitted for simplicity.) If, however, step S1012 determines that optimisation is not necessary (in the event that, for example, all available bikes have the same capacity), step S1024 simply assigns a user to the cluster. This assignment may simply be, for example, a random allocation. Following S1022 or S1024, the next parent cluster is considered; when all clusters have been assigned to a user (S1004, done) the assigning procedure terminates (S1026). FIG. 11 is a flowchart illustrating the packing of items into compartments and the determination of necessary reload locations between trips. Step S1104 initialises a counter variable (i) that will loop through all parent clusters. Step S1106 questions if the current counter value (i) is less than the total number of parent clusters.
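The optimisation branch of FIG. 10 (steps S1014 to S1022, with the omitted loop restored) amounts to a greedy pairing of sorted lists, which can be sketched as follows. The field names (`capacity`, `item_sizes`, etc.) are assumptions for illustration.

```python
def assign_users_to_clusters(users, child_clusters):
    """Sketch of S1014-S1022: sort users by compartment capacity and child
    clusters by the largest item each contains, then pair them off so the
    biggest compartment serves the cluster holding the biggest item."""
    users_by_capacity = sorted(users, key=lambda u: u["capacity"], reverse=True)
    clusters_by_item = sorted(child_clusters,
                              key=lambda c: max(c["item_sizes"]), reverse=True)
    assignment = {}
    # S1022, repeated over all child clusters (the loop the figure omits)
    for user, cluster in zip(users_by_capacity, clusters_by_item):
        assignment[cluster["id"]] = user["name"]
    return assignment
```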
If the answer is yes (indicating that there are still parent clusters to consider) (S1106, yes), step S1108 obtains all child clusters from the current parent cluster and step S1110 initialises a new counter variable (k) that will loop through all child clusters of the current parent cluster. Step S1112 questions if the current child cluster counter value (k) is less than the total number of child clusters within the parent cluster. If the answer is yes (indicating that there are still child clusters to consider) (S1112, yes), step S1114 loads the records of all items in the current child cluster. Additionally, optional step S1116 loads the details of the compartments of the operational vehicle of the child cluster. Step S1118 then sends details of the items and the compartment sizes to a bin-packing algorithm to determine a method of packing the items in the available container (for example by order of delivery or by size, for example if the compartment sizes are provided/different). Step S1120 assigns the packed compartments (following the bin-packing algorithm) to the parameter “Trips”. Step S1122 questions if the length of “Trips” is greater than 1. If the answer is yes, this indicates that a child cluster must comprise multiple smaller subdivisions (trips) that will require the delivery agent to reload at an intermediate location in the child cluster route. Step S1124 initialises a new counter variable (j) that will loop through all the trips of the current child cluster. Step S1126 questions if the current trip counter value (j) is less than the total number of trips within the child cluster. If the answer is yes (indicating that there are still trips to consider) (S1126, yes), step S1128 marks the first stop of the trip currently under consideration as a reload location, meaning that the van has to meet the biker/walker at this location in order to reload the compartments with items to be delivered.
The trip counter (j) is then increased by a value of 1 (S1130) and the determination of reloads proceeds from step S1126. When all necessary reload locations are marked (S1126, no) or in the event that no reload locations for a child cluster are required (as all items may be delivered in one trip) (S1122, no), step S1132 increases the child cluster counter (k) by a value of 1 and the bin-packing procedure from step S1112 proceeds for the next child cluster. When the current value of k matches the number of child clusters, indicating that all child clusters have been handled (S1112, no), step S1134 increases the parent cluster counter (i) by a value of 1 and the acquisition of child clusters (followed by the bin-packing of these clusters) proceeds from step S1106. When the current value of i matches the number of parent clusters, indicating that all child clusters within all parent clusters have been handled (S1106, no), step S1136 terminates the child cluster bin-packing and trip reload marking procedure. In effect, the logic presented in FIG. 11 loops through all of the child clusters of all of the parent clusters and passes each child cluster's items through a bin-packing algorithm (potentially along with details of the compartment's size). The bin packing returns details of trips that need to be performed in order to deliver all items within the child cluster. The first item of each trip is then marked as a reload location, indicating that the van must meet the delivery agent at this position in order to reload the delivery agent with more items for delivery. In this way, or by using another scheduling method, the delivery agent's stops, including reload locations, are put in an order along a path. This may take place before inserting these reload locations into the van schedule, for example as explained below.
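The per-child-cluster part of FIG. 11 can be sketched with a simple first-fit packing in delivery order. The description does not specify which bin-packing algorithm is used, so the packing here is a stand-in; the dictionary layout and scalar `size` are assumptions, and the reload marking follows the flowchart's loop over every trip (S1124 to S1128).

```python
def pack_into_trips(items, compartment_capacity):
    """Sketch of S1114-S1128 for one child cluster: split items (already in
    delivery order) into trips that fit the compartment, then, when more
    than one trip results (S1122), mark the first stop of each trip as a
    reload location where the van meets the delivery agent (S1128)."""
    trips, current, load = [], [], 0.0
    for item in items:
        if current and load + item["size"] > compartment_capacity:
            trips.append(current)          # compartment full: close this trip
            current, load = [], 0.0
        current.append(item)
        load += item["size"]
    if current:
        trips.append(current)
    reloads = [trip[0]["stop"] for trip in trips] if len(trips) > 1 else []
    return trips, reloads
```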
FIG. 12 (divided into FIGS. 12a and 12b) is a flowchart describing one embodiment of adjusting the van (parent agent) scheduling for reloading of bikes (or other delivery agents). The bike scheduling/reloading works by comparing the estimated arrival time of the van at its next stop with the estimated potential arrival time of the van at each of the bikes' reload locations and the arrival time of the bikes at the reload location. On the basis of this comparison, the process may insert this location into the van's schedule before the van performs its own stop. Step S1204 questions if there are any more parent clusters left to handle. If yes, step S1206 loads the parent cluster. Step S1208 then asks if the parent cluster has a van path associated with it. If the answer is no (S1208, no), step S1210 finds the child cluster to which the van is assigned and step S1212 loads the van path associated with this child cluster. Alternatively, in the event that the van does already have a path to follow (S1208, yes), step S1212 loads this van path. Step S1214 loads details of all child clusters within the parent cluster. Step S1216 questions if there are any child clusters left to consider. If the answer is no (indicating that all reload locations have been handled within the current child cluster) (S1216, no), the next parent cluster is considered (S1204, yes). Alternatively, if the answer is yes (S1216, yes), then step S1218 loads the records of the current child cluster. Step S1220 determines all bike stops within the child cluster that are reload locations (R). This process is repeated for all child clusters. FIG. 12a effectively considers the van path (if there is one) and all of the bike reload locations. In FIG. 12b, step S1222 questions if there are any further reload locations to consider. If yes, step S1224 creates potential next stops for the van, which are where the van is already planning to go, plus the next biker/walker reload locations.
Step S1226 finds the van's estimated arrival time at each of the potential next stops and step S1228 finds the bike's (or any other delivery agent's) estimated arrival time at each reload location. Step S1230 finds the difference between the two estimated arrival times for each potential next stop and S1232 determines the stop corresponding to the smallest arrival time difference. Step S1234 then questions if this is a bike/walker stop. In this example, a small or negative time difference is desirable in order to avoid the van wasting any time by remaining stationary for extended periods of time (awaiting the arrival of the empty bikes). In the event that the stop with the smallest time difference is a bike/walker reload stop (S1234, yes), step S1238 finds the additional time required for the van in its own schedule to include the reload location as the next stop. Step S1240 assigns this reload stop as the van's next stop (in the van's route). Step S1242 removes this reload stop from the list of potential next stops in order to ensure that the reload location is not chosen again. Step S1244 updates the arrival time of the bike for the affected bike path (in the event that the chosen reload location requires the previous bike path to deviate) and/or updates the arrival time of the van for the affected van path (in the event that the chosen reload location requires the previous van path to deviate). If step S1234 determines that the stop with the smallest time difference is not a bike/walker stop (S1234, no), step S1236 increases the van stop index by a value of 1 (i.e. bringing the next van stop into consideration). The process continues from step S1224 as outlined above, only now considering a new potential van next stop. When a suitable reload location is identified (suitable in the sense that, for example, it is closer in time to the bike reload time than the next stop on the van's list), the next bike reload stop is processed (S1222).
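The core comparison of steps S1226 to S1232 can be sketched as below. Each candidate pairs the van's estimated arrival time at a potential next stop with the delivery agent's estimated arrival time there; the tuple layout is an assumption for illustration, and how the van's own planned stop is represented among the candidates is left to the implementation.

```python
def choose_van_next_stop(potential_stops):
    """Sketch of S1226-S1232: among the potential next stops, pick the one
    with the smallest difference between the van's estimated arrival time
    and the agent's estimated arrival time, so the van spends as little
    time as possible stationary waiting for empty bikes.
    Each entry is (stop_name, van_eta, agent_eta) in illustrative seconds."""
    best_stop, best_diff = None, float("inf")
    for stop, van_eta, agent_eta in potential_stops:
        diff = van_eta - agent_eta  # S1230: small or negative is desirable
        if diff < best_diff:
            best_stop, best_diff = stop, diff
    return best_stop, best_diff
```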
When all reload stops have been assigned a reload location, the reload scheduling process (for this particular parent cluster) terminates (S1246). When all child clusters from all parent clusters have been handled (S1204, no), the overall reload scheduling process terminates (S1248). FIG. 13 is a flowchart describing a process in which events are created. Events here correspond to activities performed by the users of the logistics method and include, for example, “wait”, “go-to”, “transfer”, and “deliver”. In effect, the logic of FIG. 13 is directed at first putting all actions where there is no movement into a series of “stops” and “stop actions”. These stops and stop actions are then converted into events. Step S1304 sets the start and end vehicles for each user at each stop. Further details of this step according to one embodiment may be seen in FIG. 14. Step S1306 initially loads details of a single user including their path. Step S1308 then obtains details of the first stop assigned to the user on their delivery route or their van route. Step S1310 creates a “go-to” event, which is an event that requires the user to move from one location to the stop (for example, a biker may be required to move from one delivery location along their assigned route to another). Step S1312 then creates an event for the stop action (for example, it may be necessary for a biker to be dropped off by a van or it may be necessary for a van driver to hand over (transfer) a full compartment of items to a walker). Step S1314 creates a final wait event on the stop if necessary. In this way, the actual amount of time spent in the wait (which can be initialised at zero) may be calculated later. Step S1316 questions if there are more stops associated with the user to be considered. If there are (S1316, true), step S1308 loads the next stop and steps S1310 to S1314 are repeated.
If there are no more stops for this particular user (S1316, false), step S1318 questions if there are any more users to consider. If there are, step S1306 loads the next user and all of this user's stops are considered as previously described (steps S1308 to S1316). When events have been created for all users (S1318, false), the process terminates (S1320). FIG. 14 is a flowchart describing a process whereby the start and end vehicles are set for each user at each stop (in S1304). This process is used to determine when people need to wait. For example, if a delivery agent is on a bike at the beginning of a stop and will end up in a van, the delivery agent will need to wait for a van to arrive and later wait for the van to complete all of its events before going to the next stop. Step S1404 considers the next user. Step S1406 then sets the current vehicle for this user (represented by the variable “current_vehicle” in the flowchart) as their starting vehicle. Step S1408 then loads details of that user's next stop and step S1410 loads details of the action associated with that stop. Step S1412 questions if the type of stop action indicates that the user is to be picked up. If the answer is yes (S1412, true), step S1414 then sets “current_vehicle” to match the vehicle in operation by the assistant. For example, the current vehicle of a bike operator, after being picked up by a van, will be changed to a van. Alternatively, if the answer is no (S1412, false), step S1416 questions if the type of stop action indicates that the user is being dropped off. If the answer is yes (S1416, true), step S1418 then sets the current vehicle in use by the user to the assistant's vehicle. For example, if a bike operator is being transported to the start of their child cluster via a van, the current vehicle of the bike operator, after being dropped off by the van, will be changed to a bike. Following steps S1414 and S1418, step S1420 questions if there are more stop actions for the user to consider.
If the answer is yes (S1420, true), the next stop action is loaded by step S1410 and steps S1412 to S1418 are repeated. If the answer is no (S1420, false), this indicates that the vehicle currently held by the “current_vehicle” variable is the final vehicle that the user will be operating at the end (step S1422). Step S1424 questions if there are more stops for the user to consider. If the answer is yes, the next stop is loaded by step S1408 and the above-described setting of the final vehicle for the user for the stop (steps S1410 to S1422) is repeated. If the answer is no, the next user is loaded by step S1426 and the above-described setting of the starting and final vehicles for the user for all stops (steps S1406 to S1424) is repeated. When all users have been considered (S1426, false), step S1428 terminates the setting of start and end vehicles. FIGS. 15a and 15b are flowcharts detailing a method of allocating durations to all events in the schedule (for all users). Step S1504 questions if there are any events yet to consider. If the answer is yes, step S1506 questions if the event under consideration is a transfer event. If the answer is yes, step S1508 asks if the transfer includes a vehicle (for example, a van driver may be transferring a foldable bike to a delivery agent). If the answer is no (S1508, no), step S1510 does not change the value of the event duration (taken here to be initialised at a value of 0). If the answer is yes (S1508, yes), step S1512 increases the value of the event duration variable by a value of 30 seconds for each vehicle (for example, the transfer of 2 bikes will be allocated 60 seconds). Following both S1510 and S1512, step S1514 asks if the transfer includes any items. If the answer is no (for example, the transfer may be solely of a bike), step S1516 does not change the value of the event duration.
If the answer is yes, step S1518 increases the value of the event duration variable by a value of 10 seconds for each item (for example, step S1518 would allocate 100 seconds for a user to perform the initial packing of 10 items into a compartment, or if the item is a compartment, then 10 seconds would be allowed for the transfer of the compartment). The anticipated transfer duration is then the resultant value following steps S1508 to S1518. In cases where the event is not a transfer (S1506, no), step S1520 asks if the event is a delivery. If the answer is yes, step S1522 sets the anticipated delivery duration to 60 seconds. Alternatively, it may be that this value is calculated or adjusted based on the complexity of the delivery process (for example, additional time may be allocated for a user to climb numerous flights of stairs). In cases where the event is also not a delivery (S1520, no), step S1524 questions if the event is a “pick up” (when an item is picked up by a biker/walker or a compartment by a van). If the answer is yes, step S1526 sets the anticipated pick-up duration to 60 seconds. This value may also be varied. In cases where the event is also not a “pick up” (S1524, no), step S1528 asks if the event is a “go-to” event. If the answer is yes, step S1530 determines the distance between the two locations (that is, from where the user is departing and to where the user is going). In this embodiment, this information is stored in a lookup table. The distances—in combination with the anticipated or historically observed speed of the user—may be used to determine the anticipated “go-to” event duration. In cases where the event is also not a “go-to” event (S1528, no), step S1532 in FIG. 15b questions if the event is a wait event. If the answer is yes (S1532, yes), step S1534 obtains all events related to the wait event (that is, related events that the wait event is waiting for).
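The duration rules of FIG. 15a can be collected into one function, sketched below. The constants (30 s per vehicle, 10 s per item, 60 s for deliveries and pick-ups) come from the description; the event dictionary layout is an illustrative assumption.

```python
def event_duration(event):
    """Sketch of steps S1506-S1530: allocate an anticipated duration to an
    event based on its type."""
    if event["type"] == "transfer":
        duration = 0
        duration += 30 * event.get("vehicles", 0)  # S1512: 30 s per vehicle
        duration += 10 * event.get("items", 0)     # S1518: 10 s per item
        return duration
    if event["type"] in ("delivery", "pick_up"):   # S1522 / S1526: flat 60 s
        return 60
    if event["type"] == "go_to":                   # S1530: distance over speed
        return event["distance"] / event["speed"]
    return 0  # wait events start at zero and are resolved later (FIG. 15b)
```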
In one embodiment, related events are determined in the following way. All related events are always attached to “wait” events—i.e. they are what is being waited for. Related events are defined at the following steps of the event creation phase:
Pickup events: the related event is simply the go-to (travel) event of the user picking them up.
Drop-off events (of a biker/walker): the related events are simply the transfers of items needed before they get off.
Drop-off items events: the related events are for each user who is waiting to have items dropped off to them.
Get-reloaded locations: the related events are those that the user has to wait on to get reloaded—the reloader's travel event to the reload location.
Reload locations: reload locations are determined in an earlier step. As part of van (parent agent) delivery event generation, each of these reload locations is checked. All the bikes that have to wait at a reload location are added to the van's reload event's related events.
Referring again to FIG. 15b, step S1536 initialises a counter variable (“stuckCounter”) at 0 and sets the length of the iterator to equal schedules.length. As a general overview, the stuck counter exceeds iterators.length whenever the wait times of all events cannot be determined after two passes (hence the iterators.length*2 comparison). The duration calculation works by iterating through the event list of each schedule (one schedule per agent, one at a time). For every event that is related to a different schedule's wait event, the software might not yet have calculated that related event's duration, so it cannot determine the duration of this wait event until this is done (since it depends on the time it takes to do all other events that it is “waiting” for). So the stuck counter is incremented and the logic moves to the next schedule to try and resolve all other schedules up to this same point.
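The round-robin resolution with a stuck counter can be sketched as below. This is a simplified model, not the claimed implementation: each agent's schedule is a list of event dictionaries, wait events name their related events as (schedule index, event index) pairs, and these structures are assumptions made for illustration.

```python
def resolve_wait_events(schedules):
    """Sketch of S1536-S1548: walk each schedule's events in turn. A wait
    event can only be resolved once all of its related events (possibly in
    other schedules) have known end times; the stuck counter detects the
    error case where two full passes make no progress."""
    stuck, limit = 0, 2 * len(schedules)  # one iterator per schedule
    pending = [0] * len(schedules)        # next unresolved event per schedule
    while any(p < len(s) for p, s in zip(pending, schedules)):
        progressed = False
        for i, schedule in enumerate(schedules):
            while pending[i] < len(schedule):
                event = schedule[pending[i]]
                if event["type"] == "wait":
                    ends = [schedules[s][j].get("end") for s, j in event["related"]]
                    if any(e is None for e in ends):
                        break  # not resolvable yet: move to the next schedule
                    event["end"] = max(ends)  # S1540: latest related event
                else:
                    event["end"] = event["start"] + event["duration"]
                pending[i] += 1
                progressed = True
        if not progressed:
            stuck += 1
            if stuck > limit:  # S1538: error after two fruitless full passes
                raise RuntimeError("unresolvable wait dependency")
    return schedules
```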
Once the duration service makes its way back around to the schedule that got “stuck”, there should be sufficient information to determine the wait time. If not, something has gone wrong, so an error is produced. The iterators length is determined by the number of schedules being passed on by the duration service. So for 3 bikes and 1 van, four iterators are generated. An iterator is the structure that iterates through the event list of a schedule. Returning to the flowchart, step S1538 performs the error detection procedure to check if the counter value is greater than double the length of the iterator/number of schedules. If so, there is an error (S1538, yes), and step S1550 terminates the event duration calculation process. If there is currently no error (S1538, no), step S1540 considers the current wait event in the schedule that it is processing and each event from the list of related events obtained in S1534 and determines which of the events ends the latest. Step S1542 questions if the wait end time (the time at which the wait event is due to end or, equivalently, the time at which the related event is due to start) is greater than −1 (that is, either zero or any positive value); a negative value is used as a flag to indicate that the timing in the related schedule has not yet been calculated. In this case, the stuckCounter is incremented by 1 in S1544 and the process passes on to the next agent's schedule. If the wait event is not associated with an error (S1542, yes), step S1548 proceeds to set the event's start and end times and step S1504 loads the next event for processing (commencing from S1506). An alternative, high-level overview of the event duration calculation is provided in steps S1590 to S1598 of FIG. 15c. Step S1590 calculates “easy” durations, which may be seen as the durations of events that have a pre-set, standard value.
Step S1592 then sets the “go-to” events, which involves the calculation of the anticipated duration by considering the distances between locations. Step S1594 resolves the “wait” events and associated start and end times. In effect, the operations of steps S1502 to S1548 may be seen as one embodiment of steps S1590 to S1594. Additionally, optional step S1596 may calculate the total distances that each vehicle and/or user is due to travel within the entire schedule. Step S1598 then may calculate an estimate of the monetary cost of the delivery procedure; this calculation may make use of a cost per unit time value as described elsewhere. An exemplary digital manifest is shown in FIG. 16. The manifest arrives in comma-separated value (.csv) spreadsheet format from the distributor and includes item identification in the form of a barcode number (“identifier”), a recipient location (“customer_address”), a customer reference (“customer_name”), a phone number (“phone_num”), dimensions (“length”, “width”, “height”), and a weight. Comments may also be added. A status may be assigned for each item, such as “arrived” (at the depot), “packed” (into a compartment), “out for delivery”, “delivered”, or “undelivered” (after a delivery attempt). This can be displayed on the schedule and used downstream in information processing. The digital manifest is processed in the MoDe:Link software to cluster the items as explained herein and to provide a schedule for their delivery using the bikers/walkers and the van driver. The digital manifest is used as an input for the schedule generation alongside a list of vehicles, a list of users, details of user-vehicle assignments, a warehouse address, any other relevant addresses other than warehouse and delivery addresses (for example, reload locations that are known to be suitable to facilitate reloads), and a schedule description (which may simply be in the form of a schedule name and date).
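Reading such a manifest is straightforward with a standard CSV reader. The column names below match those listed in the description; the sample row itself is fabricated for illustration, like the addresses in the original examples.

```python
import csv
import io

# A one-row stand-in for the .csv manifest of FIG. 16 (fabricated data).
manifest_csv = io.StringIO(
    "identifier,customer_address,customer_name,phone_num,length,width,height,weight\n"
    "1234567890,333 Camden Passage,CUST001,07000000000,0.9,1.0,0.3,2.9\n"
)

items = []
for row in csv.DictReader(manifest_csv):
    # Dimensions and weight arrive as text; convert them for later packing.
    for field in ("length", "width", "height", "weight"):
        row[field] = float(row[field])
    row["status"] = "arrived"  # initial status on arrival at the depot
    items.append(row)
```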
The list of vehicles contains details of the vehicles available to perform item delivery. This may include, for example: the current location of the vehicle; the maximum permitted distance of the vehicle (that is, the pre-set distance that the vehicle is permitted to travel in a single outing); the maximum permitted speed of the vehicle (for use in scheduling calculations as described elsewhere); the physical dimensions of the vehicle's compartments (compartments that items to be delivered are stored in during transport); the maximum total weight that the vehicle is permitted to carry; and the maximum number of individual items. Additionally, the list of vehicles may include an estimate for the operational cost of the vehicle per unit time, which may be used to estimate the total cost of the delivery procedure. This value may be determined, for example, by considering historical costs of deliveries. The details of user-vehicle assignments may include, for example, mappings between available users and the vehicles that they are permitted to operate. An example excerpt of a generated schedule is shown in FIG. 17, as displayed in a standard spreadsheet format. Columns A to J are shown on the left of the page, and columns K to U on the right of the page. The schedule may be generated in any tabular data format. For example, the schedule may be generated as a .csv file, a tab-separated value file, or a space-separated value file. The shaded, left-most column contains unique line numbers for each entry in the schedule. The shaded, top-most row contains alphabetical identifiers for the columns of the schedule. These numerical and alphabetical identifiers are not included in the tabular schedule, but are provided as a feature within many examples of spreadsheet software (for example, Microsoft's Excel). The first entry (line) in the schedule may be a header, which provides variable names indicating what the values in each column represent.
In the schedule excerpt of FIG. 17, the header containing the variable names is contained in row 1. For readability, only a limited number of schedule entries are displayed. The example entries correspond to the entries of the schedule from an arbitrary starting point within that schedule; that is, the examples of FIG. 17 do not necessarily correspond to the very first entries of the schedule excerpt. Note that the example addresses given here and elsewhere do not correspond to real addresses but are fabricated for the purposes of providing an illustrative example. The column entitled “schedule_set_description” (column A) contains a descriptive name of the schedule. The value may be user-determined or automatically generated. This may correspond to the name of the digital manifest from which the schedule has been generated. While the schedule excerpt of FIG. 17 suggests that just one digital manifest was used in the generation of this schedule (MANIFEST 1), multiple digital manifests may be used in the generation of a schedule. The column entitled “user” (column B) contains the name of the user (delivery agent) or the vehicle to which the entry refers. User A, for example, is a different individual to user B. VAN 1, in this instance, is a vehicle (parent agent). Further, a vehicle may be operated by an individual who has their own user name. The column entitled “user_ID” (column C) contains the unique identification of the user involved in this particular entry of the schedule. It may be that just one of the “user” and “user_ID” values is included in the schedule. In this way, the size of the schedule (in terms of necessary computational storage space) may be reduced. The column entitled “schedule_sequence” (column D) contains an integer value denoting the number of the event for the particular user. The counter begins at a value of zero and increases by a value of one with each successive event for that user.
For example, the first entry in the schedule excerpt (row 68) is the third event associated with user A. The column entitled “event_type” (column E) contains the title of the event that the entry describes. For example, the first entry of the schedule excerpt (line 68) indicates a “transfer” event is to occur: this involves the transfer of an item/bundle/delivery vehicle from one user/vehicle to another, and thus could involve the transfer of one vehicle (bike) to or from another (van) in which the first vehicle travels. As another example, the second entry (line 69) of the schedule excerpt indicates a “go_to” event is to occur: this involves the movement of the user/vehicle from one location to another. As yet another example, the third entry (line 70) of the schedule excerpt indicates a “wait” event is to occur: this involves waiting at the same location until another user/vehicle arrives in order to perform a successive event. As a final example, the eleventh entry (line 78) of the schedule excerpt indicates a “delivery” event is to occur: this involves the delivery of an item to a recipient (consumer). Other event types may be included. For example, a “pick_up” event may correspond to the collection of an item, for instance from a customer, to be transported back to the original supplier (i.e. returning an item), or to a van picking up compartments from the distributor. The column entitled “start_time” (column F) contains the time at which the event of the entry is due to start. Here, this is displayed as a cumulative second counter relative to the anticipated commencement of the schedule (or, alternatively, relative to the creation of the schedule). It may alternatively be depicted using any means of conventional date and time notation. The column entitled “end_time” (column G) contains the time at which the event is due to finish.
Again, here this is displayed as a cumulative second counter relative to the anticipated commencement of the schedule (or, alternatively, relative to the creation of the schedule). It may alternatively be depicted using any means of conventional date and time notation. In the case that no time passes during the event, this value may be the same as the start_time value. For example, the finish time of a “wait” event may be understood to correspond to the time of the next, non-wait event. That is, a “wait” event is an instantaneous event, unless it has been modified by the durations service, as explained in more detail previously. In the example shown, a wait event has been added to the stop events for the van for each delivery agent. Zero-duration wait events may be deleted from the schedule once the durations service has run, or may be retained for possible use in real-time adjustments. The column entitled “duration” (column H) contains the time that the event (entry) is due to take. For example, a “transfer” event may be allocated a value of 120 as it is expected a user will take 120 seconds to perform the item/bundle/delivery vehicle transfer. Alternatively, a “transfer” event may be allocated a value based on the weight/dimensions of the item/bundle/delivery vehicle to be transferred. As discussed in the previous paragraph, a “wait” event is allocated a value of zero, at least initially. A “go_to” event may be allocated a value corresponding to the anticipated time required to reach the destination; this may be based on historical data of that particular user or address and/or may be calculated using map data (for example, using address parsing and/or using AI algorithms to analyse imagery, possibly even using secondary data sources such as estate agents' adverts). Further, this value may consider, for example, the need to use a staircase/lift if the destination is not on the same level as the building entrance.
Alternatively, a “go_to” event may be allocated a value of zero if the user is already at that destination (for example, following a “wait” event). A “delivery” event may be allocated a standard value of 120 as it is expected that a user will take 120 seconds to perform the delivery. Alternatively, a “delivery” event may be allocated a value based on historical delivery speed data of that particular user/agent/address. The duration value for an entry, in combination with the start_time value for the same entry, may be used to calculate the end_time value for the same entry. Therefore, to reduce schedule size, it may not be necessary to include all three time values in the schedule. The column entitled “completion_timestamp” (column I) contains the time at which the event (entry) of the schedule is actually completed. This field is updated when it is detected—either through manual user input (for instance, entry on the GUI of a mobile device) or through determination based on user location data (for example, through geofencing)—that the event has been completed. In this way, it is possible for others viewing the schedule to establish the status of the delivery schedule. Further, these values may be used to provide more accurate event duration estimates in future generated schedules. The column entitled “address” (column J) contains the address at which the event (entry on a single line) is to occur. For example, in the first entry of the schedule excerpt (line 68), the transfer involving user A is to happen at the address “333 Camden Passage”. The column entitled “customer_name” (column K) contains the name or unique identifier of the customer to whom an item is to be delivered. A value is only provided in cases where the entry corresponds to a delivery event or a transfer event that involves the transfer of an individual item (i.e. in the initial packing of a bundle).
In this schedule excerpt, the customer name is provided as a randomised string of numbers, but it may be that the full name of the customer is provided to enable personalised delivery of the item. The column entitled "customer_address" (column L) contains the address to which the item is to be delivered. As with the customer name, a value is only provided in cases where the entry corresponds to a delivery event or a transfer event that involves the transfer of an individual item (for example, in the initial packing of a bundle). The column entitled "from" (column M) contains the name of the location/vehicle/user from which the transfer is to occur. The column entitled "to" (column N) contains the name of the location/vehicle/user to which the transfer is to occur. A value is therefore only provided in cases where the entry corresponds to a transfer event. For example, the first entry of the schedule excerpt (line68) indicates that the transfer is to occur from VAN 1 to user A. The column entitled "from_bundle" (column O) contains the name of the bundle from which an item delivery is to occur. A value is only provided in cases where the entry corresponds to a delivery event of an individual item from a bundle. For example, the eleventh entry of the schedule excerpt (line78) indicates that delivery is to occur and that the item to be delivered is stored in the bundle labelled BAG Z. The column entitled "to_bundle" (column P) contains the name of the bundle that is being transferred in transfer events involving the movement of an item between bundles or transfer events involving the initial packing of a bundle. A value is therefore only provided in cases where the entry corresponds to a transfer event. The column entitled "bundle" (column Q) contains the name of the bundle that is being transferred in transfer events involving the movement of entire bundles between vehicles/users.
For example, the first entry of the schedule excerpt (line68) indicates that the bundle BAG X is to be transferred from VAN 1 to user A. The column entitled "vehicle" (column R) contains the name of the vehicle that is being transferred in transfer events involving the movement of a vehicle from one vehicle to a user or to another vehicle. For example, it may be the case that a delivery van contains a delivery bicycle, which is to be transferred to a user who—until this point—has been on foot, or who is about to start work. The column entitled "item_size" (column S) contains the physical dimensions and the mass of the item to which the entry (event) is directed. A value is only provided in cases where the entry represents either a delivery event or a transfer event that involves the transfer of an individual item (for example, in the initial packing of a bundle). For example, the eleventh entry of the schedule excerpt (line78) indicates that the item that is to be delivered has dimensions of 0.9×1×0.3 units (each individual number here corresponds to a length in units of metres) and weighs 2.9 kg. The column entitled "item_identifier" (column T) contains an identifier for a package/parcel such as a bar code. The column entitled "item_comments" (column U) contains comments relevant to the item. These comments may be, for example, customer-supplied delivery instructions or comments noting the fragility of the item. FIG.18is a screenshot of a web-based GUI, illustrating a mapped delivery schedule for multiple users. It may be used at a central control, for example on a PC, or by a van delivery agent. At the top of this example is a banner, which contains multiple clickable buttons.1802displays the name of the current schedule; the drop-down option displays a list of schedules that are available to load.1804displays the date of the current schedule; the calendar button brings a calendar into focus to enable the user to choose a schedule by date.
Icon1806, on selection, loads the map screen, which will be described shortly. Icon1808, on selection, loads an interface for managing digital item manifests. Icon1810, on selection, loads an interface for managing schedules. Icon1812, on selection, loads an interface for generating schedules. Icon1814, on selection, loads an interface for providing detailed analytics relevant to the delivery. Icon1816, on selection, loads an interface for viewing and editing the list of available vehicles. Icon1818, on selection, loads an interface for viewing and editing the list of users and the user-vehicle assignments. Icon1820, on selection, loads an interface for viewing and editing a list of orders (i.e. a list of individual items and their delivery address, recipient, etc.). Icon1822, on selection, loads an interface for viewing and editing organisations (i.e. any warehouse addresses). Icon1824, on selection, loads an interface for viewing the GUI's current user's preferences and provides functionality for logging out of the system. The large map screen under the banner demonstrates a mapped delivery route, which indicates the scheduled route for the parent agents and delivery agents. For example, the lines (overlaying the roads of the map) indicate the routes as determined by a TSP algorithm during schedule generation for 7 different agents: 6 walkers (with a current position illustrated by a human icon) and 1 van and driver (with a current position illustrated by a van icon). Solid lines are used to represent any van's scheduled route; dashed lines are used to represent any biker's scheduled route (note that there are none displayed here); and dotted lines are used to represent any walker's scheduled route. Different colours may be used for the route of each agent to improve readability. Each route interconnects event markers for the relevant user.
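The line-style conventions just described—solid for vans, dashed for bikers, dotted for walkers, with a different colour per agent—amount to a simple lookup. The sketch below is illustrative only; the function name and the colour palette are assumptions, not taken from the GUI itself.

```python
# Line style per agent type, as used for the route overlays on the map.
LINE_STYLES = {"van": "solid", "biker": "dashed", "walker": "dotted"}

# An illustrative palette (assumed); a real implementation might generate
# or cycle through a larger set of visually distinct colours.
PALETTE = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd", "#8c564b", "#e377c2"]

def route_style(agent_type: str, agent_index: int) -> dict:
    """Return the drawing style for one agent's scheduled route:
    the line style follows the agent type, the colour the agent index."""
    return {
        "line_style": LINE_STYLES[agent_type],
        "colour": PALETTE[agent_index % len(PALETTE)],
    }
```

With the seven agents of the example (six walkers and one van), each agent would receive a distinct palette colour, and only the van's route would be drawn solid.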
For example, an event marker containing a downwards arrow indicates that the user is scheduled to perform a delivery at this location. Similarly, a sideways double-headed arrow indicates that the user is scheduled to perform a transfer at this location. Finally, an event marker containing a circle is used to indicate the user is scheduled to wait at this location. Different colours may be used to indicate the current status of the event. For example, a green event marker may indicate that the event has been completed; a blue marker may indicate that the event is the current (i.e. the next to be completed) event for the user; and a grey marker may indicate that the event is yet to be completed (i.e. it is a future event in the agent's schedule). Additionally, icons representing such significant locations as the delivery warehouse or viable reload stations may be displayed. Advantageously, it is possible to bring up further details of the event and/or user by clicking on the relevant icon. By communicating with the delivery agents and/or any GPS units held by the delivery agents, it may be possible to update the agents' current locations and event statuses in real time. Additionally, functionality is provided to view only selected agents, to zoom in and out of the current view, pan between map locations, and modify the layering of the underlying map (for example, opting to view satellite images rather than graphical depictions of the area). FIG.19is a screenshot of a mobile application-based GUI, illustrating only the events relevant for a particular user (the user operating the mobile phone). At the top and bottom of this example are banners, which contain multiple clickable buttons. Icon1902, on selection, toggles the visibility of the scheduled routes and events of all other members of the current user's team. Icon1904, on selection, highlights the user's scheduled stops on the map. Icon1906, on selection, hides the user's scheduled stops on the map. 
Icon1908, on selection, loads an interface providing an overview of the user's schedule. Icon1910, on selection, indicates the progress of the user's delivery schedule. Icon1912, on selection, loads an interface providing access to the functionality of a connected smart device or wearable. Icon1914, on selection, loads an interface for viewing and editing the user's settings. Between the top and bottom banners is a map screen, which contains event markers for the users as previously described. Also indicated are the current locations of a van and van driver and of a station suitable for reloading the delivery agent's compartment. The current location of the user of the mobile phone may also be illustrated as a van or person or bike icon, for example. Functionality is provided to zoom in and out of the current view and to pan between map locations. Additionally, a navigated route between event markers may be provided. FIG.20is an alternative GUI presentation, with schedule information which may be selectable from the GUI screen ofFIG.19using icon1908. This GUI presentation is suitable in particular for a delivery agent. The scrollable screen shown on the left is a high-level view of the individual schedule in a time-sequence of banners (or horizontal data strips) from the top of the screen to the bottom of the screen. A top, glimpsed banner indicates a completed event. The next event, at 9:22, is a travel event from the current location to a transfer location, to load a bundle of items. The current time is shown as a line above this first event. The next event is the transfer of a bundle from parent agent "Otis" at 9:25. The subsequent four banners are deliveries to different addresses, at 9:25, 9:32, 9:38 and 9:41. Travel events are not shown in this view. Finally, there is a wait (just shown) for a reload. At the bottom of the screen, an input field allows the user to return to the current event.
The different kinds of events can be depicted in different colours and have different icons. For example, a go to event icon may have an arrow icon pointing up to the right, a transfer event icon may have two horizontal arrows slightly vertically offset and pointing towards each other, a delivery event icon may have an arrow pointing downwards to a line and a wait event icon may be a simplified clock face. A vertical timeline to the right of the banners may be provided as an abstract overview of the day's events, with a quick navigation handle. The four screens on the right are further screens for the different events, reached by tapping on the events in the overview screen on the left. The upper left of the four screens is a go to screen (as shown in the title banner) which in the top half gives more details of the route to a delivery with the starting and ending addresses, and below that the timing and distances. Two input fields allow the user to start navigation instructions ("start now") or show the route on the map ("show on map"). The upper right of the four screens is a transfer screen (as shown in the title banner). This screen has the start and end times below the title, and the bundle number, followed by the address, then identification of the parent agent that the bundle is being transferred from (including a picture) and at the bottom of the screen, an input field to show the location on the map ("show on map"). The lower left of the four screens is a wait screen (as shown in the title banner). This screen again has the start and end times below the title, followed by the address, identification of the parent agent who is to arrive (including a picture) and at the bottom of the screen, an input field to show the location on the map ("show on map"). The lower right of the four screens is a delivery screen (as shown in the title banner).
This screen has the start and end times below the title as before, and the bundle number, followed by the delivery address, and at the bottom of the screen, a "start now" input field to enter a completed delivery and an input field to show the location on the map ("show on map"). FIG.21illustrates the software system architecture used for MoDe:Link with the modules necessary for execution of the software. On the left, the user applications are shown in the form of a web application (for use at a central control, for example) and a mobile application, with various different functionalities. The mobile application may utilise Mapbox, which provides in-app turn-by-turn navigation. There may be a link to a signalling device on a trolley or vehicle (to aid the delivery agent, and passers-by, to see where the trolley or vehicle is to go) or to any smart wearable technologies, such as a smart jacket which provides navigational instructions/signals, for example by use of lighting and/or haptic devices on the jacket body and/or sleeves. The client Application Programming Interface (API) links to data storage and via an SQL database to a web-hosting and data storage service such as Amazon Web Services. There is also a custom user authentication and authorization block with cloud authorization. Schedule generation is connected to the client API and to modules (microservices) used within the software which together create the schedule. Bin packing may use a 3D packing program on the cloud. Delivery Times and Distances are custom-built modules and Distances links into a geocoding, mapping and routing provider such as Graphhopper or HERE Maps. A Clustering module is also custom built and a Durations module may be provided to vary delivery time using data in the manifest. The custom-built TSP module refers to Graphhopper or another routing service. Error logging may be provided by such a service as Splunk, which provides system logging on the cloud for the microservices.
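The modules above suggest a pipeline in which schedule generation invokes each microservice in turn. The sketch below is a hypothetical illustration of that flow: the function names are assumptions, and the in-process stand-ins (first-fit packing and nearest-neighbour ordering on one-dimensional positions) substitute for the remote 3D packing service and the Graphhopper/HERE routing services.

```python
def bin_pack(item_weights, capacity):
    """First-fit stand-in for the Bin packing service: pack item weights
    (e.g. kg) into bundles that each stay within the given capacity."""
    bundles = []
    for w in item_weights:
        for b in bundles:
            if sum(b) + w <= capacity:
                b.append(w)
                break
        else:  # no existing bundle has room: open a new one
            bundles.append([w])
    return bundles

def solve_tsp(stops, start):
    """Nearest-neighbour stand-in for the TSP module, on 1-D positions."""
    route, remaining, pos = [], list(stops), start
    while remaining:
        nxt = min(remaining, key=lambda s: abs(s - pos))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

def generate_schedule(item_weights, capacity, stop_positions, depot=0):
    """Orchestrate the (stubbed) services: pack items into bundles, then
    order the stops; a real system would also call the Clustering,
    Distances and Durations services over the client API."""
    return {
        "bundles": bin_pack(item_weights, capacity),
        "route": solve_tsp(stop_positions, depot),
    }
```

For instance, `generate_schedule([3, 4, 2, 5], 6, [5, 1, 3])` packs the four items into three bundles and visits the stops in nearest-first order from the depot.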
FIG.22is a block diagram of a computing device, such as a data storage server which embodies the present invention, and which may be used to implement a method of an embodiment of arranging delivery at a central control point. The computing device comprises a processor993, and memory,994. Optionally, the computing device also includes a network interface997for communication with other computing devices, for example with other computing devices of invention embodiments. For example, an embodiment may be composed of a server, connected to the cloud and providing schedule generation and a client API, a web application using the client API running on the server or on a separate terminal and mobile devices, connected to the cloud and also using the client API. Optionally, the computing device also includes one or more input mechanisms such as a touchscreen or keyboard and mouse996, and a display unit such as one or more display screens or monitors995. The components are connectable to one another via a bus992. The memory994may include a computer readable medium, a term which may refer to a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) configured to carry computer-executable instructions or have data structures stored thereon. Computer-executable instructions may include, for example, instructions and data accessible by and causing a general purpose computer, special purpose computer, or special purpose processing device (e.g., one or more processors) to perform one or more functions or operations. Thus, the term “computer-readable storage medium” may also include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methods of the present disclosure. 
The term "computer-readable storage medium" may accordingly be taken to include, but not be limited to, solid-state memories, optical media and magnetic media. By way of example, and not limitation, such computer-readable media may include non-transitory computer-readable storage media, including Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory devices (e.g., solid state memory devices). The processor993is configured to control the computing device and execute processing operations, for example executing code stored in the memory to implement the various different functions described here and in the claims. The memory994stores data (such as records and scheduling information) being read and written by the processor993. As referred to herein, a processor may include one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. The processor may include a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor may also include one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. In one or more embodiments, a processor is configured to execute instructions for performing the operations and steps discussed herein. Each module described above may run on the processor993as appropriate and use memory994.
The display unit995may display a representation of data stored by the computing device and may also display GUI components of the web application such as a cursor and dialog boxes and screens enabling interaction between a user and the programs and data stored on the computing device. The input mechanisms996may enable a user to input data and instructions to the computing device. The network interface (network I/F)997may be connected to a network, such as the Internet, and is connectable to other such computing devices (such as mobile devices using the mobile application) via the network. The network I/F997may control data input/output from/to other apparatus via the network. Other peripheral devices such as microphone, speakers, printer, power supply unit, fan, case, scanner, trackerball etc. may be included in the computing device. Methods embodying the present invention may be carried out on a computing device such as that illustrated inFIG.22. Such a computing device need not have every component illustrated inFIG.22, and may be composed of a subset of those components. A method embodying the present invention may be carried out by a single computing device in communication with one or more data storage servers via a network. The computing device may be a data storage server itself, storing the scheduling information and navigation and other instructions to agents. A method embodying the present invention may be carried out by a plurality of computing devices operating in cooperation with one another.